Nigel Cassidy (NC): Is AI paying for its keep in your organisation? Here are ideas for making responsible, profitable and more mature use of AI. I'm Nigel Cassidy and this is the CIPD podcast. Now everyone seems to be using, testing, or at least dabbling in artificial intelligence. But all those must-have AI platforms and solutions are costly. The pressure is on to get tangible returns, achieving a state that's being termed AI maturity. And people professionals need to be at the heart of this, because while AI agents may not be employees in the legal sense, they're very much an extension of the workforce, doing complex, autonomous jobs, with implications for job roles everywhere.
So what does AI maturity mean in practice? And to use the old cliche, if achieving it is a journey, not a destination, then how can you tell where you are, how to go faster and avoid getting stuck in the AI mud?
Well, with us firstly from Nationwide Building Society, we have an AI people and comms leader firmly focused on what AI adoption may mean across his organisation and on ensuring it's brought in responsibly. He's Graeme Burns. Hello, Graeme.
Graeme Burns (GB): Hi there, Nigel. Great to join you.
NC: Next, a specialist who works with HR teams implementing AI solutions with clients including the likes of the Alan Turing Institute and the NHS. He's also taken pains to tell me he hails from the UK West Midlands. In spite of his transatlantic accent, it's Ben Redshaw of Orchard Tree Consulting. Hello, Ben.
Ben Redshaw (BR): Hi, Nigel. Great to be with you.
NC: And adding insight on all things AI in the workplace with a senior advisor dedicated to researching and explaining how technology can and is transforming how we work, it's the CIPD's own Hayfa Mohdzaini.
Hayfa Mohdzaini (HM): Hi Nigel, great to be here.
NC: So, getting the machines to work smarter, not always easy. Graeme, I just wonder what's the strangest or most memorable encounter or exchange you've ever had with AI?
GB: Goodness. Well, I'm going to take you back to an exercise that I used to do a couple of years ago in Nationwide, when I was first introducing AI into the business. I would take the HR teams through a role play that I would do with ChatGPT. And this was before ChatGPT had actually declared it had memory, or context memory as such, so it couldn't really remember outside of chats. In this role play I would basically pretend to be a colleague who was struggling with mental health challenges and mental health problems, and it would guide me through this thing. And people were amazed. Obviously, it seemed like a magic trick, this magic box of ChatGPT talking, some assuming that I'd recorded the whole thing.
And as part of this role play, it would say to me, why don't you do this? Or why don't you go out for a walk? And I'd say, yeah, sure, I'll take my dog for a walk or whatever. And then a few weeks later, it suddenly remembered that I had a dog from when I was doing this role play. Before I'd even got to the point where I'd introduced the issue, it said to me, hey, why don't you take your dog out for a walk, Graeme? And at that point, I was kind of blown away, because ChatGPT didn't have any memory declared at the time.
And I interrogated ChatGPT and said, how can you do this? And it said, I shouldn't be able to do this. And I guess it was a really interesting introduction for me, because it was a good example of how LLMs can hallucinate, and also how they might potentially stretch the truth a little bit in terms of telling you what's actually going on with the information that you're giving them.
And it opened the door for me to a lot of the challenges around LLMs and so on. And actually, I was able to use that as part of my ongoing example to explain to people the mysteries of how you use AI and some of the challenges that you might find in using it.
NC: So Ben, have your experiences with AI always been quite as Zen?
BR: Actually, they have. I'm going to give you two light examples. The first was in the early days of ChatGPT coming out. It gave me a stunningly accurate set of reasons why my American football team of choice never wins. As a long-suffering fan, I could validate that everything it said about the ownership and the franchise was correct.
And then more recently, I tell you, it really helped me prepare Boxing Day lunch. We had my daughter and her partner over, and I wanted to prepare a couple of different things that had different cook times. When they arrived, I didn't want to be tied up with prepping everything. It actually gave me a wonderful schedule, including me uploading photos to show relative levels of doneness and how much fat I needed to skim. So two examples from me on the lighter side of AI.
NC: Very good. And finally, Hayfa.
HM: For me, it was in my moment of need. This was years before ChatGPT became mainstream. I was in Japan, I was hungry, I couldn't read the ingredients in the food, and my brother, who speaks Japanese, wasn't contactable. So I thought, oh, hang on, there's this translate app. Let me have a go. And it turned out that actually you could take pictures with it, which was a massive bonus, because Japanese writing is pictorial. I took a picture of it and, lo and behold, I could tell whether I could eat the food or not. And I wish I'd tried this a week earlier when I started my holiday, not at the end of it.
NC: Lots of different uses of AI. We've all got a story, I suppose. But Hayfa, mature AI. What do we mean by that?
HM: So two ways to read this. If you say mature AI, you mean the technology is mature. But if you're talking about AI maturity, often we mean how mature you are or how mature your organisation is in getting value out of AI, where you are in that journey. Are you an early, intermediate, or advanced adopter of AI?
NC: And so, Hayfa, I mean, we're kind of assuming there that mature use of AI is a good thing. I mean, is there concrete evidence that organisations that are mature in their use of this technology are more successful, more profitable?
HM: Obviously, by definition, if you can get value out of AI, you should be able to be more successful. But using AI alone does not indicate maturity. In a recent CIPD survey on AI skills, we found that although almost 6 in 10 organisations are using AI, when we measured their organisational readiness, most of them were actually at low readiness, so they don't really have the infrastructure or the workforce capabilities to get the full value out of AI.
NC: Okay, so Graeme Burns, I'm wondering, what is the maturity mission for you? I mean, you're one of this ever-growing band of HR leaders who also wear an AI hat. I mean, what are you trying to do, I mean, at the Nationwide, this sort of quest for mature and responsible use of AI?
GB: Well, I wouldn't go as far as saying it's a quest for mature use of AI, because we're very attached to our purpose in our organisation, which is banking, but fairer, more rewarding and for the good of society. And every organisation, or most organisations at least, will have their own purpose. I think probably one of the lessons that we learned quite early on, Nigel, was to start to think about whether the AI that we were actually using and deploying in our organisation would actually serve our purpose.
And I think one of the challenges a lot of organisations have had with ROI is that, quite often, in the very heavy sales environment pushed by AI companies, and by software companies adding AI aspects to their software, there is a scramble to deploy stuff that may or may not actually serve the purpose of your organisation. And that's one of the very simple learning points that I guess we took on board at a very early stage.
Because once we started doing that, we found that we were able to pick up software that was much more aligned, and we could actually explain to people why they might be using it within the organisation, which again seems like an unbelievably simple thing to be saying. But actually, if you go around and look at a lot of the AI that's being deployed in organisations at the moment, you're probably going to find that some of it doesn't serve their purpose. It might help out with individual tasks or with individual components and so on. But when you've got a very scarce resource set to do this kind of stuff responsibly, you really need to be much smarter with where you deploy that resource set and start thinking about, actually, is this stuff really going to help us get to where we want to get to in the end?
So I guess we don't tend to think of AI as like we're trying to reach a point of maturity in AI. It's like, how can AI help us deliver our purpose? And I guess when I speak to people who are trying to do this and getting lost in the maze of it, this is the first point I often come back to them with. It's just to say, what's your actual purpose? What's this software doing for you? Are you able to explain that to your people in a way that they understand it and therefore that they will buy into whatever it is that you're going to deploy?
NC: So Ben Redshaw, do we really know yet the general business areas, the organisational functions, where AI agents have matured, where the use of them is making a better return on investment?
BR: Yes, we do. And I think what's important to remember when we talk about AI maturity is that we're very early in this wave of technological advance that AI is bringing us. It's as if we're still in the first days of having a website, with no concept yet that we would have apps, that we would order dinner on our phones. And I think about AI the same way: we're very much at the early stages of the technology. A bit of data I was looking at recently had two-thirds of companies either piloting or beginning to implement AI in their HR processes, with a third at the further end of the scale, operating or optimising with AI. So very early on.
But specific to your answer of a return, yes. And IBM is a really good example. They started their AI journey in 2017. They have more than 250,000 employees worldwide. And they were looking to tap into technology to free up some HR resources to redeploy. Their HR professionals were handling more than one and a half million employee questions on an annual basis. And so they developed a tool called AskHR, which was one of the early versions of the digital assistant. And it was a large language model trained to answer employee questions about policies and benefits. So that was 2017.
Fast forward to 2023, on which they released results in 2024. In 2023, employees interacted with AskHR more than 10 million times. It successfully fielded 94% of the questions posed and transferred the more complex ones to in-person professionals. It processed 765,000 employee transactions a year, and it also reduced the time to process them, taking things like generating an employee verification letter from three days to two minutes. And the final measure of success they would point to is that their HR customer satisfaction, measured as part of their net promoter scores, has risen from plus 19 to plus 74. So there's a return in there, I think they would argue.
NC: And Hayfa, things seem to get trickier where human judgment is required of the AI agents. You can't set it and forget it.
HM: That's right. So you need to assess the risk level, and where human judgment is essential, whether on a legal basis or from a reputation point of view, you do need to build human verification into the process.
So for example, in our action research with a law firm, they have this mantra: verify, verify, verify. Whatever their legal chatbot produced, the juniors were expected to verify, and their seniors would also model this behaviour and coach their juniors as well.
NC: Because in fact, Graeme, these custom-built agents that do a job within an organisation are pretty difficult to build and very expensive. So I wonder if you could talk us through the process that goes between seeing something that goes on in your organisation and the feeling that AI can make a difference, and then finding that the pilots don't quite work out, or people drop the idea after they've already spent a lot of money on it. How do you start the process and then see it through?
GB: We went to quite a lot of lengths to build an AI governance setup, which I think a lot of big organisations will have themselves at the moment, based off our strategy and a series of principles, which we extended to cover some additional elements, like sustainability, because we felt that was a really important principle. And then what happens is, typically, the business will bring us use case ideas.
They might be brought to them by a software house who've added an AI component, or it could be that they have an idea themselves for an innovation. And we will go back and look at a library of other AI tools that we've already built and consider whether any of those components can be reused. Or we might actually assess the tool or piece of software itself.
Then we developed an AI council, which is a multidisciplinary council. Across that council, we have representatives from legal, data privacy, various technical assurance, strategy, security, HR, et cetera. And we assess the use case. Effectively, the person will bring the use case, we'll ask them to complete a questionnaire with 50 or 60 questions on it, and that gives us enough information to be able to start looking at whether the use case will work or not.
And then we go into a proof of concept phase, and that proof of concept will effectively kick the tyres on that particular solution. We have a model management team, which we've developed over a number of years, who are the real experts in how these AIs work. They really understand what's going on in the black box. And sometimes, depending on the nature of the AI solution, we will send that team in to the supplier to actually interrogate the tools and the systems and controls around them, to try and understand how it all works.
So it's actually quite a lengthy process. It's very difficult to do AI responsibly; that's my current view. It's not something that you can rush into, particularly if you're managing multiple use cases. We have 130 use cases at the moment in the organisation. And we are very conscious about our organisational reputation, very conscious about the security aspects of this, obviously, as you might expect from a financial services perspective. So all those things are really important, but they're not just important on day one, they're important on an ongoing basis.
So it's really important that we then establish who's accountable for the system. Do they understand its outputs? Who's checking the controls? Is all the training in place? All of these things are super important, not just on day one but on an ongoing basis, because these tools move over time. There's a thing called model drift, which means that the AI models will often move and evolve as time goes on.
NC: So Ben, I'm sure some of that you recognise. Talk about the sort of things that you've seen as organisations try and bring in more AI and make more mature use of it.
BR: There's a process that organisations go through as they take the technology on, based on their level of maturity, and I think we're realising that Graeme's point around governance is very important. There's a framework that I keep in mind that works for me. It's called the FACT framework: fairness, accountability, culture fit and transparency. Fairness is ensuring that the AI tools don't have an adverse impact on anyone. Accountability is ensuring that we all know who owns the tools, so if we see an issue with them, we know who to go and talk to. Culture fit is how we're educating everyone on how we bring these tools into the workplace and have them be a useful addition as opposed to something that gets in the way. And transparency is how we use these tools, what we use them for and what we don't use them for.
So we're at this very important point, not unlike when we introduced the internet into the workplace, in my mind. Same kind of decisions that we're having to make. And we want to manage this without constraining innovation.
GB: It's also a learning process for most organisations: establishing that feedback and learning loop, not just when you go through the original use case assessment, but getting feedback from the users themselves so that you really understand what's going on on the ground, feeding that back into other projects, updating your overall assessment process, and making sure that you're complying with regulations, because the regulations in this space are changing all the time. And trying to understand some of the security issues in particular, because that's a real concern, obviously. So we feel that having that learning loop established is really important.
So when you think about the role of HR in establishing the right culture in an organisation, being able to create an environment which is safe, where people can speak up and talk about the concerns that they might have, is super important. And it's a really critical role that HR can play, both in establishing the culture and in establishing the kind of learning culture that enables that feedback loop to work successfully.
NC: In fact, perhaps we can ask Hayfa about that process as she's seen it play out in a number of organisations. As we've heard, getting the AI to work is the tech piece, but it's not going to mature without all kinds of other things, like behavioural change in your people. Maybe those are harder bits than the actual technology.
HM: So if we think more broadly in terms of technology, and go back to our CIPD guide on how to choose the right technology for a business, we talk about a concept that academics discuss: the alignment of people and technology with the task, the culture and the structure. All that needs to be aligned in order to get value out of whatever you're investing in, whether it's AI or the internet or something else. By structure here, we mean policies, processes and your organisation's structure. All these need to be aligned.
NC: So let's ask both Graeme and Ben a bit more about that kind of process of working ideas out, bedding them in, getting people on side.
GB: So I think what we've learned from our implementations is that this is very different to your average IT implementation, because effectively you're introducing a participative intelligence into your organisation. It's not like deploying a piece of IT where you know exactly how it's going to work on day one and you can expect certain outputs from it. When you're using AI, what you're going to find is that on day one it does something, and by day 100 it's doing 1,000 different things that maybe you weren't anticipating. Right, so that is a very different way of looking at a project and building a project from scratch. Quite often the deployment itself can be relatively quick, but it's all the after-deployment implementation time that's actually really, really important.
So having that employee involvement, having that base to build from, making sure that you have managers who can work quite closely and coach and facilitate that change within the organisation is something that we've found is going to be super important. Again, it's a really important role for HR to play in getting managers ready and giving them the skills to be able to coach people as they use this new technology.
And that's very different to the systems we were used to in the past, which were very clearly defined: you'd put in A and B would come out. When you look at some of the projects that we've had, for example with our software engineers, which we've done a piece of research on with the CIPD, what we found is that initially engineers were doing certain things with it, and then as time went on, they were finding new ways to use it, new ways to improve it. And actually, peer-to-peer learning would have been a really beneficial thing to have set up as people went on that voyage of discovery at different paces.
And again, it's a really useful thing for HR practitioners to think about: what can they do to support that transition and that change implementation cycle? Can they get in and support the managers to have the skills initially? Can they also set up peer-to-peer learning? Can they facilitate some of that activity? Can they do the reflective practice? Can they try and make sure that the organisation is truly learning from these implementations?
BR: I think there's an important piece to add as well, which is that it needs to be appropriate to the culture of the company. And every company's culture is different. So let me give an example.
Microsoft introduced AI into their employee engagement measurement process some time ago, and they collect two types of data: active data and ambient data. Active data is the kind of thing you would expect: an annual survey, daily pulse check surveys with randomly selected employees, exit interview feedback, that kind of stuff.
But the ambient data is interesting, and potentially controversial in other companies. For their ambient data, they look at passive signals: patterns of work that they can collect across the Microsoft 365 suite. So they can get a look at people's meeting loads, for example, the amount of after-hours work, the amount of focus time they have versus collaboration time. They can look at tone and emotion. And that is clearly something that works within the Microsoft culture. They're good with that.
But that level of insight would be uncomfortable for employees at other companies. Even though we might get better insights into turnover and burnout and things like that, the ambient data starts to feel a lot like listening in, depending on the culture that you're in.
NC: And Hayfa, this is heavy stuff for people in HR, perhaps especially those older hands who were around before all this was a priority. I just wonder whether you could say any more about the role that people professionals can best play in ensuring that AI is used in a mature way.
HM: So the way I would urge people professionals to think is to see where AI could impact your employees, whether it's in terms of their training needs, their job role and job tasks, or their terms and conditions.
So wherever AI intersects with employment matters and with people, that's where you can come in to help out and get the most out of AI for your business. Graeme earlier touched on dipping into the change management toolbox, seeing where you could facilitate conversations and peer-to-peer learning. Those are examples of change management skills that you could draw on. Another area is employment law. HR professionals generally have a good handle on employment law, and where organisations want to look at AI governance, that's where you can come in and provide that unique view, in terms of your knowledge of employment law as well as its implications for people and the workplace.
GB: I want to add one more thing to Hayfa's list of HR skills there, which is about trying to teach the organisation, by setting an example, around systems thinking: starting to think through what the second, third and fourth order effects of any AI implementation might be.
So I'm going to use a brief example, again from the software engineering activity where we had 800 engineers using AI tools. If we had thought about what the implications of that deployment would be, we might have anticipated some other things. For example, there are potentially different speeds of adoption of these AI tools across genders; there's quite a lot of research about that.
So you need to be thinking through, well, what could the potential impact of that be in my organisation if I put a tool into an organisation and certain people take it up quicker than others? Or maybe one team has the tool and another team doesn't.
NC: That sort of thing's been an issue in recruitment already, of course, hasn't it?
GB: Yes, and that speaks volumes, because I think recruitment has not necessarily done the reputation of AI any wonders and is still an ongoing challenge. Certainly I think a lot of organisations could do well to go back and look at their deployment of AI. Did they really understand what they were doing, and what the second and third order impacts of using these tools have been? I think we're now at a point where we could probably do some reflective practice on that, and look back and think: are we actually getting the quality of candidates we want? Who is being sifted out? Are tools being used that are preventing that? Have we ended up with homogeneous candidates because the tools just promote certain kinds of people who are great at CV writing?
And I think it's this kind of intellect and holding up of the mirror that the people function can do in an organisation that other functions aren't necessarily going to be able to do very easily. And that's why I'd encourage HR people to find a way to step forward into these AI implementations and start sharing these concerns around diversity, or challenges around what the unintended consequences might be, so that we can learn from that.
HM: So I do urge you to look at our new resources on the CIPD website. We've got a new AI governance guide. It's got a long title: it's called How people professionals can develop, deploy and use AI in an ethical, legal and sustainable way. Having good AI isn't just about having good security and ensuring fairness; actually, you need to have inclusive growth built into it as well.
NC: Ben, two or three things that people can start with to try and make some progress on this.
BR: First is if you're at the early part of your AI journey, it's helpful to think about what I call the AI sweet spot. It's that intersection of the kind of work that AI does really well with the prevalence of those tasks in HR.
One of the reasons we're seeing these chatbots used for answering policy and benefits questions is because that's an intersection: you've got a bunch of information, you've got a bunch of people that need to get something out of that information, and AI streamlines it. So I think that's the first point.
And then the second point I'd make is, what you do with the efficiency is going to be driven by the HR operating model that you have, which is driven by the culture of the company that you're supporting.
So if you're Klarna, you're at the "we're going to cut as many heads as we can to increase our operating income" end. That's their model, right, whether you think it'll be successful or not. On the other end of the spectrum, you've got very high touch HR models, because the nature of the workforce, of course, requires a high touch.
So every HR leader is going to need to take a look at AI and decide where the sweet spot is in their organisation: where is there repetitive work that AI does really well, and how will that be received within the operating model we have as HR and the way we deliver service to the business?
NC: Great. So a tip or two from you, Graeme.
GB: I think there are probably two or three lessons. First of all, buyer beware at the moment is a good strapline to have.
NC: It's a jungle out there. An AI jungle.
GB: It is, absolutely. And it's a trillion dollar bet by some very large companies that promise the earth. In our experience, when we have gone in and kicked the tyres on some of these models, they do not always do what you expect them to do. They will tell you that they do these things, but they don't necessarily, right? So be very careful in what you buy and what you get into.
I think there's an ongoing challenge too. We're in the era of free AI at the moment, and once these organisations float, they are going to want to get some money back. You could quite easily end up replacing some individuals or tasks in your organisation with AI which, you know, could have been automated or done in a slightly different way, and then you'll be beholden to these organisations. So be really, really careful about who you get involved with. And the key to being careful is actually going out and connecting with other people who are doing this work. Particularly if you're an HR professional, I would encourage you to use the CIPD and use your network within your HR world, and talk to other people who are doing similar stuff. I use this term of reduce, reuse, recycle.
So I would say if I was talking to the HR team, first of all, how many data sources have you got within your HR function, for example? Is there any way that you can reduce those data sources before you start getting into a world of putting AI on top of it? Because otherwise you're going to create some crazy plumbing and it's going to be much, much more complicated to manage. When I say reuse, it's like there will be other people who are doing really good stuff in other organisations or in your organisation. So go and find it. Use your network and try and get those components and reuse them, because then you won't have to go through the pain of learning what they've already learned and you might be able to kind of tailor that to your particular context.
And then if you do actually manage to deliver some of that stuff within your organisation, recycle it back into your organisation or out into your external network, right? We're all learning at exactly the same time how this AI deployment works. Just because something works in your sector, there's no shame in going out and speaking to somebody in a different sector and saying, hey, look, we did this. They may or may not be in competition with you. Share it, right? Because we're all learning about what some of these challenges are. We're all at the same point on this journey.
NC: And so briefly, anywhere people can go to really absorb a bit more background and help on this.
HM: Well, what I would say is, after the podcast, have a look at the CIPD AI in the workplace topic page. We've got lots of resources, insight articles and guides to help you on your AI maturity journey.
GB: Do you know, I'd encourage people to go back and listen to episode 43 of the CIPD podcast with Allyn Bailey, because I thought that was a really excellent view of the broader picture of AI. But also there are three or four really good AI maturity frameworks out there. If you already work with IBM or AWS, both of those organisations have exceptionally good maturity assessments that are benchmarked against other organisations, particularly IBM. And they will give you a sense of how well your organisation is doing across a number of different areas.
And then there are a couple of other UK-based organisations. Sopra Steria, who I know work widely in the public sector, have a self-assessment tool that's available. And there's a charity called Blueprint for Better Business. They have a different take on it, which is much more about how you think about AI in terms of social good, and the kind of things that you need to be thinking about as an organisation in terms of whether AI will actually help you deliver your purpose.
NC: Well, it's been great to speak to the three of you. So a big thank you to Graeme Burns from Nationwide Building Society, Ben Redshaw of Orchard Tree Consulting, and the CIPD's own Hayfa Mohdzaini. It's pretty clear from what we've heard today on AI maturity: it may start with mastering the technology and the data aspects, but it really needs everybody to understand what the AI is going to do and get behind it strategically, starting from the top with leaders, plus all that governance, workforce capability and that all-important cultural buy-in. But certainly some brilliant pointers we've had today.
Where can you get more help? We've already heard a lot about that. As part of the government-funded Bridge AI Innovate UK programme, the CIPD has worked with partners to create some useful insights and resources on the challenges around AI adoption and the opportunities it can bring. You can find that if you look up Bridge AI Innovate UK online. And meanwhile, the CIPD has already put out a series of case studies and insight articles on effective and people-centred AI adoption, with more to come out on AI skills, planning and governance.
So watch out for that. We're back next month. Until then, from me, Nigel Cassidy, and the whole podcast team, it's goodbye.