AI in the workplace: what’s ethical?
Explore the benefits of using AI in the workplace and the ethical and governance issues
How should workplaces regulate the use of artificial intelligence (AI)? Following the rapid advances in AI and its increasing mainstream use, employers must future-proof their business and keep up with the pace of change. AI has huge potential in the workplace, in HR processes such as recruitment and training, as well as employee engagement. But challenges, including ethical risks, data security and accountability, need to be overcome.
Our panel of experts discuss how to manage AI use in your organisation.
Our panel includes:
David D’Souza, Membership Director, CIPD
Dr Abigail Gilbert, Director of Praxis, Institute for the Future of Work (IFOW)
Andrew Stephenson, Chief People Officer, Equiniti
Chaired by Katie Jacobs, Senior Stakeholder Lead at CIPD
Katie Jacobs: Afternoon everybody, it is midday, so I'm going to kick us off. Welcome to this CIPD webinar. My name is Katie Jacobs, and I'm going to be guiding you through the next hour or so as we discuss a topic that I know could well be both exciting you with the possibilities and keeping you up at night with worries about the potential risks and the sheer pace of change that we're all having to live through. That's, of course, the impact of AI on the workplace, and particularly what responsible and ethical use of AI looks like. Joining me to talk about this critical topic for the people profession, I have a panel of people who have been thinking deeply about how AI is changing the way that we work and how the people profession needs to respond. So today, I'm pleased to be joined by David D'Souza, Membership Director at the CIPD. I'm also joined by Dr Abigail Gilbert, Director of Praxis at the Institute for the Future of Work, and last, but not least, we have Andy Stephenson, Chief People Officer at Equiniti. Thank you all very much for joining us this afternoon. As ever, before I get into the topic, I'm just going to quickly run through some housekeeping. First up, to let you know that this session is being recorded; it will be available on demand via the webinar section of the CIPD website. You'll be able to access all our previous webinars there, as well as find out about anything that is upcoming. If you'd like to submit questions to the panel, and I'm sure you've got a lot, and I really would encourage you to make use of the people we have here today, then can I ask you to please type into the Q&A box rather than the chat box, as we won't be monitoring the chat box that closely. So use the Q&A box if you've got a question that you want me to put to the panel. If you do want to chat to your fellow attendees, please use the chat box, and just be sure to set it to everyone so everybody can see your thoughts. Just to remind you of some CIPD benefits: CIPD members can access individual legal advice via our HR-informed helpline, which is available to you 24/7, and another one I want to flag is our wellbeing hub and helpline, available for members in the UK and Ireland. You can use this to access free help and support via telephone or online consultations with qualified therapists, and that's all provided by Health Assured.
Stuff is moving really, really fast right now, and I know everybody feels particularly overworked and overburdened, so please do reach out for help if you need it. I also want to flag a new CIPD guide, Preparing Your Organisation for AI Use, which includes lots of practical guidance, focusing particularly on the use of generative AI tools like ChatGPT, and covering both how HR can create a policy or approach for the use of AI in the organisation, and how HR can apply different AI tools to areas of the employee life cycle. I'm sure one of my colleagues will put that link in the chat for you, along with other relevant resources that might help you as you think about this space. So, onto the topic at hand. Because this is apparently the rule now when you talk about this subject, I had to ask ChatGPT to write an introduction for me, and this is what it said: "Get ready to embark upon an enlightening journey into the world of generative AI and its transformative impact on the workplace. Let's unlock innovation, efficiency and boundless potential together." I did think that was a little bit much, and I did ask it to tone it down and make it a little bit more British, but it didn't quite get what I was asking for. So the next bit is all me. There's no denying that the rapid pace of development and the wide accessibility of AI tools are already having a transformative impact on the workplace, and this is only the beginning. The challenge for people professionals is how we balance the opportunity that such tools offer in terms of things like efficiency and productivity with the risks they present around ethics, data security, diversity and accountability. So what is the appropriate balance of embrace versus control? During this session, we will discuss the implications and opportunities of using AI tools for people professionals and the workplace more widely. We'll touch on some use cases, tackle some of the emerging challenges around ethical and safe usage, and explore what the landscape looks like currently in terms of regulation. How it's going to work is: we're going to hear from David first with a general overview and a bit about the CIPD position on this; he's going to set out some principles to guide our thinking. Abi is then going to set some context and talk about the wider picture. She has got slides; I'll just say that David and Andy do not have slides, so don't worry, nothing is broken there. And finally, Andy will share some insight into how Equiniti are already using AI in HR processes, and how he, as a chief people officer, is thinking about the risks and the opportunities ahead. And then we'll have a panel discussion, so please do get your questions in throughout. But that's enough from me for now, I'm going to hand over to David to kick us off. Thanks, David.
David D'Souza: Thank you, Katie. Hello everyone, whether you're joining us live or catching up with the recording. I'm delighted that you're taking part today, because what we're talking about is something that I genuinely think will have profound impacts on the world of work. I've spent much of my career telling everyone to calm down and avoid the hype, because actually things probably won't be that different. This is one of those rare exceptions. This is technology that will probably be transformative for the world of work, in the same way that the internet was, and probably in the same way that mobile phones have been. It will be everywhere relatively quickly, and therefore we've got a role as a profession to understand the implications for us, but also for the organisations that we work in, and society more broadly. And it's a challenge to juggle something that's changing this fast in so many different dimensions. But we came through the pandemic showing that, as a profession, we could be really responsive, and that we could help organisations through a really complex change journey where things were uncertain. And this has some pretty relevant parallels. I'm using that as an example not because I think this is all bad, but because the skills and capabilities that you might need will be very similar. So, at the CIPD, it's our job to help navigate you through that as best we can, and to work with and learn from you as we're going through that journey. I think it will have profound implications on an economic level as well as a societal level, and there will therefore be ramifications for just about every organisation. So whether you think your organisation will be directly impacted immediately or not, down the line it'll probably have impacts for the entire labour market, wherever you are in the world. So the world's changing fast, and there are probably three things that we can't do. One, we can't be left behind. As a profession, it's incumbent upon us to keep up with the pace of change. It doesn't mean that you all need to understand the absolute depth of how these things work and fit together, but it does mean that you need to understand the possible ramifications and the way your organisation might react. There will be a range of people listening to this, some of whom regularly use this technology and have been doing so for the last few months, others [have been there] kind of going, what's all this about? I would urge you to, at the very least, go away, have a play and experiment if you haven't already, and learn about the capabilities in a bit more depth. You need to do that to keep up. In the same way that 20 years ago in the profession you probably didn't need to be that proficient with data, but now it's very hard to have a successful career without an understanding of it, this will be the next frontier. So get on the front foot and learn about it; it's got a really easy level of access. If you can use Facebook, if you can use Google, you can use this technology at a base level. So get over that fear if you have it, start learning about it, and then start thinking about the implications for the profession. Secondly, I want to highlight, as Katie said, that there's opportunity and there's risk in this, and we need to understand both of them.
So as a professional body, we are urging people to understand it, but we are also urging people to think about the risk and the opportunity that sit within it. There is no point launching off and doing things that we shouldn't do; equally, we can't pretend that something isn't happening and opt out of it altogether. There's a lovely line from Jurassic Park which has been my go-to for about ten years when talking about technology in the workplace, which is "they were too busy thinking about whether they could do it to think about whether they should do it." And as a profession, we've got an absolute obligation to think about what's possible, but also to think about restraint. What lines should we never cross? How should we keep people at the centre of this change? How do we make sure that people are valued? And that involves us, I think, experimenting quite a bit in the short term to help our own skills develop, to help people around us develop, and to understand the opportunity. But it also involves thinking long term about the implications of our actions and the decisions that we're making for the people that work in the organisations that we support. So we've got three key calls, I think, for the profession. One is to set guidance-based principles. It's going to be hard to set hard-and-fast rules, but as you'll see in the guidance that Katie's suggested people download (do have a look at our resources after this), it's really important for people to say, look, here are some broad principles within an organisation that we're working to. And for you to understand the principles that you're working to as well: how can you be transparent? How can you be accountable? How can you take into account things like data and security? Because it can't just be like the Wild West where everyone's doing whatever they like; it has to be [inaudible] to a degree. But we can't be so risk-averse that we're not taking the opportunity. Secondly, engage and develop with this technology. Experiment with it, collaborate, whether in your organisation or others, and use the communities that we provide you with access to, to get a grounding in what other people are doing and what they're finding. And make sure that you're also doing that cross-[directorate]: this isn't an IT problem, this is a workplace opportunity, and that's a very different framework. You can't leave it to your IT team to look at, and you can't leave it to your finance team to say, I think we can make savings. We're the ones that need to understand the impact it has on people, and the potential impact it therefore has for skills, capability, development, reward and all of the other elements. And finally, think strategically about it. We need to work out where the positive outcomes are that sit within this technology, and think about the entire model end to end. For those of you with an organisational development background, or kind of bent, you'll remember the Burke-Litwin model starts with what's happening in the external environment, and then helping your organisation orientate to be the best fit for dealing with that environment. And we need to think about that short and long term. How do we utilise, integrate and understand this technology to make sure that it's making a positive difference for people now and ongoing? What are the implications in terms of skills? What are the implications in terms of reskilling?
And the final thing I would say to people who are coming new to it is that I think there's a degree of preoccupation with what it will mean in terms of administration. But actually, this has implications for knowledge work, for creative work and for analysis. It's actually an opportunity for lots of roles that form the heart of many organisations. It will impact decision making in organisations as well, being able to aggregate data and [sense make of it all] more rapidly. And we need to reflect really quickly and calmly, with that kind of long-term view, on what that means for how work gets done in organisations. Where is it essential for you always to have people in the loop? Where can we use technology to improve the quality of work that people do? So, how can we get good outcomes in the workplace from these tools? And if we can do that, and if we can do that together, so I'd stress that community piece as well: if you're puzzled by this, get into our communities, get into our groups and start talking to people about it. Go to your local branch and have a conversation, and use the connections that you have available to you. If we can do that, then we can get to a good place. We'll keep highlighting resources as we go through; we've done not only a guide to policymaking in this space, but also one around workforce and recruitment planning, so please do make use of those. I will leave you in the capable hands of the other panellists. I look forward to the conversations being quite challenging and the questions being challenging, so put us on the spot properly. We're all working this out together, and the best way to do that is as a community. But everything we do has to have that ethical lens on it, that responsibility lens on it, and finally, we need to be striking at the heart of what makes organisations a good place to work, and what makes good places to work productive. Thank you.
KJ: Thanks, David, that sums it up really nicely, and I think you've set Abi up well with the themes you were talking about. So we're going to hear now from Dr Abigail Gilbert from the Institute for the Future of Work, and I do encourage you to get your questions in. We've got a few there, but please don't leave them all until the last minute, because otherwise they might not get answered. Over to you, Abi.
Abi, you're on mute.
Abigail Gilbert: I've not mastered the most basic aspects of the technology, and yet I come here to prophesy as to how you can do it.
KJ: It's only been three years, Abi, don't feel bad.
AG: So I was just saying, thanks very much for having me here today, and I've been nicely teed up by what David's talked about, although the focus of what I'll be speaking about today is how to put good work at the forefront of this transition. So thinking about that responsibility for the future workforce and how to get the best outcomes while you're adopting both generative AI and earlier tools, whose impacts we shouldn't forget about now that generative AI is all the rage. And it's good if it can mobilise concentration and focus on the governance of AI and the workplace more broadly. A little bit about the Institute first, if you're wondering who we are: we were founded following a parliamentary inquiry into the future of work that ran between 2016 and 2018. This brought together academics from across economics, law and technology to think about what was going to happen to the future of work and the changes to work quality that were arising at the time, and we look at that in an ongoing way. So we try to bring together policymakers, civil society, organisations and industry to build a fairer future through better work. Next please. And we do that in partnership with a number of different academic institutions. Next please. We are currently doing a major research project that's trying to understand how AI is disrupting labour markets, how that happens within the context of firms and their human resource management approaches, what that means for work and wellbeing, the implications for what we need to do with institutions and how to redesign institutions to promote better outcomes, and then ultimately what those policy infrastructures should look like. As a quick [juicy] release of some of our early findings: we found that 80% of firms have deployed automation in the past three years, and the statistic was the same for physical and cognitive automation of tasks, that is, substitution of work that required either motor skills or thinking skills. This is obviously going to drive a lot of churn, and there'll be a lot of support needed for workers through that transition, and that also has implications for what's done within organisations. We found that there are different approaches to the way that decisions about how to use novel technologies are shaped, and that happens to have some kind of regional dimension, with places with lower regional innovation and readiness being more likely to make decisions that reduce job quality or delete roles. So there's a kind of place-based factor there; stay tuned for more on that. But central to our findings is that human resource management practices are important in determining the outcomes for quality of work, so do recognise and prioritise your role in shaping this transition, it can be really important. Next slide please. So whilst I've mentioned that there's been this automation of tasks, the public conversation around the risks of generative AI, and of all the forms of technological innovation that came before it, is mainly about displacement: this idea that jobs will be deleted and that there will no longer be work available for people in specific jobs, industries or parts of the economy. Now this report is one that we're launching today, and I hope it's OK to share it with you.
What we're trying to set out in Reframing Automation is a model for anticipating risks and impacts which goes beyond just thinking that the main impact of automation is the displacement of work. Obviously it's a benefit for organisations when you can fully substitute a role with technology, because you can make a productivity saving. However, that's not guaranteed, and it's also not the only way in which automation can have impacts on work and work quality. So for instance, in the report we outline that, beyond displacement effects and the creation of jobs, there are also impacts around skill. We've identified a difference between when a job has high-discretion augmentation versus low-discretion augmentation. High-discretion augmentation would be where a worker is still given control over how they use technology and control over their work, and it potentially enables them to have, at their own discretion, a better method of conducting work. In contrast, what we have seen in practice is an increase in low-discretion augmentation: work is becoming more routinised, with methods and processes being set out and often encoded by technology for workers, which is reducing some scope for autonomy but also has a potential impact on wages. I'm happy to talk about that more in the Q&A as desired or appropriate. Then equally there is intensification of work. This can come from the overestimation of the potential substitution or displacement effects of a technology, so over-assuming that you'll be able to save on labour, and then the people left behind having more work to do than is feasible. Or it could equally come through forms of algorithmic management that schedule tasks, or, as we were talking about on the panel just before this began, the brilliant and enabling technologies that allow us to work remotely and communicate in these ways. They can also mean that there's a greater density of meetings within the day, so increasing task density. These kinds of things highlight different types of risk that we need to manage, that one relating to telepresence. Then equally, the increased use of predictive analytics means that there's a much greater use of machine learning in matching processes. Matching tries to reduce the frictions associated with the time taken to process information and allows employers to gain efficiencies; that might be in hiring, but it is also happening in the workplace through, for instance, task allocation. Now those different types of automation impact that I've mentioned all have a specific route to generating value for a business, which research suggests organisations need to have better regard for. When businesses have been deploying both data-driven technologies and, more recently, machine learning technologies, there's evidence that suggests they aren't capturing the value return as much as they could be, and in some cases, not at all, because they haven't really thought about what the value proposition of that technology is. What this paper does is show you what those potential value propositions can be between different models of automation if you were looking to generate revenue from what you were doing.
But it also highlights how those particular approaches to value creation may have specific types of trade-off for job quality. So by all means, do go and have a look. Next slide please. So what do we do about this, the fact that there are impacts not only on the availability of work and job displacement, but also potentially on the quality of work, conditions of work and access to work? Well, we've set out a methodology called a Good Work Algorithmic Impact Assessment, which admittedly doesn't sound like much fun, but I promise you is more interesting than the name suggests. And the reason that it does have that title is also important to highlight. Internationally, in Europe and in America, there is a strong move towards undertaking forms of impact assessment, and that is being legally mandated. At the moment, in the UK, we obviously have data protection impact assessments, and we were commissioned by the Information Commissioner's Office to do a project that looked at the role of data protection as a gateway to the preservation of other fundamental rights. So essentially, there's a recognition now that the ways in which AI is being used to replace human decision making are extending beyond the initial assumptions of what would be needed in terms of safeguards with regard to data protection, and we need new layers of support. I'll show you in a minute the resources that we've created, which are available on our website, to undertake this process. But it's worth flagging that, at the moment, this is advisory. It's a toolkit for businesses that want to lead in responsible adoption of AI and to think about and promote good work through the course of adoption. But we have been engaged in parliamentary conversations about how to embed this in future regulation, and from the international landscape, it does look like something of this kind is probable in the future. It already exists in other jurisdictions, and it may come in the context of the UK. Essentially, what the process is intended to do is unlock and share information about the impacts of algorithmic systems on work and people; introduce new safeguards when AI and algorithmic systems are used at work; mitigate risks and maximise benefits as work transforms; and shift the focus from damage control towards pre-emptive action, trying to estimate what some of the impacts of the deployment of these systems could be before you roll them out in the workplace. And also to develop forums for ongoing scrutiny, not necessarily just by unions, but also by workers within context. Next slide please. And obviously the HR profession has a huge role to play in coordinating, navigating and potentially driving this kind of governance forward with other relevant partners within an organisation. So we have provided three resources under this project, as I said, funded by the Information Commissioner's Office. This includes a Good Work Charter Toolkit, which summarises the relevant regulations, codes and guidance that are potentially impacted by the deployment of algorithmic systems within the workplace, [that we should have regard to].
A large part of the policy conversation, let me just check that that's not me going over time, has been around the potential risk to equality, and there's now a growing infrastructure of governance mechanisms to manage bias up the supply chain of the development of a product. But when you look at a workplace and the deployment of these tools, it's also the case that broader, established regimes about good work need to be considered. The Good Work Charter Toolkit sets out what that kind of baseline is to remember, and [inaudible], but really the emphasis that we're placing in the recommended methodology is on steering towards the kinds of outcomes that you would prefer to achieve for your workforce. So if you want to drive better work, then those ten good work principles should be in there. We also provide a resource, Understanding AI at Work. So, am I at time, Katie?
KJ: I think you're fine. Sorry, I've had to come in on a different computer because, as you might have seen, I turned purple. So, technology. Keep going, it's fine.
AG: Oh, OK. So we've also provided a resource called Understanding AI at Work, and this sets out guidance for both employers and workers to understand how decisions made during the design and development of a system can shape impacts further down the line. We need to get beyond the narrative of "oh, the machine did it", and recognise that there are accountable decisions about how a system is designed, how a system is developed and how a system is deployed, and that these are what drive potential breaches of the law, but also changes to good work quality later on. So have a look at that. And then the main meat in the pie, so to speak, the focus of the resources here, is the Good Work Algorithmic Impact Assessment Guidance. It is worth noting that we see this as a method for responsible innovation, but it also sets out an approach to involving workers, who we see as potential experts in understanding and forecasting impacts. Next slide please. So, for the process at a glance: we have an initial assessment of what the intentions are for using a system, and this is summarised in a disclosure of position, which admittedly is not a very enigmatic title, but is in line with international regulation and precedents. Here is where you'd record those key decisions about the design, development and planned approach to deployment of a system. Who's going to be using it? Under what circumstances? What will be their mechanisms for recourse? Who will they have avenues [to address for]? But also, what's your intended approach to value creation, and what does that mean for potential trade-offs and how we forecast them? That statement is then brought into the actual process, firstly to diagnose what would be proportionate in terms of next steps and the amount of resourcing that you put into undertaking a good work algorithmic impact assessment. But it's also a kind of information sheet through which the rest of the process can be undertaken, with an understanding of what the system should do. What that first stage also does is give you the apparatus to consider what you need to ask of providers of AI tools before you procure them. A lot of what you can do as an organisation that holds responsibility and is accountable for the decisions that arise with these tools lies with the extent to which you can access information about a system. So you need to be thinking about that in advance of procurement, and hopefully that section can help. The next phase is committing to the process, and this includes identifying what your organisational values are in advance, which is obviously something that David mentioned. If you have a strong priority for fairness or equality, or autonomy or dignity in work, or specific aspects of terms and conditions, then it's worth identifying them going in and getting a sense of why you're undertaking this process. And then essentially, the four stages are: identify the individuals or groups who might be impacted, and we set out a range of methodologies to do that; undertake a pre-emptive risk and impact assessment using a combination of methodologies, centred on reviewing that initial disclosure of position, looking at good work principles and also undertaking some scenario development; and then, really importantly, take action in response to the impact assessment.
This is why it moves slightly beyond assessing what's going on, to then creating mitigations as appropriate. Those might involve redesigning a system; again, it's important in the procurement of an AI system to talk to the provider about your ability to make design changes if any do come out of an impact assessment. For instance, turning off certain aspects of data gathering when employees are out of work, if they're using it on their own phones, or potentially allowing for preference elicitation, so that workers with different routines can come in at different times if the system is scheduling shifts, or something like that. And then the fourth stage is to have an ongoing monitoring infrastructure to make sure that the use of the system is subject to ongoing review. Because, as is well recorded, the impacts of AI can be emergent, and they can also change as systems learn and approach things in a new way. Next slide please. So I'll just note here that we are looking to pilot this method with businesses that really want to prioritise good work, and we have the CIPD as a partner on a wider board of regulators, a kind of regulators-plus forum, which is reviewing the way that this is working and trying to make sure that future regulation is responsive to what it is that business can do in practice. So we look forward to working with the CIPD and the rest of you in this space, and thank you for your time.
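[To make the four-stage process Abi describes easier to picture, here is a minimal sketch of how an HR team might record a disclosure of position and the assessment stages internally. The field names and structure are illustrative assumptions for this example only, not IFOW's official schema.]

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a simple internal record for a "disclosure of
# position" and the four assessment stages. Field names are invented for
# this example and are not IFOW's official schema.

@dataclass
class DisclosureOfPosition:
    system_name: str
    intended_users: str              # who will use the system, under what circumstances
    value_proposition: str           # intended route to value creation
    recourse_mechanism: str          # how affected workers can challenge outcomes
    vendor_information_access: str   # what the provider will disclose pre-procurement

@dataclass
class GoodWorkImpactAssessment:
    disclosure: DisclosureOfPosition
    organisational_values: list[str] = field(default_factory=list)   # e.g. fairness, autonomy, dignity
    impacted_groups: list[str] = field(default_factory=list)         # stage 1: who might be affected
    risks_and_impacts: dict[str, str] = field(default_factory=dict)  # stage 2: pre-emptive assessment
    mitigations: list[str] = field(default_factory=list)             # stage 3: actions taken in response
    review_schedule: str = "quarterly"                               # stage 4: ongoing monitoring

# Example usage: record intentions before procurement, then fill in the
# stages as the assessment proceeds.
assessment = GoodWorkImpactAssessment(
    disclosure=DisclosureOfPosition(
        system_name="shift-scheduling tool",
        intended_users="rota managers in operations",
        value_proposition="reduce time spent building rotas",
        recourse_mechanism="appeal to line manager within 5 working days",
        vendor_information_access="model inputs and update cadence disclosed",
    ),
    organisational_values=["fairness", "autonomy"],
)
```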
KJ: Thank you very much, Abigail, that's really fascinating. I'm going to hand over quickly to Andy, because we've got quite a lot of really great, interesting questions and I want to come to those. Andy is going to give a flavour of what he's thinking about as a CPO when it comes to all this.
Andrew Stephenson: Thank you, Katie, and good afternoon everybody. A little bit of context for you: Equiniti is one of the largest share registration organisations in the world, with 6,500 people spread from the West Coast of the United States out to the East Coast of India. And clearly, within the products that we sell to our customers, which include more than 70% of the FTSE 100 and a good 15% of the Standard [inaudible], we are looking at how we use AI. Today, though, I'm going to talk about how we use AI in HR and where we're using it. Katie highlighted that I was going to talk about the uses and risks, so let me start with the biggest risk of all: any AI you use in HR is only as good as the data set it has. If you do not have comprehensive and genuinely accurate information about your people, then you will simply get poor information out of your AI tools. For me, there are seven key areas where you can use AI in HR: recruitment, performance reviews, onboarding, engagement, talent development, workforce planning and shared services. Let me touch firstly on the couple where we've been using this for quite some time and having good success. In shared services, we have a chatbot, and the chatbot takes an awful lot of the first-line traffic that used to go into our people services teams. Those very simple requests, around how much holiday someone has or where to find certain policies and processes, the vast majority of that traffic is now taken by the chatbot, and that enables the team to do far more value-added work. The second area where we've had huge benefit is employee engagement. We, like many organisations, used to do a traditional employee engagement survey. It was a big drain on HR resources: the team would go to a lot of effort to get people to fill it in, and there would then be a herculean effort to analyse it and create action plans across the business, most of which went in a drawer and didn't go anywhere. Today we have a simple survey that's done every single month, and all of the action planning is done in AI. The evidence goes in, the system looks at the data from individual teams, and it recommends the actions and the learning resources that are best placed to drive engagement. And there have been some really fascinating things with this. Firstly, more than 90% of our teams have an active action plan that they are working on, because they only need to talk through the plan, not actually do the planning itself. Fascinatingly, teams that are using the AI action plans have a five-times better result than the teams that are attempting to do their own. So this is a huge step forward, and we as an organisation have driven our employee Net Promoter Score up by nearly 60 points in a year off the back of using this system. AI is really helping managers make quick decisions and get on with their day job. Another area where we're using it: through having these very system-driven, AI-driven monthly surveys, we are now able to get into the world of predictive attrition. Because we actually know how people respond, we are able to see when they are thinking about leaving before they've even realised it themselves. Each of our managers has a simple indicator on their dashboard that shows the likelihood of people leaving within their team. It is phenomenally accurate, and it means that they can start to take corrective action before the event.
It does not go down to individuals and identify them; it is only at team level. And with anything like that, people always say, well, how accurate is it? The answer is: very. How do people react to it? Well, people know it's there, and the evidence I would give is that our response rate on our monthly engagement survey is more than 90%. Very few companies get that, and people give us that response because they know that we are using it in a responsible manner. A couple of areas we're starting to develop, which are coming on stream now [four as in AI]: all of our people upload their skills, and this is not just formal learning, it's the skills they've got. It means that another long, protracted HR process, training needs analysis, which in many businesses I've been in is an exercise in people wishing for things that they then don't get or didn't need, changes. In our new world, the skills cloud works out what skills are common to all of our good performers, and therefore builds the training plan for the people below. Again, to make people want to give us the data, they can also put in the aspirational jobs that they want, so that we can identify training for them as well, and we can identify roles. So lots of practical examples, and if the questions allow, we'll get into some others, but I'm conscious of leaving time for questions. I will just come onto the risks. As I say, as a company we are building AI into our tools to sell to customers, but in HR we are generally going to be buying services. So what should you look for? Providers, I would suggest, should have a responsible AI team if they are selling products that use a lot of AI, and they should have a very clear ethical statement. The biggest one of the lot for me is that they should have a very clear policy on the bias testing that they do. And certainly, as we start to think about how we would use this in areas like recruitment, we need to be very clear that the bias has been built out of the system. So AI is here. In our business it is certainly helping us do more value-added stuff by taking some of the more menial, long-winded tasks away from the teams and letting us concentrate on the stuff that's really important. And I'll hand back to Katie so we can get some questions in.
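[Andy describes the attrition indicator as a feature of a vendor product, so its internals aren't public. Purely to illustrate the underlying idea of team-level attrition risk derived from monthly survey signals, here is a hypothetical sketch using scikit-learn; the features, column names and data are all invented for the example.]

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical illustration of team-level predictive attrition from monthly
# engagement surveys. Vendor tools do this out of the box; every column name
# and number here is invented for the example.

# Historic data: one row per team per month, with engagement signals and
# whether anyone left that team in the following quarter.
history = pd.DataFrame({
    "engagement_score": [7.8, 6.1, 8.2, 5.4, 7.0, 4.9],
    "score_trend":      [0.2, -0.8, 0.1, -1.1, 0.0, -0.9],
    "response_rate":    [0.95, 0.70, 0.92, 0.61, 0.88, 0.55],
    "attrition_next_q": [0, 1, 0, 1, 0, 1],
})

features = ["engagement_score", "score_trend", "response_rate"]
model = LogisticRegression()
model.fit(history[features], history["attrition_next_q"])

# Score a current team. The output is a team-level probability for a
# manager's dashboard; it never identifies individuals.
current_team = pd.DataFrame({"engagement_score": [6.0],
                             "score_trend": [-0.7],
                             "response_rate": [0.75]})
risk = model.predict_proba(current_team[features])[0, 1]
print(f"Team attrition risk next quarter: {risk:.0%}")
```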
KJ: Brilliant, thank you so much, Andy, and thank you everybody for your contributions, a really great discussion so far. We have got quite a few questions. Andy, I'll just ask you a really simple one first, which a couple of people have asked: what engagement system do you use? We'll just get that out of the way before ...
AS: We currently use Peakon.
KJ: Brilliant. Another practical question for you on how you actually do it: somebody's asking, how much is the AI built into the tools, and how much have you had to develop yourself?
AS: Some of the examples I've used there are options that are available from the companies that provide the tools; we've had to help them with how we want to specify it. And for things like the chatbots, we have to feed the data into it. So this is AI that we're procuring off the shelf in these cases, but lots of people have these products and aren't using these features. So we are very much at the forefront of using it.
KJ: Brilliant, thank you. So I'm going to come onto your questions now, if I can ask David and Abi to just come off mute so we don't have [that thing again]. Brilliant. So somebody's asked a question about how the rollout of AI is likely to cause some insecurities and anxieties across the workforce, particularly where people are worried about their roles or responsibilities potentially being replaced. How do we, as HR professionals, balance promoting and supporting usage while at the same time managing those kinds of anxieties? David, if I come to you on that first.
DDS: Yeah, it's a really good question. At the start, I made the comparison to the pandemic and some of the skills that we needed there. We're experts in change [connection lost] management, we're experts in understanding, to Andy's point, how to engage people, how to help them understand what the organisation's trying to do, and to enable them to orientate around that. This is a period where we're going to need those skills. So how can we be transparent about what we're doing, to reduce the amount of fear in the organisation? How can we involve people so that they feel they're part of a journey, rather than just a group of people that something's going to happen to? How can we help improve their skills and the quality of the roles that they're in? All of those are areas where it's a real chance for us to show our expertise and value in helping along that journey: by helping people not be scared of it, by helping people understand where the opportunity is, but also where the barriers are, or where the red lines are that shouldn't be crossed. So I think a key role for the profession is to not just progress with this at pace without care, but to help involve people in that journey along the way.
KJ: Brilliant. Andy, I'll put that to you as well as a CPO, but also add to that: do you have a policy around this, and what would a good AI policy look like for your people?
AS: We do have an AI policy. It is pretty light at the moment, and it is more about the fact that we will use the tools responsibly; it's very much in line with our data protection rules. But I think the simple bit is to be up front and honest with people. One of the biggest things I'm proud about in our business is that we are transparent. We do tell people that we will be using AI in our tools, people recognise that, but we create very clear development pathways so that people can look at product categories that they can grow themselves into. And because we're giving them a long time frame, it means they can plan their career around the way our products are developing. And as I say, if I bring it back to HR, there are some menial tasks that people were doing in shared services functions that they're no longer doing. We haven't reduced the team, we've simply moved them onto more value-added areas of our function. So I think the key is to be upfront and transparent with people. Everyone's got very excited about generative AI because of ChatGPT's launch, but the reality is, if you talk to companies, I was talking to people from firms that provide HCM systems, they've been working on AI for many, many years now. Some of this will have a long lead time, and as David says, we've always done change. I think the best way is to just be upfront and completely transparent with people.
KJ: Brilliant, thank you. Abi, if I come to you: somebody's asked a question about what kinds of things we recommend when looking at bias in things like recruitment and promotion, and they've referenced the progressive laws in [New York]. So what kind of insight can you give around that, and whether you might see regulation coming down the line?
AG: Sure. I do think there is forthcoming guidance specifically looking at hiring. Obviously hiring was one of the use cases that drew most attention in the workplace context as a risk environment for AI early doors. A couple of years ago, the Centre for Data Ethics and Innovation did a bias review that was thinking about that and had hiring as a kind of case study, as one of those landmark examples. There has been a lot of soft regulatory guidance in the UK context around how to undertake bias audits well, and there is now also an emerging ecosystem of companies that provide that as a kind of service. We ourselves undertook a review; the report is misleadingly titled AI in Hiring, rather than being about forms of bias assessment, but in that report we evaluated a range of tools that are available on the market for fairness auditing. These are principally technical assessments. However, as you'll see in the literature on bias auditing, there are inevitable trade-offs to be made when you turn a set of questions into a set of mathematical principles about who wins and who loses, for which groups, and what looks like fairness. There's always a degree of normative value judgement to be made in that. And it's also, to date, proven quite difficult to demonstrate that you are compliant with the Equality Act in relation to those tools. That's not to scare people away from doing it; it's that the emerging industry around this recognises it is complicated to really meaningfully guarantee that you've complied with the Equality Act. It is inevitable that there will be some types of trade-off when you're making zero-and-one binary decisions. This does represent a new landscape relative to previous approaches with human decision making, which does create opportunities to promote equality if we do it right. The [New York] Act puts a strong emphasis on transparency and disclosure about how those decisions were reached, and that also allows people to engage in meaningful forms of redress. At the moment, in the UK, the way that you would make a claim of discrimination would be an individual claiming that they're being discriminated against. If you can't understand how a system was designed, how the mitigations were made and what the trade-offs were that were reached, then you wouldn't be able to know whether or not that process seemed reasonable. The government here has already introduced transparency and explainability as key principles in the AI white paper, but quite what that looks like in terms of a refined regulatory landscape going forward remains to be seen. But yes, I would say it's likely that we will have more direct action in that space later.
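[Abi describes fairness audits as principally technical assessments wrapped in normative judgement. One common starting metric, not named here but widely used in the kinds of auditing tools she refers to, is the adverse impact ratio, often checked against the US "four-fifths" rule of thumb. A minimal sketch with invented numbers follows; passing this check does not demonstrate Equality Act compliance, it is just one technical signal among the trade-offs discussed above.]

```python
# Minimal sketch of one common fairness-audit starting point: the adverse
# impact (selection rate) ratio, often compared against the US "four-fifths"
# rule of thumb. All numbers are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return group_rate / reference_rate

rate_a = selection_rate(selected=48, applicants=100)  # reference group
rate_b = selection_rate(selected=30, applicants=100)  # comparison group

ratio = adverse_impact_ratio(rate_b, rate_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.62 in this example

if ratio < 0.8:
    # Below the four-fifths threshold: a signal to investigate the
    # screening process, not a legal determination in itself.
    print("Below the four-fifths threshold: investigate the screening model.")
```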
KJ: Yeah, thank you. And staying on the topic of bias, David, I was going to come to you, and I'm going to add in an extra question building on what Abi's said. Somebody's asked: how do we, as people professionals, learn to evaluate bias? Do we know what we're looking for? How do we know what the risks are when we're going into using those particularly kind of open AI tools?
DDS: Yeah, so I think there are two things here, and I'll just build on what Abi said. One of which is, we shouldn't be scared of using tools that may improve what are currently quite unfair outcomes. The challenge that we have, and that we'd all acknowledge, is that the systems and outcomes produced by humans have quite often given fundamentally unfair outcomes over time. So the starting place shouldn't be an arrogant one of, well, we do it brilliantly, but there's a problem when it's automated. We've just got a challenge full stop: can we use technology to drive better results in some of those places? I would argue, if you're being discriminated against now by the human systems, even an imperfect reduction in bias would be better, notwithstanding the challenge that this may still have some bias in it. So that would be the first space I'd operate in. Secondly, and this may seem counter-intuitive or contrary to what I've just said, we need to work out where we need to keep people in the loop. What you can't do is systematise bias and then leave it and have a kind of level of deniability. The comparison I would use is satellite navigation systems in your car: it utterly makes sense to follow them, they're normally going to give [connection lost] [you] a better journey, but occasionally you'll see someone's driven into a river because they're not paying attention, because they just assumed technology would solve it for them. We are ultimately accountable for decisions that have profound impacts on people's lives and careers, and we need to take that seriously. And as Andy says, we need to start looking at the technology not just as something that can get us a quick result, but being really confident in the results coming out of it and how they're formed. And that's trickier now than it's ever been.
KJ: Yeah, thank you. Andy, one for you here. Somebody's asking, because you gave a really great example of using it in engagement, how do you balance this kind of, well, they've said, robotic approach that could come with automated messages, with a more human-centric approach? So how do you judge, I guess, the right balance of human versus machine, or human and machine?
AS: I think the interesting thing is, because the system is so efficient at what it does, the human interaction in that example has actually gone up. I talked about getting a 90% response rate; in a typical month, from our 6,000 people we'll get somewhere like 30,000 comments. In the past, they'd have gone into the background, and teams of people in HR or some external partner would be going through them. What happens now is, the AI in the tool that we use actually surfaces to managers the most important comments and the most relevant to them, based on the priorities they've set. And they, at the touch of a button, can respond. So typically, half of our comments are actually responded to in real time, and in many cases that's the manager reaching out and saying anonymously, "I don't know who you are, but I'd love to talk about this." So actually, the AI is far from robotic; what it's doing is creating a phenomenal level of two-way engagement across the organisation that just wasn't there before. So I go back to: these tools can be used off the shelf to make your life easier, or they can be used to really drive value. And linking back to David's point on recruitment, and using my engagement example to bring it to life: biases have always existed. I'm increasingly seeing areas where the AI does things with less bias, and that's why I say, when you procure a system, you do need to look very closely at the bias testing the provider has used. But once that's very good, chances are the outcomes are better. And I go back to our example: people who are using the AI-based engagement tools are getting vastly better results than the people who are not. To give a real example, in one of our sites, one of the pieces of engagement feedback was that we should change the types of toilets to provide washing facilities in line with religious practice. That would never have been surfaced in the old-fashioned paper days, and managers would never have prioritised it. But the AI showed it was jumping off the page, and we've done something about it. So these tools, used well, can be phenomenally valuable.
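[The comment surfacing Andy describes is a feature of the vendor tool, and its internals aren't disclosed. As a rough illustration of the principle of ranking free-text comments against a manager's stated priorities, here is a hypothetical sketch using TF-IDF similarity; the priorities, comments and method are assumptions for the example, and a production system would be far more sophisticated.]

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sketch: rank survey comments by relevance to a manager's
# stated priorities, so the most relevant surface first. Illustrative only;
# the vendor tool will use its own approach.

priorities = "facilities wellbeing workload"
comments = [
    "Could we have washing facilities that respect religious practice?",
    "The new coffee machine is great.",
    "Workload has been very heavy since the restructure.",
]

# Fit TF-IDF over the priorities plus comments, then score each comment
# by cosine similarity to the priorities text.
vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform([priorities] + comments)
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

# Surface comments in order of relevance to the manager's priorities.
for score, comment in sorted(zip(scores, comments), reverse=True):
    print(f"{score:.2f}  {comment}")
```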
KJ: Yeah, thank you. We had a couple of questions specifically around the impact on the people profession. So David, if I come to you on that first: what do you think this will do to people coming into the profession? What impact will it have on career paths within the profession? And, somebody's asked, does everybody in HR need to become an expert in AI?
DDS: I wouldn't say an expert, but I think you can't not understand it and how it will be utilised. If I consider myself old, for the next generation coming through, when I'm talking to students now, it has to be part of their toolkit. They have to understand the potential that sits within it. To Andy's point around where it can be utilised: if you fast forward five or ten years, you probably won't have a shared service centre in the way that we have now; of all the queries coming into that service centre, the more basic ones will be automated. Organisations have been doing this for a number of years, this isn't new. The more basic ones will be automated, and that will free up time for the more complex queries. So I think it's about leaning into where we can add strategic value, understanding how different tools can be utilised and put together, so understanding how to manage a tech stack. Some of the issues that we've been talking about today, in terms of bias and things like that, will come to the fore. But of almost all of the professions, we are one of the most dependent on people, the interactions between people, and understanding how groups work and how they excel. That, I think, will be more important than ever. Someone asked a question earlier about change; again, that will be a skill set that comes to the fore. So planning, change management, complex pieces that require a human [in the loop]: we can utterly excel in those spaces, and we need to be thinking forward about how we can keep adding value there. As a profession, we are necessary because of the bits that can't be solved by machines, and because some people won't want things solved by machines. And that's to our advantage in a way that it may not be for some of the other professions.
KJ: Brilliant. And before I come to Abi with a different question, just on that topic, Andy: you mentioned you've upskilled people and moved them up the value chain. What do you think it does for entry-level careers? [Are you] thinking differently about the skill sets that you're looking for in the future and how you're developing people?
AS: This is a glorious opportunity for HR. Ever since I got into HR 20-odd years ago, and I think before that, people have been saying we need to be more and more strategic. And the reality is, one of the reasons HR departments often aren't strategic is that they are completely overwhelmed by very mundane, very transactional pieces of activity. You can very quickly see a world where those transactional activities have gone to AI, and that means people professionals can really add value at all levels of the organisation. So we will be looking for people who can absolutely add value and strategic input to their divisions and their organisations, even at the most junior levels. And training people in that world, rather than in processing elements of the HCM system, will be a huge game changer for the way other parts of a business see our profession. I only see that as a good thing.
KJ: Thank you. Moving on, Abi, one I'll put to you first: somebody's asked about where liability sits with the risk attached to these tools. So is it the organisation? Is it the system? Is it the provider of the system? Is that even clear at the moment?
AG: Yeah, definitely not an easy last question to round up on. What I'll say is that this is shaped by the fact that it will depend on the nature of the system, how it's deployed and the nature of the agreements. The focus in different jurisdictions, for instance in Europe, is very much on the design of systems. But as is increasingly the case, particularly with generative AI, a lot of the risks, impacts and social consequences are defined really strongly in the deployment context. That doesn't mean that our previous ways of thinking about things aren't important, but it really does place emphasis on thinking about intersections with different legal regimes, but also outcomes in a positive social sense, in the context of use. Which does mean that firms are going to have to be thinking about this regardless of how robust their procurement processes are for risk management of the system before they get it. And I'll just take the chance to second the points made by Andrew, and also pick up on David's point, that if we do get this right, we can advance equality with these systems, which is really promising.
KJ: Brilliant. And, David, what risks would you flag, particularly for [inaudible]? We ran a session for HR leaders the other day, and some of the stuff that was coming up was, you know, people putting internal company documents into ChatGPT, and therefore everything's just out on the internet. A lot of people also raised the issue that this is going to end up a bit like the regulations that came out during Covid: if you're a global organisation, is this going to be a whole patchwork of stuff to deal with? So is there any way to get ahead of that stuff?
DDS: Is there a way to get ahead of it? Yeah, it's to not storm in. And I mean that because I'm talking to people that still haven't used this technology, so they don't understand the implications of it. I think the best thing the profession can do is be actively curious rather than scared, and then, as it works its way through, think about the risks that are there. So there are obviously long-term risks to equality and quality of outcomes. There are long-term risks in terms of career development, because if people don't get the early-stage career exposure they might have had, what are the implications of that? Right through to some of the elements that we've picked out around wellbeing and isolation. So there's a range of things. But we will only be best placed to understand them if we're curious now. We are asking people to think about this technology and start using it and experimenting with it, not because it doesn't have risk, but because you won't understand that risk unless you have an understanding of the technology, how it fits together and its potential implications. So to be honest, to guard against risk, to get ahead of that risk, the only way you're going to do it is to understand what you're working with and the potential implications of it.
KJ: And to build on that, somebody's asking a question particularly around training courses, but if I make that a bit wider: Andy, how have you educated yourself and how are you educating your team?
AS: I think it's a little bit of David's point: being not afraid to ask our partners, not afraid to fire the stuff out and play with it, and see what it can do. Yes, there are rules, yes, there are policies, no, you shouldn't be uploading company documents into ChatGPT, but certainly people should be trialling it, and certainly our internal comms teams have used ChatGPT to do the drafts of things. They're certainly not in a state where you could just use them, they have to be edited, but they actually take an awful lot of the workload out at the starting point. And as I say, with our partners, we very much talk to them and ask, what is coming? Can we see it first and evaluate whether it's of use for us? But we actively encourage everyone in the function, and I think that goes for the wider business, to look at this, to get involved, and to try and come up with how they think they can use it. And then we can test it sensibly and responsibly.
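Andy's rule of thumb, trial the tools but keep company documents out of them, could in principle be backed by a simple technical check. The following is a hypothetical sketch only; the marker list and function name are assumptions, not Equiniti's actual controls.

```python
# Hypothetical guardrail (illustrative only; names and markers are assumptions):
# a pre-submission check that blocks text carrying confidential markings
# before it is pasted into any external generative AI tool.

CONFIDENTIAL_MARKERS = {"confidential", "internal only", "restricted"}

def safe_to_submit(text: str) -> bool:
    """Return True only if the text carries none of the confidential markers."""
    lowered = text.lower()
    return not any(marker in lowered for marker in CONFIDENTIAL_MARKERS)

draft = "First draft of the all-staff comms update."
if safe_to_submit(draft):
    print("OK to trial with an external AI tool.")
else:
    print("Blocked: redact or remove confidential content first.")
```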
KJ: Yeah, we've got five minutes, so [I'm guessing] I'll ask one more question, I'm going to put it to Abi I think, and then I'm going to ask you all for a key takeaway, so think of that now. Abi, I'm just going to summarise this question, hopefully I'll do it justice because it's quite long. We've talked a lot about generative AI in this conversation, how people are taking it into their own hands, how it's making stuff easier, but also the risks that come with that. How do we compare that with more full-scale automation, like machines doing physical labour? Is it the same kind of process of evaluation that you set out in your presentation, or do we think about it a little bit differently?
AG: Sure, well, trying not to repeat something that I've already mentioned, what is different about auditing of large language models relative to other types of technologies is that it's increasingly difficult to assess the kinds of impacts that they'll have independent of the context of their use. Which means that, again, as we talked about, there's this kind of distribution of accountabilities between design and deployment. That's not to say that there aren't responsibilities for the designer, but as these models become integrated within workplace systems and used in different types of application, the context of use becomes an increasingly important arena in which to think about risk. Which means that these architectures for governance, like impact assessments, will need to be based within organisations. And again, that is a potential role for the future of the people profession, until such time as we've worked all that out, which will probably take five to ten years as we get used to these tools. It's also the case that concepts like fairness and transparency are limited with large language models in a way that they weren't with other systems, because some of the aspects of their design and predictability are different. So it again takes us back to the earlier point about predicting them. And ultimately, we're probably at a point in time now where there's a bit of what might be described as an epistemic gap. We don't yet know what the potential consequences of these systems will be, which is why we need a dynamic system for governance that we can return to later.
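To make the idea of an organisation-based, dynamic impact assessment a little more tangible, here is a minimal sketch of what such a record might look like as a data structure. All field names and the 90-day review cycle are assumptions for illustration, not the IFOW's framework or any standard.

```python
# Hypothetical sketch of a deployment-context impact assessment record.
# Field names and the 90-day review cycle are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AIImpactAssessment:
    system_name: str
    context_of_use: str            # risk is assessed in the deployment context
    affected_groups: list[str]
    identified_risks: list[str]
    next_review: date = field(
        default_factory=lambda: date.today() + timedelta(days=90)
    )

    def due_for_review(self) -> bool:
        # Dynamic governance: the assessment is returned to, not filed away.
        return date.today() >= self.next_review

assessment = AIImpactAssessment(
    system_name="CV screening assistant",
    context_of_use="Shortlisting for entry-level roles",
    affected_groups=["applicants", "recruiters"],
    identified_risks=["bias in shortlisting", "limited explainability"],
)
print(assessment.due_for_review())  # False until the review date arrives
```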
KJ: Brilliant, thank you. So we are out of time there, a really fascinating and wide-ranging conversation. I'm just going to ask everybody for a key tip, or one thing you want people to remember from this conversation. I'll come to you first, David.
DDS: Yeah, I've been talking about this for decades, I've got stacks of [connection lost], but I'll go for: don't let the risk blind you to the opportunity, but also, don't let the opportunity blind you to the risks. We've got to hold those two things at the same time, and we can't not move forward with it. And we have to think long and short term whilst we're doing that. But the final point from me would obviously be, the CIPD will be with you every step of the way. As a profession, we've got to respond to it, and you are part of that, so play an active part. Get talking to each other, get [supporting] each other, get into the communities, and let's make sure that we're responding as collectively as we can, because it's a really important time of change.
KJ: Brilliant, Andy?
AS: AI in HR is only as good as the data that goes into the system, and if you want your people to trust you with their data, then you've got to trust them by being upfront and telling them what you're doing with these tools and how you intend to utilise them in your business.
KJ: Brilliant, and last but not least, Abi?
AG: I don't think I can beat what's been said in summarising things very quickly, I need ChatGPT to make me concise. I can't do it [inaudible] too much, I'm just going to bow out. I --
KJ: That's fine.
AG: Yeah.
KJ: Brilliant, thank you all so much, brilliant insight. I see quite a few people asking Andy to go back over some of the practical points; this has been recorded and will be available on the CIPD website, so you'll be able to watch there and take copious notes. But thank you so much Abi, David and Andy. A reminder there of some of the CIPD benefits, and as David says, please do remember we are there with you every step of the way. The recording and the slides will be available later on, and we've got plenty of other resources for you to access on this really important topic. So I hope that was useful, thank you so much for joining, thank you for your really thoughtful questions, and we'll see you later.