Shaping artificial intelligence for your future business needs
Hetan Shah of the British Academy discusses the opportunities and challenges that leaders should consider
Ironically, the impact on jobs – although highly uncertain – is the part that people professionals are probably already well placed to handle. They will be all too familiar with changes to staffing requirements caused by global shocks, new products and opportunities, or the behaviour of competitors. They should therefore be able to take the most talked-about aspect of AI – the so-called robo-apocalypse for jobs – in their stride.
There are, however, a host of other, less well-discussed, challenges that business leaders will need to think about in order to harness the potential that artificial intelligence has to make organisations more efficient and more effective.
Artificial intelligence (AI) is an umbrella term for a suite of technologies that perform tasks usually associated with human intelligence. The technology responsible for driving most current and recent advances within the field of AI is ‘machine learning’, which enables computer systems to perform specific tasks intelligently by learning from data rather than from pre-programmed rules. Machine learning is used across many areas of everyday life, such as in image recognition systems (like those used to tag photos on social media), voice recognition systems (like those used by virtual personal assistants), and recommender systems, such as those used by online retailers.
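To make the distinction between learning from data and following pre-programmed rules concrete, here is a minimal illustrative sketch in Python using the widely available scikit-learn library. The features, labels and numbers are invented purely for illustration and do not come from any real system.

```python
# A minimal sketch of the 'learning from data' idea behind machine learning,
# using scikit-learn. The data is invented purely for illustration: each row is
# a photo described by two simple features, and the label says whether it
# contains a cat.
from sklearn.tree import DecisionTreeClassifier

# Features: [ear_pointiness, whisker_count] -- hypothetical, hand-labelled examples
X = [[0.9, 24], [0.8, 22], [0.2, 0], [0.1, 2], [0.85, 20], [0.15, 1]]
y = ["cat", "cat", "not cat", "not cat", "cat", "not cat"]  # labels supplied by people

# Instead of someone writing explicit rules ("if whiskers > 10 then cat"),
# the model infers its own rules from the labelled examples.
model = DecisionTreeClassifier().fit(X, y)

# The trained model can then label new, unseen examples.
print(model.predict([[0.7, 18]]))  # a prediction, not a certainty
```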
Sensational films might make us anxious that AI will develop its own consciousness, but this remains the stuff of science fiction (although recently a Google engineer did worry their AI had become sentient). In coming years, we are likely to see AI advancements in processing human language, which could lead to improved chatbots and virtual assistants. It will also underpin other technologies, such as autonomous vehicles and the ‘metaverse’ – an emerging term for new collective cyber and virtual reality spaces.
One key area is around skills. A few high-end technology firms will need many AI programmers, but most organisations will be using AI systems developed elsewhere. Staff will not necessarily need to be able to program an algorithm, but they will need to be intelligent customers of AI technologies. As AI is largely powered by data, we will need to see a strengthening of data skills across the workforce. The education system has not yet caught up with the data and digital skills required, so it is likely that employers will need to remedy this directly, cascading basic data-handling, statistics and digital training across the workforce. Those organisations that are not already using their internal and customer data effectively will also need to invest in strengthening their data systems. Data ethics and governance skills will also be important: AI systems can be ‘black boxes’, and if they are making important decisions, staff will need to put in place and manage accountability and appeal procedures.
Skills will also be an issue at board level: do organisations have forward-looking board members who understand the possibilities and risks of new technologies? Such skills are typically found on the boards of technology companies, but are often in short supply in organisations that don’t think of themselves as technology businesses. Organisations will need to recruit specifically for these skills in future board members, and may need to look to a more diverse pool of candidates.
AI may automate some jobs entirely out of existence but, in many cases, AI decision-making tools work best when collaborating with, and augmenting, human capabilities. This means that workers will need to understand the strengths and limits of any AI system they are working with. The biggest danger will come to organisations that rely on systems they do not understand without enough human oversight.
A key issue for data-driven AI is that the algorithms ‘learn’ from the data they are fed (or ‘trained’ on). This makes them very powerful but also open to a variety of errors or biases – for example, algorithms in the US used to predict the likelihood of reoffending have been accused of racial bias in their recommendations.
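The sketch below illustrates this inheritance of bias using an entirely synthetic dataset: past decisions are assumed to have been skewed against one hypothetical group, and a standard model trained on those decisions reproduces the skew in its own recommendations. The groups, numbers and variable names are invented for illustration only.

```python
# A synthetic illustration of how an algorithm can inherit bias from its
# training data. Nothing here describes a real system: we simply assume that
# historical decisions penalised 'group_b' regardless of merit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 0 = group_a, 1 = group_b (hypothetical groups)
merit = rng.normal(0, 1, n)            # an unobserved 'true suitability' score

# Assumed historical skew: group_b received fewer positive decisions at equal merit.
past_decision = ((merit - 0.8 * group + rng.normal(0, 0.3, n)) > 0).astype(int)

# Train on group membership and a noisy proxy of merit, as a careless system might.
X = np.column_stack([group, merit + rng.normal(0, 0.5, n)])
model = LogisticRegression().fit(X, past_decision)

# The learned model now recommends positive outcomes at different rates per group,
# reproducing the historical skew rather than correcting it.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group_{'ab'[g]}: positive rate = {rate:.2f}")
```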
Algorithms may become brilliant, for example as diagnostic companions in the NHS that can spot the early signs of a disease better than a doctor. However, the doctor’s training and professional lens means they may see a wider context and take a higher-level perspective – for example, weighing the case for prescribing antibiotics against the risk of building antimicrobial resistance. So employees will need a new set of skills, both to complement the insights from AI and to know its limitations and when an AI recommendation ought to be overridden.
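One common oversight pattern is to act on a model’s recommendation only when it is confident, and to refer borderline cases to a person. The sketch below is a hypothetical illustration of that pattern; the synthetic data, the confidence threshold and the ‘refer to clinician’ wording are assumptions for the example, not a description of any real NHS system.

```python
# A minimal sketch of one human-oversight pattern: act on the model's
# recommendation only when it is confident, and route borderline cases to a
# person. The data and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                    # synthetic 'test results'
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def triage(case, threshold=0.85):
    """Return the model's suggestion, or defer to a human when it is unsure."""
    p = model.predict_proba(case.reshape(1, -1))[0, 1]
    if p >= threshold:
        return "flag for follow-up (model confident)"
    if p <= 1 - threshold:
        return "no flag (model confident)"
    return "refer to clinician: model not confident enough to decide"

print(triage(X[0]))
```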
AI will change the tools of people managers. AI-driven recruitment packages are already on the market. Salespeople will pitch products that promise ‘better’ and more efficient recruitment decisions, but the onus is on companies to get under the skin of what this really means. Many AI recruitment systems are trained on data about existing staff, and there is a real danger that they will simply replicate the biases of existing hiring strategies, only more efficiently. Amazon, for example, was criticised some years ago for developing a system that was allegedly biased against female candidates.
The question to ask of any AI recruitment system is: how does this help the organisation overcome some of the biases in its traditional recruitment processes and reach candidates it might have overlooked in the past? New ‘emotion recognition’ technologies promise employers the opportunity to assess prospective employees for certain characteristics, such as ‘trustworthiness’, during virtual job interviews. Such tools not only rest on spurious claims to accuracy but are broadly met with fear and concern by members of the public, and may deter potential applicants.
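Returning to the question of whether a recruitment tool overcomes or entrenches bias, one basic check a people team can run is to compare the tool’s selection rates across candidate groups, with the ‘four-fifths’ benchmark from US employment guidance as one common yardstick. The sketch below uses invented decisions purely to show the arithmetic; in practice the input would be the tool’s actual shortlisting outcomes.

```python
# A hedged sketch of a basic adverse-impact check on a shortlisting tool:
# compare selection rates across candidate groups against the 'four-fifths'
# benchmark. The decisions below are invented for illustration.
from collections import defaultdict

# (candidate_group, shortlisted?) -- hypothetical output from a screening tool
decisions = [
    ("women", True), ("women", False), ("women", True), ("women", False),
    ("men", True), ("men", True), ("men", True), ("men", False),
]

counts = defaultdict(lambda: [0, 0])            # group -> [shortlisted, total]
for group, shortlisted in decisions:
    counts[group][0] += int(shortlisted)
    counts[group][1] += 1

rates = {g: s / t for g, (s, t) in counts.items()}
best = max(rates.values())
for g, r in rates.items():
    ratio = r / best
    flag = "review for adverse impact" if ratio < 0.8 else "ok on this metric"
    print(f"{g}: selection rate {r:.2f}, ratio to highest {ratio:.2f} -> {flag}")
```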
Another way AI is changing the relationship with the workforce is through the increased ability to engage in monitoring and surveillance. This ranges from tracking keystrokes and monitoring the speed of task completion to cameras that alert managers if employees are not working when they should be. Although workplace monitoring and surveillance are not new, AI technologies have supercharged the ability to track workers, especially with increased remote or hybrid working. This has led the TUC to raise concerns about the growing use of such technology and its oversight.
We have yet to come to a societal consensus on what is an acceptable level of surveillance. Where workers are a core asset, business leaders need to be careful not to undermine productivity through breaching trust. It is important to be transparent about how organisations are using workers’ data, and to provide the ability to appeal decisions which may be made by algorithms. It is clear that the use of AI to monitor work and the associated use of employee data will be new frontiers in both workplace negotiations and regulation.
The impact of AI at an individual workplace level raises new challenges, but is manageable at an organisational level. In some ways, it is much harder to make sense of the impact at a wider societal level. The overall impact on jobs will matter for society. History – the Industrial Revolution, for example – suggests that jobs lost to new technology will ultimately be replaced by new roles, with society gaining from higher economic productivity.
However, two key questions remain: how long will the disruption take to pass, and will the productivity gains from AI be widely shared, or will they just accrue to a small group at the top of society? Employers and policy-makers can play a role in helping disrupted workers through an improved welfare system and retraining opportunities, and in ensuring the economic jam from AI is widely spread.
Ultimately, AI is a general-purpose technology and, as a society, we can shape that purpose through the choices that individuals, firms and policy-makers make. We can imagine an AI dystopia where there are fewer jobs, and those that do exist are highly monitored and very intensive, with algorithms that make arbitrary decisions which either can’t be appealed, or which reinforce existing discrimination in society. Nobody would choose that world, but we could end up there by accident.
My recommendation is that business leaders embrace the important role they have to play in using this technology to help create productivity gains that are shared across society. They can create ‘good work’ by freeing up workers to focus on the interesting and creative parts of their roles, and enable organisations to better access previously overlooked, diverse pools of talent. Let’s use AI to create happier, more diverse and more productive workplaces.
Hetan is Visiting Professor at the Policy Institute, King’s College London, and Vice Chair of the Ada Lovelace Institute, which works to ensure data and AI work for people and society. He is on the steering group of the Pissarides Review of the Future of Work and Wellbeing, led by Nobel prize-winning economist Professor Sir Christopher Pissarides. Hetan is on Twitter @HetanShah