This case study details steps taken by a not-for-profit membership organisation to navigate towards a collective position on using generative AI
Supported by the Innovate UK BridgeAI programme, this case study took place as part of an action research project carried out by CIPD’s research partner, the Institute for the Future of Work (IFOW). The project sought to foster a shared understanding on how to use AI effectively and responsibly. This case study describes the steps taken by a not-for-profit membership organisation to navigate towards a shared organisational position on using generative AI.
This case study focuses on a medium-sized not-for-profit membership organisation, referred to here as MemberOrg. To support this research, MemberOrg formed a cross-functional working group of five people, mostly senior leaders with remits covering HR, legal, governance, IT and data.
MemberOrg is a values-driven organisation committed to transparency and co-creation. Since 2023, this ethos has been formalised through an employee consultation forum, ensuring employees from every department had a direct voice in senior leadership and board decisions. As MemberOrg’s HR director explained: ‘This [enables] … regular refinement on activity and objectives to ensure they are agile and manageable across teams.’ This collaborative culture has fostered high levels of employee autonomy and long-term retention. Employees within and outside the working group reported being encouraged to develop and take on different roles in MemberOrg.
This environment of high agency has sparked a ‘self-led’ approach to AI. A survey designed by IFOW revealed that two-thirds of the workforce were using general-purpose generative AI tools in their daily work, primarily for retrieving information. More advanced users were independently building custom agents to automate workflows, such as fee estimation and automated email redirection.
Despite this widespread grassroots uptake, MemberOrg lacked a formal AI governance framework. No tools had been officially integrated into organisational processes, and no paid licences had been provided. Other than a set of AI use guidelines published in 2023, there was no designated oversight to guide safe or strategic usage.
When the research began, MemberOrg’s AI strategy relied solely on a set of AI use guidelines, which functioned as a compliance checklist. They placed the full burden of responsibility on the individual to self-assess for accuracy, confidentiality and intellectual property risks.
This top-down approach created three hurdles for the organisation:
To address these challenges, the working group and IFOW hosted a half-day workshop in September 2025. Attended by half of the workforce – spanning all teams and seniority levels – the workshop aimed to reignite dialogue on generative AI that had stalled since an initial workshop and survey in 2024. The objectives of the workshop were three-fold:
The workshop surfaced a clear desire to automate mundane and repetitive tasks. However, employees were discerning about where AI added true value. Some uses were judged too low in value to justify the environmental cost of the energy they consume, while others were considered too high risk because of inherent AI bias and inaccuracy.
Employees also highlighted several significant people impacts of using AI:
Consequently, employees asked for ‘safe spaces’ to experiment as well as practical, organisation-wide guidance rather than simply being told to ‘go and use it’. An anonymous poll at the workshop revealed only 47% of participants referred to the AI use guidelines, confirming they weren’t fit for purpose. Employees criticised the guidelines as too abstract and subjective, noting they also lacked consideration for environmental sustainability – one of MemberOrg’s core values.
To move forward, the working group opted for a series of immediate, practical steps:
By moving from a reactive, top-down mandate intended to contain unregulated self-led usage to a co-creative process, MemberOrg has established a more stable middle ground. This approach allowed the organisation to surface opportunities and risks directly from employees, and it replaced broad restrictions with a structured, task-by-task assessment of a formally vetted tool.
The collaborative nature of this research has strengthened MemberOrg’s existing culture of co-creation and shared responsibility. By implementing practical, supportive steps, the organisation has generated significant internal momentum, addressing compliance anxiety felt by some employees and replacing it with a sense of collective ownership.
On culture alignment, the HR director noted: ‘We know autonomy is important … skill variety is important … people want to hear the authentic voice, not the voice of AI, all of these things have come out in a way where perhaps we hadn't consciously thought about it’. Reflecting on the momentum built, a working group member added: ‘I can see the voice of different people … coming through and the fact that we've co-created this … we're getting the momentum … we have got that enthusiasm within the team. People are keen to see what's going to happen next.’
MemberOrg has moved toward a problem-solving mindset by identifying where AI adds value rather than using ‘AI for AI’s sake’, as one working group member quipped. To ensure efficiency gains are mutually beneficial, the organisation would, in future, aim to:
As MemberOrg is in the early stages of integrating AI into its employees’ daily work, the working group has remained vigilant about the human cost of increased productivity. A key concern was that working faster with AI might fuel unsustainably growing expectations and eventual exhaustion.
One working group member cautioned about the risks of burnout that could arise from ‘having a tool that … may lead to … compressing time frames in your mind of how long a job or task might take … previously I might push a task into the next quarter. But now schedule it for next week … I’ve now also got an expectation on myself about how much more I can get done, and potentially this will detrimentally alter my view on how (fast) others, who have access to the same tools, can get (work) done’.