At the end of last year, O'Reilly Media published a comprehensive report on the adoption and impact of generative AI within enterprises.
The headline of the report is that we’ve never seen a technology adopted in enterprise as fast as generative AI. As of November 2023, two-thirds (67%) of survey respondents reported that their companies are using generative AI.
However, the vast majority of AI adopters in enterprise are still in the early stages; they’re experimenting at the edges, rather than making larger-scale, strategic decisions on how to leverage AI to accelerate their progress towards organisational goals and visions.
The single biggest hurdle to AI adoption in large corporates is a lack of appropriate use cases.
Why is this? One of the key insights of the O’Reilly report is the light it sheds on the barriers to large-scale AI adoption in enterprises.
The report revealed that while much-discussed challenges linked to legal issues and a lack of AI policies have slowed down the adoption of AI in enterprises, the single biggest hurdle to AI adoption is a lack of appropriate use cases.
How Do Use Cases Help Drive AI Adoption?
Use cases help to enable adoption of AI by providing a clear, practical framework for testing, implementing and benefiting from generative AI technologies. They:
Focus Efforts: By identifying specific and practical applications, organisations can concentrate their resources on areas with the highest potential for impact, avoiding wasteful expenditure on low-value or infeasible initiatives.
Get Stakeholder Buy-in: Concrete use cases make it easier to demonstrate the value of AI initiatives to stakeholders, securing necessary support and funding.
Enable A/B Testing: A well-defined use case provides a roadmap for how to test the impact and value of AI interventions for the business.
Mitigate Risks: Testing specific applications of AI to measure their impact on business outcomes allows organisations to more effectively anticipate and mitigate the risks associated with AI deployment, while realising its benefits.
Why Are AI Use Cases Like Hens’ Teeth?
As you can imagine, identifying appropriate use cases for AI is particularly challenging for several reasons:
Complexity and Novelty: Generative AI technologies, while powerful, are relatively new and complex. Organisations often struggle to understand the capabilities and limitations of AI, making it difficult to identify areas where it can add tangible value without existential risk.
Alignment with Business Objectives: A useful use case must align with an organisation's strategic goals and address specific challenges or opportunities. Finding applications that both leverage the technology's strengths and significantly impact the business requires a deep understanding of both the technology and the business context. This alignment is crucial for securing the necessary investment and support for AI initiatives, and makes the role of the L&D team leader especially critical.
Risk Management: Generative AI applications can introduce a range of risks related to data privacy, security, bias, and ethical considerations. Identifying use cases that balance the potential benefits of AI with these risks requires careful consideration and expert guidance. Organisations must ensure that their use of AI is responsible and compliant with regulatory standards, adding another layer of complexity to the identification of suitable applications.
Technical and Operational Feasibility: Even if a use case aligns with business objectives and manages risks appropriately, it must also be technically and operationally feasible. This includes having or being able to develop the necessary data infrastructure, possessing the right skill sets among the team, and being able to integrate AI solutions into existing workflows and systems.
AI Use Cases for L&D Teams
Over the last few months, I have collected as many use cases of AI in corporate L&D as possible. Here are the three most common to get you started:
1. AI-Assisted Content Creation
Description: By far the most common use case of generative AI in the corporate L&D setting is the use of AI-powered content creation tools. Gen-AI tools like Grammarly, Synthesia and Gamma are being used by L&D teams to assist in creating high-quality content more quickly and consistently.
Hypothesis: The hypothesis of those who have invested in this sort of technology is that it can:
Save time and accelerate the training design & delivery process
Increase the consistency of learning content
Increase the quality of learning content & learner experience
A/B Testing Approach: In order to test the impact of AI-assisted content creation tools, you can set up a rapid A/B test as follows:
Control Group (A): Instructional designers rely solely on manual content creation methods.
Experimental Group (B): Instructional designers use AI tools for content creation tasks, such as drafting scripts or generating course outlines.
Metrics to Measure: a) Time saved in content creation, b) Accuracy and coherence of AI-generated content, c) Ease of integration of AI-generated content into course materials, and d) Perceived quality of AI-assisted content by learners.
Evaluation and Iteration: Assessing the results of the A/B test will determine if and by how much AI-assisted content creation improves efficiency and content quality in instructional design.
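If you capture the time each designer spends per module, the evaluation step can be a few lines of analysis. Here is a minimal sketch in Python using hypothetical numbers (not real data) and a standard Welch’s t-test for the group comparison — one reasonable choice of test, not the only one:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / na + vb / nb)
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical hours spent per module by each designer (illustrative only)
control = [12.5, 14.0, 11.0, 13.5, 12.0, 15.0]   # Group A: manual creation
experimental = [8.0, 9.5, 7.5, 10.0, 8.5, 9.0]   # Group B: AI-assisted

time_saved = statistics.mean(control) - statistics.mean(experimental)
pct_saved = 100 * time_saved / statistics.mean(control)
t = welch_t(control, experimental)

print(f"Mean time saved: {time_saved:.1f} h/module ({pct_saved:.0f}%), Welch t = {t:.2f}")
```

A t-value well above ~2 suggests the difference is unlikely to be noise, though with samples this small you would want to keep re-running the comparison as more data comes in — and to track the quality metrics alongside the time metric, so speed gains aren’t bought at the cost of learner experience.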
2. Automated Content Summaries
Description: Many L&D teams use AI-driven tools like SMMRY and QuillBot to quickly summarise complex and/or extensive materials, reducing the time spent on reading and extracting key points for course development.
Hypothesis: The hypothesis of those who have invested in this sort of technology is that it can:
Save time and accelerate the training design & delivery process
Increase the accuracy of content summarisation
Increase the effectiveness of summarised content in facilitating both L&D team understanding and employee learning
A/B Testing Approach: In order to test the impact of automated content summaries, you can set up a rapid A/B test as follows:
Control Group (A): Instructional designers manually extract key points from materials to create course content.
Experimental Group (B): Use AI-generated summaries as a basis for course development.
Metrics to Measure: a) Time saved in content summarisation, b) Accuracy of AI-generated summaries compared to manual extraction, c) Clarity and coherence of summarised content, and d) Effectiveness of summarised content in facilitating learning.
Evaluation and Iteration: Analysing the results of the A/B test will determine if and by how much AI-generated summaries accelerate course development without compromising content quality.
3. AI-Generated Analysis
Description: L&D teams are using AI tools like Typeform, Google Forms & SurveyMonkey Genius to assist in creating, gathering and analysing learner data.
Hypothesis: The hypothesis of those who have invested in this sort of technology is that it can:
Save time on the process of writing, circulating and analysing learner surveys
Improve completion rates of learner surveys
Increase the depth and quality of insights gathered for learning design decisions
Increase learner satisfaction with the survey experience
A/B Testing Approach: In order to test the impact of AI-generated learner surveys, you can set up a rapid A/B test as follows:
Control Group (A): Use traditional survey creation methods involving manual design and distribution.
Experimental Group (B): Implement AI-generated learner surveys tailored to individual learner characteristics.
Metrics to Measure: a) Completion rates of learner surveys, b) Depth and quality of insights gathered, c) Relevance of survey questions to course design, and d) Learner satisfaction with survey experience.
Evaluation and Iteration: Evaluation of the A/B test results will determine if and by how much AI-generated surveys improve speed and quality of needs analysis for course design.
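For a completion-rate metric, the control/experimental comparison is a difference between two proportions rather than two means. Here is a minimal sketch in Python using hypothetical counts (not real data) and a standard two-proportion z-test — again, one reasonable choice of significance test among several:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z-statistic for the difference between two independent proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)      # pooled completion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts (illustrative only): surveys completed vs sent
completed_a, sent_a = 130, 250   # Group A: traditional survey
completed_b, sent_b = 170, 250   # Group B: AI-generated survey

z = two_proportion_z(completed_a, sent_a, completed_b, sent_b)
print(f"Completion: A {completed_a/sent_a:.0%} vs B {completed_b/sent_b:.0%}, z = {z:.2f}")
```

A z-value above roughly 1.96 corresponds to significance at the 5% level; the depth-of-insight and satisfaction metrics are harder to automate and will usually need a rubric or a follow-up rating question instead.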
Conclusion
The rapid adoption of generative AI in enterprises highlights its potential to revolutionise how businesses operate, innovate, and compete. But despite the enthusiasm, the journey towards fully leveraging AI's capabilities is still in its infancy for many organisations.
Early experiments and testing demonstrate AI’s potential to streamline operations and improve outcomes, but a significant upturn in the exploration and sharing of use cases within corporate L&D will be critical to understanding if and how AI might impact the speed, quality and cost of our processes.
As companies navigate the complexities of AI adoption, focusing on clear, actionable use cases will be crucial for unlocking the transformative power of AI in the enterprise in general and L&D teams in particular.
Happy experimenting!
Phil 👋
PS: If you want to learn more about how AI is impacting how we design, deliver and evaluate learning design, check out my AI Learning Design Bootcamp and my monthly Learning Futures newsletter.