
Navigating An AI Software Pilot Program

by
Mindy Honcoop
on October 25, 2023

Software pilot programs have traditionally focused on whether the software can do the job. AI, however, introduces new challenges, including ethical considerations and data biases. The question now is whether AI should do the job in the way it was designed to. And because AI is data-driven, ensuring data quality, control, and accuracy is vital to the pilot's success.

Specifically, I will walk you through the AI software use case of automated knowledge management to support answering employees’ burning questions.

Setting Up an Internal AI Task Force 

AI has the potential to bring enormous productivity gains to organizations, but the technology is not without its risks and issues. It's therefore worth establishing a cross-functional task force, with representatives from IT, security, HR, legal, risk management, compliance, and other functions, to explore use cases for the technology inside your organization; to evaluate the pros, cons, and security of different generative AI tools; and to source training for employees.

In our employee support use case, the AI task force reviews the organization's HR employee life-cycle processes. The data collected through the employee journey audit shows opportunities for improvement (such as leave management, tier 0-1 employee support, state policy compliance, collection, onboarding, and coaching). The task force can now discuss actionable ways these processes could benefit from AI. Imagine the task force prioritized improving tier 0-1 employee support: it researches and identifies AI intelligent assistant solutions. To ensure a solution meets the business needs, especially compliance with AI policies, a low-risk next step is conducting a supervised pilot.

What Is a Software Pilot?

Before we dive into the intricacies of running a successful AI pilot program, let's start with the basics. What exactly is a pilot? A pilot is like a test flight for a new idea or technology: a focused, supervised, time-bound experiment to address a specific challenge. The task force has validated the concept and now needs to confirm the solution in the real world. During a pilot, select a specific control group, identify a problem to solve, and confirm how well the technology solution works. Think of it as a mini-launch. A successful pilot provides valuable insights, helps refine strategies, and sets the stage for broader implementation.

When To Consider an AI Software Pilot? 

Given the novelty of AI, consider conducting an AI software pilot to collect data under specific conditions and to strengthen the business ROI, especially for solutions that impact the workplace and employee experience. A pilot addresses a persistent, recurring issue that requires ongoing human attention, and it helps validate compliance with company policies and procedures around AI, security, and data privacy.

Let's return to our employee support use case: the task force noticed in the data a higher rate of tier 0-1 employee issues and dissatisfaction during onboarding. These issues are consistent and require time from IT and HR team members to resolve.

Scenario Description: 

The ACME company is about to finalize a large merger and acquisition and needs to onboard a large group of employees in Canada. The task force drafts a hypothesis that an AI assistant solution would reduce team member time and increase employee satisfaction and support during onboarding. Now it's time to collect data and strengthen the business case metrics showing that an AI assistant would indeed increase operational efficiencies and employee sentiment.

Pilot Prerequisite Checklist  

As the task force stands ready to embark on the AI software pilot, there are essential prerequisites that must be established. This checklist serves as your guide through every phase of the pilot, from setup and monitoring to its successful completion. Think of it as a trusted chart, thoughtfully outlining the steps and considerations that will steer your AI pilot toward success.  

Executive sponsor ✅

Find a sponsor with the power to make decisions, who knows the business goals, and who is motivated to find solutions. They should be high enough in the company's hierarchy to access resources and highly committed to addressing the problem. The sponsor understands the pilot program's goals and provides consultation. In our employee support use case, an executive tied to the success of the merger might be the best selection.

Project team ✅

The task force is responsible for identifying the right project manager and cross-functional team members. The project manager works with the various stakeholders to create a plan that works for all parties involved. Key stakeholders in our example could include employees, managers, IT, HRBPs, facilities, payroll, benefits, procurement, and legal. The project manager will also partner closely with the vendor's customer success manager throughout the pilot. The team assesses the current situation, identifies potential risks, and develops strategies to address them. The project manager should provide ongoing feedback to the executive sponsor and stakeholders and address any issues promptly.

Control group ✅

Choose the group experiencing the most significant challenges, such as employees with high stress levels or those struggling to meet their productivity targets. Carefully select a group that is representative of the broader workforce, so the results of the pilot can be generalized beyond the control group. In this example, we will use the group in Canada, which is representative of the remaining workforce currently located in the US.
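As a purely illustrative sketch of the "representative group" idea (the attribute names, values, and 10% tolerance below are hypothetical, not from any specific tool), a project team could sanity-check a proposed control group against the broader workforce like this:

```python
# Illustrative sketch: checking that a pilot control group roughly mirrors
# the broader workforce on a few attributes. Attribute names, values, and
# the tolerance are hypothetical examples.

def share(people: list[dict], attr: str, value: str) -> float:
    """Fraction of `people` whose `attr` equals `value`."""
    return sum(1 for p in people if p.get(attr) == value) / len(people)

def is_representative(control: list[dict], workforce: list[dict],
                      checks: list[tuple[str, str]],
                      tolerance: float = 0.10) -> bool:
    """True if every checked attribute share in the control group is
    within `tolerance` of the workforce-wide share."""
    return all(
        abs(share(control, a, v) - share(workforce, a, v)) <= tolerance
        for a, v in checks
    )

workforce = [{"role": "engineer"}] * 60 + [{"role": "support"}] * 40
control = [{"role": "engineer"}] * 6 + [{"role": "support"}] * 4

print(is_representative(control, workforce, [("role", "engineer")]))  # → True
```

In practice a team would check several attributes at once (role, tenure, location) and treat a failed check as a prompt to rebalance the group, not as a hard gate.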

Pre-mortem ✅

Hold a pre-mortem to anticipate potential issues before the pilot begins. Identify roadblocks that could prevent the successful completion of your pilot project, such as resource or time constraints, unclear expectations or success criteria, or stakeholders with competing interests.

Scope ✅

Set clear boundaries for your pilot project. Keep the scope limited to a few pressing problems and set a timeline for completion. Choose issues that are both relevant and significant for the larger organization. Once the scope is determined, set a timeline for the pilot. Factor in how long it will take to design and implement the pilot so you can measure success, and allow enough time to measure and learn from the impact so you can refine and improve the approach. Be realistic about the timeline, and don't rush the process. In our example, the pilot will focus on tier 0-1 onboarding, IT, benefits, payroll, and HR questions for four months in Canada.

Goals & objectives ✅

Set clear, realistic, measurable, and achievable goals for the pilot, aligned with the overall business objectives of the company. In our employee AI assistant example, goals could include an increased understanding of employee digital workplace interactions, accurate answers, an improved ability to review and edit FAQs, better access to knowledge documents, and reduced HR and IT time spent answering tier 0-1 employee onboarding support questions.

Metrics of success ✅

Consider metrics relevant to user experience, accuracy, and time savings. They should be realistic and achievable within the timeframe of the trial. AI employee assistant metrics could include reducing customer service tickets by a certain percentage, increasing employee satisfaction ratings, improving employee usage, or improving answer accuracy by a certain amount. Everyone should agree on these metrics in advance. Once the metrics are agreed upon, the company should create a timeline for achieving them.
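To make the "agreed in advance" idea concrete, here is a minimal sketch of how a project team might score pilot results against pre-agreed targets. All metric names, baseline numbers, and target percentages are hypothetical examples, not figures from the article:

```python
# Illustrative sketch: comparing baseline vs. pilot-period support metrics
# against targets agreed before the pilot. All values are hypothetical.

def pct_change(baseline: float, pilot: float) -> float:
    """Percentage change from the baseline period to the pilot period."""
    return (pilot - baseline) / baseline * 100

def evaluate_pilot_metrics(baseline: dict, pilot: dict, targets: dict) -> dict:
    """Return each metric's percentage change and whether it met its target."""
    results = {}
    for metric, target_pct in targets.items():
        change = pct_change(baseline[metric], pilot[metric])
        # A negative target (e.g. -20 for tickets) means "should decrease".
        met = change <= target_pct if target_pct < 0 else change >= target_pct
        results[metric] = {"change_pct": round(change, 1), "met_target": met}
    return results

baseline = {"tickets": 500, "satisfaction": 3.4, "answer_accuracy": 0.72}
pilot = {"tickets": 380, "satisfaction": 3.9, "answer_accuracy": 0.85}
targets = {"tickets": -20, "satisfaction": 10, "answer_accuracy": 15}

print(evaluate_pilot_metrics(baseline, pilot, targets))
```

The point of the sketch is the shape of the agreement, not the math: each metric carries an explicit direction and threshold that everyone signed off on before the pilot started.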

Project plan ✅

User adoption and training take center stage: a robust plan places employees at the center. This helps employees not only understand the new tool but also embrace it. The plan includes all the elements of a regular project launch but in a microformat. The timeline should also estimate the time each milestone should take to complete, thus informing the full launch timeline. The project plan may have the following milestones: Kick-off, setup, testing, training, launch, monitoring, measuring, refining, and project conclusion. Create communication, feedback channels, training, and documentation activity steps to support the pilot plan. Establish regular status updates with the Executive Sponsor to review the progress of the pilot plan. Recognize achievements and identify areas for refinement.  

Data Quality ✅

[Quote image from Jenny Cotie Kangas on the impact AI will have on running an effective AI software pilot.]

Data must be accurate, valid, and current to implement AI. It should include checks for completeness, accuracy, and relevance. Companies should also consider the potential biases that may be present in the dataset and take steps to mitigate such biases. Data privacy and security standards should be set to protect sensitive data. 
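As one hedged illustration of what "checks for completeness, accuracy, and relevance" might look like in practice (the field names, one-year freshness threshold, and record shape below are invented for the example), a team could screen knowledge-base articles before an AI assistant ingests them:

```python
# Illustrative sketch of basic data-quality checks on knowledge-base
# articles before feeding them to an AI assistant. Field names and the
# freshness threshold are hypothetical examples.
from datetime import date

REQUIRED_FIELDS = ("title", "body", "owner", "last_reviewed")
MAX_AGE_DAYS = 365  # example freshness threshold: one year

def check_article(article: dict, today: date) -> list[str]:
    """Return a list of data-quality issues found in one article."""
    issues = []
    # Completeness: every required field must be present and non-empty.
    for field in REQUIRED_FIELDS:
        if not article.get(field):
            issues.append(f"missing field: {field}")
    # Currency: flag articles that have not been reviewed recently.
    reviewed = article.get("last_reviewed")
    if reviewed and (today - reviewed).days > MAX_AGE_DAYS:
        issues.append("stale: last reviewed over a year ago")
    return issues

article = {"title": "PTO policy", "body": "", "owner": "HR",
           "last_reviewed": date(2021, 1, 15)}
print(check_article(article, date(2023, 10, 25)))
```

Bias mitigation is harder to automate than completeness or freshness; checks like these are a floor, not a substitute for the human review the task force performs.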

Vendor Support During Pilot 

When selecting an AI software vendor, focus on the key areas and themes that will shape the success of the partnership. The following checklist provides an overview of the questions and considerations leaders should address during the selection process.

Technical Expertise: Delve into the technical capabilities, external audits, training methods, and potential limitations of the AI solution.
Data Handling and Privacy: Inquire about data security, privacy compliance, and data management practices.
Scope and Scalability: Understand the technical scope and scalability options to align with the requirements and objectives of the pilot.
Accuracy and Performance: Assess the expected accuracy and performance, particularly in addressing the problems you are seeking to resolve in the pilot.
Monitoring and Analytics: Explore tools for monitoring and measuring AI performance and the availability of data analytics support during the pilot. Confirm the ability to pull an audit trail of specific questions asked and the responses provided, with a date and time stamp.
Training and User Adoption: Investigate the AI training process and resources for user adoption among HR teams and employees for the pilot.
Customization and Integration: Determine the flexibility of customization and integration capabilities with existing workplace systems.
Cost and ROI: Understand the pricing structure, potential hidden costs, and projected return on investment.
Support and Maintenance: Evaluate available support and maintenance services, including the process for issue resolution during the pilot. Understand the customer success role and responsibilities supporting you during the pilot project plan.
Future Readiness: Assess the AI solution's adaptability to changing needs and industry advancements.
Client Readiness: Seek references and customer stories from organizations where the AI solution has been successfully deployed in use-case-specific contexts.
Ethical Considerations: Address ethical concerns, including bias mitigation and ethical operation.
Exit Strategy: Understand the plan for discontinuing or transitioning away from the AI solution, if necessary.

Software Pilot Retrospective: Guiding the Way to Full Launch

After completing the AI software pilot, it's time to turn your attention to the next steps. The project manager, in collaboration with the executive sponsor, plays a crucial role in tracking progress and making refinements as necessary. A post-mortem analysis of the pilot's data becomes the defining moment: it confirms whether the solution meets the pilot's goals and objectives. Did the AI reliably fulfill its intended needs? Do we move forward with a full-scale purchase? If the AI software pilot works as expected and meets or exceeds the success measures, go ahead with a larger rollout. This is the juncture where you leverage these metrics to secure funding and garner stakeholder buy-in.

There might be instances where the metrics leave room for uncertainty, represented by a cautionary yellow signal. If this happens, consider making refinements: extend the pilot's duration, or broaden the rollout to an additional group to gather more data for conclusive validation. If the metrics show a clear discrepancy between expectations and reality, represented by a red flag, it's time to halt the pilot. It's crucial to recognize when a solution isn't aligning with your goals.
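The green/yellow/red decision described above can be sketched as a simple rule over the agreed success metrics. The 80% and 50% thresholds here are hypothetical; a real team would use whatever cut-offs it agreed on before the pilot began:

```python
# Illustrative sketch of the green/yellow/red retrospective signal.
# The 0.8 and 0.5 thresholds are hypothetical examples, not a standard.

def pilot_signal(metrics_met: int, metrics_total: int) -> str:
    """Map the share of success metrics achieved to a traffic-light signal."""
    share = metrics_met / metrics_total
    if share >= 0.8:   # clear success: proceed to the larger rollout
        return "green"
    if share >= 0.5:   # inconclusive: refine, extend, or broaden the pilot
        return "yellow"
    return "red"       # clear miss: halt the pilot

print(pilot_signal(5, 6))  # → green
print(pilot_signal(3, 6))  # → yellow
print(pilot_signal(1, 6))  # → red
```

Encoding the decision rule in advance, even informally, keeps the retrospective honest: the signal is computed from the metrics everyone agreed on, not negotiated after the fact.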