A Practical Path to AI Adoption

By Chris Loope
Staffing firms don’t need to go all-in on AI to get results—they need to get intentional. A crawl, walk, run approach lets you unlock immediate productivity gains while building the governance needed for higher-stakes decisions.

In my last article for Staffing Success (“Responsible Recruiting in the Age of AI” in the July–August 2025 issue), I explored the risks of artificial intelligence in hiring: bias, transparency gaps, legal exposure, and the need for governance. The response made one thing clear: Staffing professionals understand the risks; what they want now is a practical starting point.

The answer isn’t to adopt everything at once, but rather to match your AI ambition to your governance readiness. A crawl, walk, run framework helps firms start generating value today while building oversight for higher-stakes applications. This isn’t a rigid maturity model; it’s a governance scaffold that matches oversight to risk, with room to iterate at every stage.

The Crawl, Walk, Run Framework

The core principle is simple: As AI moves closer to decisions that affect people’s careers, governance must expand proportionally.

Crawl: Productivity. Most firms start here, and it’s usually the smartest entry point. Crawl-stage tools handle tasks that consume recruiter hours without touching candidate decisions. Sourcing, outreach drafting, job description writing, and customer relationship management data cleanup all fall here.

LinkedIn’s 2025 Future of Recruiting report found that teams integrating generative AI saved roughly 20% of their work week. The risk profile is low, and the return is immediate.

Governance at this stage doesn’t require a formal program. Form an AI team that includes your early adopters and at least one skeptic, set basic usage policies and data handling expectations, and build a habit of sharing what works. That experimentation culture becomes the foundation for everything that follows.

Walk: Workflow-Integrated. Here, AI moves from supporting recruiters to influencing how candidates are evaluated. Matching, ranking, market mapping, compensation benchmarking, and automated screening all live at this tier. These tools create real efficiency but also introduce risk that scales with every search.

As I evaluated AI tools in a prior role, I kept encountering scoring rubrics with no explanation of how the model arrived at its rankings. My test was straightforward: What would we tell candidate number six on a top five list about why they didn’t make the cut? Many vendors had no answer. Governance at this tier means vetting vendors for that kind of explainability, conducting periodic bias checks, and requiring that your team can defend a recommendation to a client or a candidate. As Ben Eubanks, chief research officer at Lighthouse Research and Advisory, put it: If a vendor can’t tell you how their AI works, you can’t use them.

Run: Strategic and Decision-Layer. At the most advanced tier, AI contributes to consequential decisions. Predictive candidate success scoring, AI-driven client intelligence, and assessment integration all carry significant value, but also significant exposure.

The Eightfold AI class-action lawsuit filed in January 2026 alleges that AI-generated candidate scores constitute consumer reports under the Fair Credit Reporting Act. The claim is that candidates were scored without consent, disclosure, or the ability to dispute results.

The Consumer Financial Protection Bureau’s withdrawal of its 2024 algorithmic scoring guidance complicates the regulatory picture, but the underlying statutory theory remains live.

Meanwhile, the Colorado AI Act is set to take effect June 30, 2026, assuming no further amendments in the current legislative session. The law requires impact assessments, transparency notices, and documented risk management, with penalties of up to $20,000 per violation. At this tier, treat governance as mandatory: full audit trails, human oversight protocols, legal review, and incident response plans.

Governance Scales With Risk

The AI playbook I outlined in my first article (read it in the July–August 2025 issue at americanstaffing.net/staffing-success) remains the governance backbone: purpose, principles, risk assessment, validation, human oversight, candidate communication, monitoring, and incident response. The crawl, walk, run framework maps directly onto it. At crawl, you activate a subset. By run, you need it fully operational. A common trap is all-or-nothing governance. Proportional governance gives you a third option: Start where you are and build as you go.

Start With Intention

According to StaffingHub’s 2025 State of Staffing report, AI adoption reached 61%—up from 48% the year before. Findings from Bullhorn’s Global Recruitment Insights and Data report show that firms using AI are twice as likely to have grown revenue. Yet StaffingHub also found that 32% of users report no measurable impact. That gap often reflects a narrow view of what “impact” means. A tool that saves a recruiter 5% to 10% of their day may not feel transformational, but it frees capacity for higher-value work. The difference is rarely the tool; it’s leadership engagement, process alignment, and celebrating quick wins.

Employees are adopting AI rapidly, with or without a plan. The firms that will lead aren’t the fastest adopters—they’re the ones building governance and education at the same pace as adoption. That means choosing tools you can explain, setting clear boundaries for where AI informs versus decides, and reviewing outcomes often enough to catch problems early. Not every tool on the market was built for hiring’s level of scrutiny, and it’s on buyers to ask the right questions. Start where the value is clear, build oversight as you go, and keep the focus where it belongs: on the people whose careers these tools affect.


Chris Loope is chief strategy officer at BGSF and also founder of Pedagogue Systems. He is a technology strategist and innovation leader with more than 20 years of experience in the staffing industry. As a member of the ASA technology taskforce, Loope is actively shaping AI governance policies and advocating for transparent, ethical AI adoption in workforce management.

Originally Published In

Staffing Success Magazine - March-April 2026