How to Build an AI Recruiting Stack

By Samantha Cox


Most "AI recruiting tool" roundups read like vendor brochures. This one doesn't. What follows comes from a working breakfast with TA and HR leaders at companies ranging from 20-person robotics startups to established streaming platforms. The whole conversation kept circling back to one question. What does a modern recruiting stack look like when you strip out the tools you're overpaying for?

The short version is that a sourcing tool, an ATS with native AI, and a general-purpose AI assistant can handle what used to require a $12K/year LinkedIn Recruiter seat, a $10K scheduling add-on, and three browser tabs of enrichment tools. Here's how recruiters are actually building that stack.

Start with the sourcing layer

LinkedIn Recruiter is the tool everyone has and nobody loves. At roughly $12K per seat per year, it's the biggest line item in most recruiting budgets, and the value proposition has stagnated. You're paying for search and InMail. That's it.

The alternative gaining the most traction is Juicebox. It pulls from a massive candidate database, handles enrichment, and costs a fraction of a Recruiter seat. However, recruiters report a data lag of 60+ days, which means the profiles you're sourcing might already be off the market. LinkedIn is still the truth-check for whether someone is actually at the company their Juicebox profile says they're at.

Noon.ai is a newer one worth watching, particularly if you're plugged into VC talent networks. It's built for the startup recruiting use case and is gaining traction fast among early-stage talent teams.

For teams that evaluated more broadly, Go Perfect landed as "okay," and Jack and Jill (a UK-origin platform expanding to SF) got dinged for weak coverage of the Bay Area talent pool.

The takeaway here isn't that one tool replaces LinkedIn outright. It's that a $200/month sourcing tool plus a willingness to cross-reference can get teams under 500 people 80% of the way there.

Pick an ATS with AI built in

The ATS conversation kept coming back to one theme: native AI features versus paid add-ons. Two recruiters at the table were mid-migration from Greenhouse to Ashby, and their reasoning was almost identical.

Greenhouse locks its AI candidate ranking behind an add-on that runs roughly $10K per year. Fraud detection is another add-on. Ashby includes both natively, with AI-powered candidate ranking, built-in fraud signals (IP location, resume formatting anomalies, email creation date), and reporting that doesn't require a BI tool to interpret.

One recruiter shared that Greenhouse paid out the remainder of their contract to release them early, which says something about how the competitive dynamics have shifted.

Ashby's API and integration layer also stood out. For a team of around 20 people, it connects cleanly to the rest of a modern stack without requiring a dedicated ops person to maintain the plumbing.

The biggest migration risk has nothing to do with the tool itself. Hiring managers, referrers, and interviewers all need to learn a new system, and their tolerance for change is low. The recommendation that landed was to run a mandatory live training session, record it, and distribute an AI-generated summary. Don't assume anyone will read documentation.

If you're comparing ATS platforms, the usability ranking from recruiters who've used all three goes Lever (easiest to pick up), Ashby in the middle, and Greenhouse as the most cumbersome.

Use a general-purpose AI assistant as the connective layer

This is where the stack gets interesting. Instead of buying five point solutions for sourcing, outreach, scheduling, note-taking, and scorecards, a growing number of recruiting teams are using Claude or a similar AI assistant as the layer that ties everything together.

Here's what the workflow looks like in practice:

When a new role opens, the recruiter dumps the intake document, any relevant call recordings, and role context into a Claude project. This gives the AI enough context to understand the role deeply, including the nuances of what the hiring manager actually wants beyond what's in the job description.
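
To make that concrete, here's a minimal sketch of the intake step in Python. The file names are hypothetical, and approximating a Claude project by concatenating everything into one context block is an illustrative assumption, not a documented workflow:

```python
from pathlib import Path

# Hypothetical intake materials; in practice these come from your intake
# form, call recorder, and hiring manager notes.
SOURCES = [
    Path("intake_doc.md"),
    Path("hm_call_transcript.txt"),
    Path("role_context.md"),
]

def build_role_context() -> str:
    """Concatenate intake materials into one block, roughly what a
    Claude project holds for the role."""
    sections = []
    for path in SOURCES:
        if path.exists():
            sections.append(f"## {path.name}\n{path.read_text()}")
    return "\n\n".join(sections)
```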

From there, Claude fires up Juicebox (or your sourcing tool of choice), builds search filters based on the intake context, and drafts outreach cadences. The recruiter reviews and approves. Edge cases or unusual candidate profiles get flagged for human judgment.
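
The filter-drafting step might look something like the sketch below, using the Anthropic Python SDK. The JSON filter schema is invented for illustration, Juicebox's actual import format (if it exposes one) will differ, and the model name is just a current example:

```python
import json
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def draft_search_filters(role_context: str) -> dict:
    """Ask Claude to turn intake context into structured sourcing filters."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # swap in whatever model you use
        max_tokens=500,
        system="You are a technical sourcer. Reply with bare JSON only.",
        messages=[{
            "role": "user",
            "content": (
                "From this role context, draft sourcing filters as JSON with "
                'keys "titles", "skills", "locations", "min_years_experience":\n\n'
                + role_context
            ),
        }],
    )
    return json.loads(msg.content[0].text)  # assumes the model returned bare JSON
```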

On the inbox side, a Claude connector processes daily candidate replies, drafts responses, and routes candidates through the pipeline. Town (an email drafting tool) handles the assist layer for outreach that needs a more personal touch.
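
A rough sketch of that triage loop follows. Here, fetch_replies is a stand-in for whatever email integration you use, and the intent labels are made up for illustration:

```python
import json
import anthropic

client = anthropic.Anthropic()

def fetch_replies() -> list[dict]:
    """Stand-in for your email integration (Gmail API, Nylas, etc.)."""
    return [{"from": "candidate@example.com", "body": "Happy to chat next week."}]

def triage(reply: dict) -> dict:
    """Classify a candidate reply and draft a response for human review."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=400,
        system='Reply with bare JSON: {"intent": "interested|declined|question", "draft": "..."}',
        messages=[{"role": "user", "content": reply["body"]}],
    )
    return json.loads(msg.content[0].text)

for reply in fetch_replies():
    result = triage(reply)
    print(reply["from"], result["intent"])  # drafts go to the recruiter; never auto-send
```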

Automated workflows also push a morning brief to Slack every day with who you're meeting, what to ask them based on their background, and any flags from previous touchpoints.
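
The morning brief is a natural fit for a Slack incoming webhook. A minimal version, with a hypothetical webhook URL and hardcoded meeting data:

```python
import requests  # pip install requests

# Hypothetical incoming-webhook URL; create one in your Slack app settings.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def post_morning_brief(meetings: list[dict]) -> None:
    """Push the day's interview brief to a Slack channel."""
    lines = ["*Today's screens:*"]
    for m in meetings:
        lines.append(f"- {m['time']} {m['candidate']}: ask about {m['focus']}")
    requests.post(WEBHOOK_URL, json={"text": "\n".join(lines)}, timeout=10)

post_morning_brief([
    {"time": "9:30", "candidate": "Jane Doe", "focus": "the gap between her last two roles"},
])
```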

After every screen, the call recording gets fed into AI, which generates a scorecard and routes it to both the ATS and a dedicated Slack channel so the hiring manager sees it without the recruiter having to write it up manually.
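
A sketch of the scorecard step, under the same caveats: the rubric fields are invented, and the ATS push is a stub because it depends entirely on your ATS's API (the Slack post can reuse the webhook pattern above):

```python
import json
import anthropic

client = anthropic.Anthropic()

RUBRIC = '{"strengths": [], "concerns": [], "recommendation": "advance|hold|reject"}'

def scorecard_from_transcript(transcript: str) -> dict:
    """Turn a screen transcript into a structured scorecard."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=800,
        system=f"You summarize recruiter screens. Reply with bare JSON only: {RUBRIC}",
        messages=[{"role": "user", "content": transcript}],
    )
    return json.loads(msg.content[0].text)

def push_to_ats(card: dict) -> None:
    """Stub: the real call depends on your ATS's API (Ashby exposes one)."""
    ...
```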

The result is that a recruiter running eight screens in a day spends roughly four hours on active work. The rest is handled by the stack. The recruiter only intervenes at high-judgment moments, which is where their expertise actually matters.

For teams not yet using this kind of workflow, the entry point is lower than you'd think. A sourcing tool seat, an Ashby subscription, and Claude at roughly $100/month total can get a small team running a version of this stack without a major procurement process.

Fraud detection still needs human creativity

AI can flag suspicious applications. Ashby catches metadata-level signals like mismatched IP locations, unusual resume formatting, and recently created email addresses. But no tool discussed at the table has solved fraud detection at the live screening stage.
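
None of the vendors' scoring logic was shared, but a metadata-level check reduces to something like the crude heuristic below. The signal fields and thresholds are invented for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApplicationSignals:
    """Pre-gathered signals, mirroring the metadata checks above."""
    ip_country: str
    claimed_country: str
    resume_producer: str    # e.g. the PDF "Producer" metadata field
    email_first_seen: date  # from an enrichment provider

def fraud_score(s: ApplicationSignals) -> int:
    """Crude additive score: higher means more worth a manual look."""
    score = 0
    if s.ip_country != s.claimed_country:
        score += 2
    if s.resume_producer.lower() in {"headlesschrome", "wkhtmltopdf"}:
        score += 1  # machine-generated PDFs are a weak, not damning, signal
    if (date.today() - s.email_first_seen).days < 30:
        score += 2
    return score
```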

Recruiters are filling that gap with manual tactics that are surprisingly effective. One approach is to intentionally misstate a candidate detail during the screen, like mentioning the wrong city, and watch how they respond. A real candidate corrects you immediately. A fraudulent one either agrees or hesitates.

Other tactics that are working include front-loading application questions that require candidates to confirm they've read the job description (which filters out spray-and-apply bots) and recording all screening calls so inconsistencies can be documented and reviewed.

For customer-facing roles like account executives, the sensitivity around AI-first screening is even higher. One recruiter trialed RightHire, an AI phone screener, for an SDR role. Out of six or seven candidates routed through it, only one completed the screen. The rest dropped off, unwilling to "talk to AI" as a first interaction with a potential employer. The lesson there is pretty clear: use AI to prepare for and process screens, not to replace the human conversation itself.

Know where the legal lines are

California and Florida both restrict the use of AI in final hiring decisions. You can't let an algorithm make the call on who gets hired. But you can use AI to generate shortlists, rank candidates, draft scorecards, and flag risks, as long as a human makes the final decision.

This distinction matters for how you architect your workflow. AI handles volume and prep work. Humans handle judgment calls. That's not just a legal requirement. It's better recruiting.
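
One way to encode that split is to make AI output advisory by construction, so a status change is impossible without a recorded human call. A sketch with hypothetical types:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    candidate_id: str
    ai_rank: int                       # AI may rank, flag, and annotate...
    human_decision: str | None = None  # ...but only a human can decide

def advance(d: Decision) -> str:
    """Gate: pipeline moves require an explicit, recorded human decision."""
    if d.human_decision is None:
        raise ValueError("No human decision recorded; AI output is advisory only.")
    return d.human_decision
```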

The conference sourcing use case

One unexpected application that came up was using AI to process conference attendee lists. A robotics company was preparing for CVPR and needed to identify which attendees were commercial engineers versus academics. Previously, this took five people two weeks. Feeding the attendee list into Claude and asking it to classify based on publication history and employer data compressed that into hours.
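
A sketch of that classification pass. The CSV schema and the one-word-answer prompt are assumptions; in practice you'd batch the calls and spot-check the output:

```python
import csv
import anthropic

client = anthropic.Anthropic()

def classify(name: str, affiliation: str) -> str:
    """Label one attendee as 'commercial' or 'academic' from public signals."""
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=10,
        system="Answer with exactly one word: commercial or academic.",
        messages=[{"role": "user", "content": f"{name}, {affiliation}"}],
    )
    return msg.content[0].text.strip().lower()

# Hypothetical CSV schema: name,affiliation
with open("cvpr_attendees.csv") as f:
    for row in csv.DictReader(f):
        print(row["name"], classify(row["name"], row["affiliation"]))
```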

If your team sources from conference circuits, this alone might justify building out the AI layer of your stack.

Back-channeling and references

The table also surfaced a few reference-checking tactics worth adding to your process.

The single best reference question anyone shared was to ask for "handling tips." It sounds supportive and innocuous, but it consistently surfaces personality dynamics, management needs, and potential friction points that a standard "would you work with them again" question never will.

For sales hires specifically, ask the reference directly whether the candidate hit quota in year one and year two. Don't let them dance around it. And pay attention to tenure. Under two years in an enterprise sales role usually means the candidate ramped, had one bad year, and moved on before the numbers caught up.

Juicebox is also developing a network feature that identifies second-degree connections for faster back-channeling. If that ships as promised, it could cut the time it takes to find a backdoor reference from days to minutes.

What this means for your budget

A traditional recruiting stack for a single recruiter might include LinkedIn Recruiter at $12K/year, Greenhouse with AI add-ons at $10K+/year, a scheduling tool at $3-5K/year, plus enrichment and outreach tools on top. You're easily past $30K per recruiter per year before salary.

The alternative stack runs a sourcing tool like Juicebox (a fraction of a LinkedIn Recruiter seat), Ashby with native AI features included, and Claude as the connective layer. For teams under 500 people, you're looking at roughly $100-200/month per recruiter for tools that cover sourcing, screening, scheduling prep, outreach, and scorecard generation.
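
Using only the numbers above, the per-recruiter arithmetic is stark:

```python
# Traditional stack, per recruiter per year (scheduling at the $3-5K midpoint;
# enrichment and outreach tools push the real total "easily past $30K").
traditional = 12_000 + 10_000 + 4_000

# Alternative stack at the midpoint of the $100-200/month range for tools.
alternative = 150 * 12

print(traditional, alternative)  # 26000 vs 1800, before the extras
```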

The savings don't come from cutting corners. They come from collapsing five point solutions into a general-purpose AI that adapts to your workflow instead of forcing you into someone else's.

This post is based on conversations from AI & OJ, a monthly breakfast series for TA and HR leaders hosted by Fonzi in SF and NYC. If your company is hiring engineers, we'd love to help.