Rippling
Runlog AI

Payroll Investigation Pilot

Reduce evidence review in high-stakes payroll investigations with versioned case memos that preserve understanding across updates and maintain full auditability.

3 weeks
25–50 historical cases
Read-only (no integration required)
Target: ≥30% faster resolution
Target: ≥40% fewer items opened

Verification is the Bottleneck

Even as AI output improves, teams still spend time re-deriving facts and re-checking evidence, because mistakes are rare but costly when they slip through.

Payroll investigations (e.g., "employee did not get paid") are high-stakes and time-sensitive: missed details can result in compliance violations and employee impact.

Investigators open large portions of evidence (attachments, payroll records, policy docs) to bound risk, but review effort scales with volume.

When new evidence arrives mid-investigation, teams re-summarize and re-check from scratch, slowing resolution and increasing rework.

Current Bottleneck

Investigators perform manual evidence gathering across payroll systems, tickets, and records. Each update requires re-checking from scratch, and audit trails are fragmented.

Atlas Solution

Atlas maintains a versioned investigation memo with facts, evidence citations, contradictions, and the smallest must-verify evidence set. Investigators resolve faster with a complete, auditable record.

Security & Data Governance

This pilot is designed to align with a joint legal/security/procurement framework before touching critical workflows.

Pilot guardrails (default)

  • Read-only pilot on historical cases (no production actions)
  • No training on Rippling data; configurable retention/deletion
  • Role-based access; audit logs for every access and export
  • Option to run on a customer-approved LLM provider (e.g., Bedrock/Vertex/Azure OpenAI) depending on Rippling’s preference

We can tailor controls to Rippling’s internal AI pilot review process; the pilot does not require deep integration.

Pilot Overview

3 weeks to validate review reduction with guardrails

Atlas does not replace existing tooling. It ingests a case packet export and returns a versioned case memo your team can use in the same workflow.

Investigation Packet Input

Export investigation tickets + payroll records + attachments

Versioned Investigation Memo

Facts + evidence citations + contradictions + gaps + change log

Must-Verify Evidence Set

Smallest set of evidence to review for confident resolution
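To make the packet-in / memo-out exchange concrete, here is a minimal sketch of the two shapes involved. All field names are hypothetical; the real schema would be agreed during Week 0 alignment.

```python
from dataclasses import dataclass, field

@dataclass
class CasePacket:
    """Hypothetical investigation packet export (input to Atlas)."""
    case_id: str
    ticket_thread: list[str]               # exported ticket messages
    attachments: list[str]                 # file references from the export
    payroll_exports: list[str]             # relevant record exports
    policies: list[str] = field(default_factory=list)

@dataclass
class InvestigationMemo:
    """Hypothetical versioned memo (output of Atlas)."""
    case_id: str
    version: int
    facts: list[dict]                      # each fact carries its evidence citation
    contradictions: list[str]
    gaps: list[str]
    must_verify: list[str]                 # smallest evidence set to open
    change_log: list[str]

# Illustrative round trip for a single case:
packet = CasePacket(
    case_id="CASE-0001",
    ticket_thread=["Employee reports missing pay for the March cycle."],
    attachments=["paystub_march.pdf"],
    payroll_exports=["run_2024_03.csv"],
)
memo = InvestigationMemo(
    case_id=packet.case_id,
    version=1,
    facts=[{"claim": "March run excluded employee", "evidence": "run_2024_03.csv"}],
    contradictions=[],
    gaps=["No record on file confirming pay rate"],
    must_verify=["run_2024_03.csv", "paystub_march.pdf"],
    change_log=["v1: initial memo"],
)
print(memo.version, memo.must_verify)
```

The point of the sketch is the contract, not the fields: one packet in, one versioned memo out, with every fact citing the evidence item that supports it.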

  • Select a recurring case type (e.g., “employee didn’t get paid” investigations).
  • Provide 25–50 historical cases (ticket thread + attachments + relevant exports/policies).
  • Atlas generates a facts ledger, contradictions/gaps, must-verify checklist, and change log.
  • Reviewers use Atlas outputs to reduce how much they open while maintaining correctness.
  • We measure time-to-resolution and review coverage; if metrics don’t improve, we kill the pilot.

Metrics Tracked

Primary: review effort reduction with preserved correctness

Time-to-Resolution

Median time to reach a confident decision on a case.

Baseline

Current baseline (Rippling-owned)

Target

≥30% faster

Review Coverage

% of items opened (attachments, record exports, long threads).

“Item” is a unit Rippling already counts (attachment opened, record export viewed, etc.).

Baseline

Current baseline

Target

≥40% fewer items opened

Escalation Rate

Cases escalated due to uncertainty/rework after initial decision.

Baseline

Current baseline

Target

Meaningful reduction
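As a worked example of how the two numeric targets could be evaluated at the end of the pilot, the sketch below compares pilot runs against baseline. All numbers are illustrative placeholders; real baselines are Rippling-owned.

```python
def pct_reduction(baseline: float, pilot: float) -> float:
    """Fractional reduction relative to baseline (0.35 == 35% reduction)."""
    return (baseline - pilot) / baseline

# Hypothetical numbers for one batch of cases:
baseline_hours = 10.0   # median time-to-resolution before the pilot
pilot_hours = 6.5       # median time-to-resolution with Atlas memos
baseline_items = 40     # median items opened per case before the pilot
pilot_items = 22        # median items opened per case with must-verify sets

ttr_gain = pct_reduction(baseline_hours, pilot_hours)        # 0.35
coverage_gain = pct_reduction(baseline_items, pilot_items)   # 0.45

meets_ttr = ttr_gain >= 0.30         # target: >=30% faster
meets_coverage = coverage_gain >= 0.40  # target: >=40% fewer items opened
print(meets_ttr and meets_coverage)  # -> True on these illustrative numbers
```

On these made-up figures both targets clear; the escalation-rate metric would be tracked alongside but judged qualitatively ("meaningful reduction") rather than against a fixed threshold.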

Week-by-Week Structure

3 weeks to decision

Week 0

Setup + Definitions

  • Pick one case type and define “item opened” + “resolved” + “escalated”
  • Agree on pilot guardrails and data handling
  • Define export format for case packets (ticket + attachments + relevant exports)

Output: Signed-off pilot plan + data schema

Week 1

Baseline + First Run

  • Run Atlas on first batch of historical investigations
  • Generate versioned investigation memos + must-verify evidence sets
  • Compare review coverage and time-to-resolution vs baseline

Output: First-run metrics + investigation memo examples

Week 2

Iteration + Second Run

  • Incorporate reviewer feedback (what was missing / misleading)
  • Run on second batch or inject “case updates” to test change logs

Output: Delta metrics + updated artifacts
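The change-log behavior Week 2 is meant to exercise can be sketched as follows: a mid-case update bumps the memo version, records the delta, and narrows must-verify to evidence tied to changed facts, instead of forcing a from-scratch re-review. All structures and names here are hypothetical.

```python
def apply_update(memo: dict, new_evidence: str, revised_facts: list) -> dict:
    """Return a new memo version that logs only what changed."""
    version = memo["version"] + 1
    changed = [f for f in revised_facts if f not in memo["facts"]]
    return {
        **memo,
        "version": version,
        "facts": revised_facts,
        "change_log": memo["change_log"] + [
            f"v{version}: added {new_evidence}; {len(changed)} fact(s) changed"
        ],
        # Only evidence cited by changed facts needs re-verification:
        "must_verify": sorted({f["evidence"] for f in changed}),
    }

memo_v1 = {
    "version": 1,
    "facts": [{"claim": "run excluded employee", "evidence": "run.csv"}],
    "change_log": ["v1: initial memo"],
    "must_verify": ["run.csv"],
}
memo_v2 = apply_update(
    memo_v1,
    new_evidence="bank_trace.pdf",
    revised_facts=[
        {"claim": "run excluded employee", "evidence": "run.csv"},
        {"claim": "no deposit attempted", "evidence": "bank_trace.pdf"},
    ],
)
print(memo_v2["version"], memo_v2["must_verify"])
```

Injecting an update against a historical case and checking that reviewers only need to open the new must-verify set is the delta-metric test for this week.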

Week 3

Results + Go/No-Go

  • Compile report, identify best-fit teams, define next scope if positive

Output: Go / no-go decision + rollout plan

Cost & Engagement

3-week pilot investment

Includes

  • Pilot setup + investigation packet schema alignment
  • Versioned investigation memo + must-verify evidence set generation
  • Evidence review coverage + time-to-resolution instrumentation
  • Change log evaluation (investigation update injection or second batch)
  • Security/retention configuration + LLM provider options
  • Weekly review + iteration loop with investigation team
  • Final report + go/no-go recommendation + auditability assessment