AIImpact.fyi Methodology

This page explains how AIImpact.fyi tracks AI-related layoffs, automation impact, and workforce risk. Our goal is simple: be transparent, consistent, and updatable as better evidence becomes available.

Methodology Principles

AIImpact.fyi is designed as a public evidence tracker, not a rumor feed. Every event should be backed by at least one source, classified under explicit rules, and counted with a consistent method for estimating jobs affected. We prioritize traceability over speculation.

  • Use explicit inclusion rules for AI layoffs and automation impact.
  • Store event-level and role-level impact where possible.
  • Keep historical entries, including updates and corrections.
  • Expose confidence levels so users can judge data quality.

What Counts as AI Layoffs

An event is classified as an AI layoff when available evidence indicates that AI adoption, AI-enabled productivity, or AI-led restructuring was a direct or major contributing factor in a workforce reduction.

Included as AI layoffs

  • Official announcements linking headcount reductions to AI programs.
  • Earnings calls or executive statements citing AI-driven efficiency gains before cuts.
  • Operational changes where AI systems replace previously human-owned workflows.

Not included as AI layoffs

  • Layoffs with no stated or evidenced AI relationship.
  • General macroeconomic cuts without AI-specific rationale.
  • Speculative claims not supported by credible documentation.

What Counts as Automation Impact

Automation impact captures workforce effects from AI systems even if no immediate layoff is announced. This includes hiring freezes, role redesign, backfill cancellation, and measurable reduction in labor demand due to AI tools or agents.

Included automation impact signals

  • Hiring freeze tied to AI deployment.
  • Back-office or support process replacement by AI systems.
  • Documented role consolidation after AI workflow rollout.
  • Publicly disclosed reduction in external staffing linked to AI automation.

How Jobs Affected Are Counted

We store jobs affected as an event-level estimate in `events.jobs_affected`, and optionally as role-level estimates in `event_roles.jobs_affected` when role granularity is known.
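A minimal sketch of this two-level storage model in Python. The column names `events.jobs_affected` and `event_roles.jobs_affected` come from the text above; the class and field names here are illustrative, not the tracker's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EventRole:
    """One role-level impact row (analogous to event_roles.jobs_affected)."""
    role_name: str
    jobs_affected: Optional[int]  # None when role granularity is unknown

@dataclass
class Event:
    """One event-level record (analogous to events.jobs_affected)."""
    company: str
    jobs_affected: Optional[int]  # event-level estimate
    roles: list = field(default_factory=list)

    def role_total(self) -> Optional[int]:
        """Sum of known role-level counts. This may undershoot the
        event-level total because of partial disclosure."""
        known = [r.jobs_affected for r in self.roles if r.jobs_affected is not None]
        return sum(known) if known else None
```

Keeping the event-level estimate separate from the role-level sum is deliberate: as noted below, partial disclosure means the two are not required to match.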

Counting hierarchy

  1. Use explicit company-reported counts when available.
  2. Use credible reported estimates when official totals are unavailable.
  3. Use conservative inferred ranges only when evidence supports role-level impact patterns.
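The three-step hierarchy above can be sketched as a single selection function. The parameter names are illustrative; "conservative" is interpreted here as taking the lower bound of an inferred range:

```python
from typing import Optional, Tuple

def choose_jobs_affected(
    company_reported: Optional[int],
    credible_estimate: Optional[int],
    inferred_range: Optional[Tuple[int, int]],
) -> Optional[int]:
    """Apply the counting hierarchy: official company count first,
    then a credible reported estimate, then the conservative (lower)
    end of an evidence-backed inferred range."""
    if company_reported is not None:
        return company_reported
    if credible_estimate is not None:
        return credible_estimate
    if inferred_range is not None:
        return min(inferred_range)  # conservative end of the range
    return None  # no defensible count; leave the field empty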

Important interpretation notes

  • Jobs affected includes layoffs, unfilled planned roles, and role displacement tied to AI automation.
  • Role-level counts may not sum perfectly to event-level counts due to partial disclosure.
  • Counts are updated when better sources or company clarifications appear.

Data Sources

Every event in AIImpact.fyi can include one or more linked sources in `event_sources`. We prefer primary evidence and use secondary reporting as supporting context.

Source priority order

  1. Company statements, filings, earnings calls, and investor releases.
  2. Reputable business/industry reporting with named sourcing.
  3. Government filings and labor disclosures where applicable.
  4. Credible interviews or transcripts from executives and workforce leaders.
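One way to make the priority order operational is a rank table keyed by source type, so an event's sources can always be listed primary-evidence-first. The type labels and dictionary shape here are hypothetical, not the tracker's real `event_sources` values:

```python
# Lower rank = higher priority, mirroring the numbered list above.
SOURCE_PRIORITY = {
    "company_statement": 1,    # filings, earnings calls, investor releases
    "business_reporting": 2,   # reputable reporting with named sourcing
    "government_filing": 3,    # labor disclosures where applicable
    "executive_interview": 4,  # credible interviews or transcripts
}

def sort_sources(sources: list) -> list:
    """Order an event's sources so primary evidence comes first;
    unknown or unlisted source types sort last."""
    return sorted(sources, key=lambda s: SOURCE_PRIORITY.get(s["type"], 99))
```

Unknown types deliberately sink to the bottom rather than raising an error, matching the rule above about de-prioritizing (not necessarily discarding) weaker claims.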

Source handling rules

  • Track multiple sources per event when possible.
  • Record publication date and source domain for auditing.
  • De-prioritize anonymous, unsourced, or purely speculative claims.

Confidence Score System

Each event can store a confidence score between 0.00 and 1.00 (`events.confidence_score`). The score reflects evidence strength, consistency, and data completeness.

Score bands

  • 0.85 to 1.00 (High confidence): direct company documentation or multiple strong primary sources with consistent counts.
  • 0.60 to 0.84 (Medium confidence): credible reporting with partial detail or minor source disagreement.
  • 0.30 to 0.59 (Low confidence): incomplete evidence, indirect attribution, or limited count reliability.
  • Below 0.30 (Watchlist): preliminary signal retained for monitoring, not strong enough for firm conclusions.
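The bands above map cleanly onto threshold checks against `events.confidence_score`. A minimal sketch (the band labels are the ones used on this page):

```python
def confidence_band(score: float) -> str:
    """Map events.confidence_score (0.00 to 1.00) to its published band."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("confidence score must be between 0.00 and 1.00")
    if score >= 0.85:
        return "high"        # direct documentation or multiple strong sources
    if score >= 0.60:
        return "medium"      # credible reporting, partial detail
    if score >= 0.30:
        return "low"         # incomplete evidence, indirect attribution
    return "watchlist"       # preliminary signal, monitoring only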

Confidence is not a statement of intent or ethics. It is a data quality indicator for how strongly current evidence supports the event classification and impact estimate.

Classification Workflow

  1. Collect candidate event and source links.
  2. Validate source credibility and publication context.
  3. Classify event type (layoff, automation, hiring freeze, restructuring, other).
  4. Assign company and optional role mappings.
  5. Estimate jobs affected using counting hierarchy.
  6. Assign confidence score and publish with traceable sources.
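The six steps above can be sketched as one pipeline function. Everything below is illustrative: the field names, the credibility check (here just "has a domain and a publication date"), and the toy confidence heuristic all stand in for editorial judgment, not the tracker's actual internals:

```python
from typing import Optional

def process_candidate(candidate: dict) -> Optional[dict]:
    """Sketch of the classification workflow for one candidate event."""
    # Steps 1-2: collect sources and keep only those passing a
    # stand-in credibility check (named domain + publication date).
    sources = [s for s in candidate.get("sources", [])
               if s.get("domain") and s.get("published")]
    if not sources:
        return None  # no credible sourcing: hold, do not publish

    # Steps 3-6: classify, map the company, estimate impact, and score.
    return {
        "event_type": candidate.get("event_type", "other"),
        "company": candidate["company"],
        "jobs_affected": candidate.get("reported_count"),
        "confidence_score": min(1.0, 0.3 + 0.2 * len(sources)),  # toy heuristic
        "sources": sources,
    }
```

The key property worth preserving in any real implementation is the early exit: an event with no credible sources never reaches publication, regardless of how dramatic the claim is.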

Known Limitations

  • Companies differ in disclosure quality and timing.
  • Some AI impact appears as slower hiring rather than explicit cuts.
  • Role-level impact is often underreported compared with company-level totals.
  • Cross-country reporting standards are inconsistent.

FAQ

Do you track only layoffs?

No. We also track hiring freezes, role displacement, and automation-led demand reduction when evidence supports AI attribution.

Can an event have multiple sources?

Yes. One event can include multiple supporting source links, which improves auditability and confidence scoring.

Will old records change?

Yes. We update records when newer filings, statements, or corrections improve event accuracy.