OpenAI publishes California privacy-rights stats: millions of access/deletion requests, sub-72-hour responses

OpenAI quietly posted transparency stats, of the kind California's CPRA requires, covering privacy requests received globally in 2024 and spanning access, deletion, and correction. The headline: millions of requests processed, with average response times under 72 hours in every category.

  • Access requests: ~1.63M received; ~1.63M completed in whole or part; ~534 denied; average response under 72 hours.
  • Deletion requests: ~752K received; ~739K completed; ~12.7K denied; average response under 72 hours.
  • Correction requests: ~76.6K received; ~75.6K completed; ~1,066 denied; average response under 72 hours.
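
Run the arithmetic on those figures and the denial rates stay low across the board; a quick sketch in Python, with counts copied from the bullets above, rounded as published:

    # Completion and denial rates from the published (approximate) counts above.
    requests = {
        "access":     {"received": 1_630_000, "completed": 1_630_000, "denied": 534},
        "deletion":   {"received": 752_000,   "completed": 739_000,   "denied": 12_700},
        "correction": {"received": 76_600,    "completed": 75_600,    "denied": 1_066},
    }

    for category, c in requests.items():
        completed = c["completed"] / c["received"] * 100
        denied = c["denied"] / c["received"] * 100
        print(f"{category:>10}: {completed:5.1f}% completed, {denied:.2f}% denied")

Deletion is the outlier at roughly a 1.7% denial rate, versus about 0.03% for access, a gap the report doesn't explain.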

What these numbers say—and don’t

The throughput suggests two things: (1) OpenAI's Privacy Portal and in-product flows have scaled beyond manual processing for most DSARs (data subject access requests); and (2) request volume tracks ChatGPT's monthly active user base and enterprise pilots. But the report doesn't break down response times by geography, the share of API vs consumer requests, or the rate of re-opened tickets, which are precisely the metrics regulators increasingly want.
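
None of those breakdowns requires exotic instrumentation; they fall out of a per-request record with a handful of fields. A minimal sketch, where every field name is hypothetical rather than OpenAI's actual schema:

    from dataclasses import dataclass
    from datetime import datetime
    from enum import Enum

    class Channel(Enum):
        CONSUMER = "consumer"   # in-product ChatGPT data controls
        API = "api"             # platform/API account requests
        PORTAL = "portal"       # Privacy Portal web form

    @dataclass
    class DsarRecord:
        """One row per request; all field names here are illustrative."""
        request_type: str       # "access" | "deletion" | "correction"
        region: str             # ISO 3166 country code, enabling geographic breakdowns
        channel: Channel        # the API-vs-consumer split the report lacks
        received_at: datetime
        closed_at: datetime
        denied: bool
        reopened: bool          # requester disputed the outcome and the ticket reopened

        @property
        def response_hours(self) -> float:
            return (self.closed_at - self.received_at).total_seconds() / 3600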

Why this matters for AI deployment in 2025

Strong DSAR throughput is becoming a procurement checkbox. Enterprises need to demonstrate lawful basis, retention limits, and subject rights across vendors. A published DSAR histogram, paired with policy assertions that OpenAI doesn't "sell" personal information or use sensitive data to infer characteristics, reduces friction in security reviews, even if it doesn't answer every auditor's question.
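
The histogram framing matters because an average can hide a long tail. A sketch of the bucketing, under the assumption that per-request response times (in hours) are available:

    from collections import Counter

    def response_time_histogram(response_hours: list[float]) -> Counter:
        """Bucket per-request response times so a slow tail can't hide behind the mean."""
        buckets = Counter()
        for h in response_hours:
            if h < 24:
                buckets["<24h"] += 1
            elif h < 72:
                buckets["24-72h"] += 1
            elif h < 24 * 30:   # 30 days
                buckets["72h-30d"] += 1
            else:
                buckets[">30d"] += 1
        return buckets

A vendor can truthfully report a sub-72-hour average while a small share of requests drifts past CPRA's 45-day statutory deadline; a histogram surfaces that tail where a mean does not.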

The regulatory horizon

California’s CPRA is converging with emerging AI-specific rules elsewhere: targeted risk assessments, model-specific logging, and stricter data minimization. Expect future reports to include more granularity (e.g., segmentation by authenticated vs unauthenticated requests, fraud rates in identity verification) and cross-references to model-specific retention policies, particularly for voice and image inputs as Realtime APIs proliferate.

What buyers should ask now

  1. Retention defaults: How long are chat logs and uploaded files kept for each plan? Can you opt out of training?
  2. Vendor-side lineage: If agents plug into third-party tools, how are those logs handled for DSARs?
  3. Data residency: Are enterprise workspaces geo-fenced? What’s mirrored across regions?
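
Those three questions map directly onto a security-review questionnaire. A skeletal template, with every key and placeholder illustrative rather than drawn from any vendor's actual terms:

    # Skeletal vendor questionnaire; all keys and values are placeholders.
    questionnaire = {
        "retention": {
            "chat_log_retention_days_by_plan": None,      # per plan tier
            "uploaded_file_retention_days": None,
            "training_opt_out": None,                     # available? default? how toggled?
        },
        "lineage": {
            "third_party_tool_logs_in_dsar_scope": None,  # do agent/tool logs come back on access requests?
            "subprocessor_list": None,
        },
        "residency": {
            "workspace_geofencing": None,                 # which regions can a workspace be pinned to?
            "cross_region_mirroring": None,               # what is replicated, and where?
        },
    }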

For practical tuning and privacy hygiene on your own rigs, start with our safe tuning playbook (XMP vs EXPO) and how much VRAM you really need for AI workloads.

