Designing Data Integrity for an Internal LLM Platform
Preventing false AI insights by identifying upstream analytics ingestion failures
Details are intentionally abstracted due to confidentiality.
Public summary
I identified a high-risk “silent failure” where corrupted analytics data could still upload successfully, producing fluent but unreliable AI insights. The core issue lived at the data boundary: CSV structure and context were lost during ingestion, leaving the model to reason over malformed inputs. I partnered cross-functionally to make the failure legible and to push for validation guardrails and meaningful user feedback. The impact was preventative—protecting decision quality and trust in an internal AI tool. The deeper workflow details are gated for interviews.
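To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of the kind of ingestion guardrail the case argues for. The file format, column names, and checks are illustrative assumptions for this public summary, not the actual internal implementation.

```python
import csv
import io

# Hypothetical set of columns an analytics export is expected to contain.
REQUIRED_COLUMNS = {"date", "metric", "value"}


def validate_csv(raw_bytes: bytes) -> list[str]:
    """Return a list of human-readable problems; an empty list means the file looks sane.

    The point is to fail loudly at the data boundary instead of letting a
    malformed export flow silently into the LLM's context.
    """
    problems: list[str] = []
    try:
        text = raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        return ["File is not valid UTF-8 text; it may be binary or corrupted."]

    reader = csv.DictReader(io.StringIO(text))
    if reader.fieldnames is None:
        return ["File has no header row."]

    missing = REQUIRED_COLUMNS - set(reader.fieldnames)
    if missing:
        # Structure is broken; per-row checks would only add noise.
        return [f"Missing expected columns: {sorted(missing)}"]

    row_count = 0
    for line_no, row in enumerate(reader, start=2):  # header is line 1
        row_count += 1
        if not row["value"]:
            problems.append(f"Line {line_no}: empty 'value' field.")
            continue
        try:
            float(row["value"])
        except ValueError:
            problems.append(f"Line {line_no}: 'value' is not numeric ({row['value']!r}).")

    if row_count == 0:
        problems.append("File contains a header but no data rows.")
    return problems


if __name__ == "__main__":
    # A malformed export: the 'value' column is missing entirely.
    sample = b"date,metric\n2024-01-01,sessions\n"
    for problem in validate_csv(sample):
        print("Upload blocked:", problem)
```

Returning a list of specific problems rather than a pass/fail boolean is what makes the user feedback meaningful in this sketch: the upload flow can tell the user exactly what to fix instead of silently accepting the file.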
TL;DR
Role: Contributing UX Designer — AI & Data Systems
Scope: Analytics ingestion, data validation risk, AI input integrity
Problem: Malformed analytics data could produce confident but incorrect LLM outputs without visible errors
Impact: Prevented silent failure by surfacing ingestion risk and advocating guardrails for trust and decision safety