Last updated: 2026-02-18
By Annelie Van Zyl — 🇿🇦 🇺🇸 🇬🇧 Chief Operating Officer 🦄
A free, comprehensive readiness assessment across governance, platform architecture, data quality, people and delivery, delivering a prioritized roadmap to fix gaps and scale AI with confidence.
Published: 2026-02-18
- Chief Data Officers and AI leaders evaluating whether their data foundation can scale AI initiatives
- Data and IT leaders responsible for governance, architecture, and data quality aiming to identify hot spots before AI deployment
- Business leaders tasked with avoiding costly AI pilots by validating readiness early
Basic understanding of AI/ML concepts. Access to AI tools. No coding skills required.
Free diagnostic across 5 pillars. Prioritized gaps with ROI opportunities. Fast, actionable insights.
The Data AI Readiness Checker is a free, practical diagnostic that produces a quantified readiness score across five pillars and a prioritized roadmap to fix gaps and scale AI. It is designed for Chief Data Officers, Data and IT leaders, and business leaders who need a fast, actionable check; it saves about 3 hours and delivers a $100-value diagnostic at no cost.
The Data AI Readiness Checker is a compact assessment kit combining templates, checklists, scoring frameworks, and execution workflows that evaluate governance, platform architecture, data quality, people and delivery, and AI readiness. It bundles diagnostic questions, remediation playbooks, and a prioritized gap list tied to ROI opportunities, distilled into a fast, actionable set of insights.
Included artifacts: assessment questionnaire, evidence checklist, remediation templates, prioritization matrix, and short execution sprints to convert the score into a roadmap.
AI projects fail not at the model but at the foundation; this tool surfaces exactly where the foundation is weakest so teams can avoid costly pilots and focus delivery on real ROI.
What it is: A scoring model that converts qualitative evidence into a single readiness score across five pillars.
When to use: Use on intake to baseline an organization before any AI pilot or procurement decision.
How to apply: Collect evidence, apply standard weights per pillar, normalize scores, and generate a ranked gap list with suggested fixes (see the sketch after this block).
Why it works: Reduces bias, creates a repeatable baseline for measuring improvements and prioritizing high-ROI fixes.
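To make the mechanics concrete, here is a minimal sketch of that weighted-scoring pass in Python. The 0–5 evidence scale, the pillar weights, and the example ratings are illustrative assumptions, not the checker's published values.

```python
# Minimal sketch of a weighted pillar-scoring pass.
# Weights and the 0-5 evidence scale are illustrative assumptions.

PILLAR_WEIGHTS = {
    "strategy_and_governance": 0.25,
    "platform_and_architecture": 0.20,
    "data_quality_and_lifecycle": 0.25,
    "people_and_delivery": 0.15,
    "ai_readiness": 0.15,
}

def readiness_score(evidence: dict[str, list[int]]) -> tuple[float, list[tuple[str, float]]]:
    """Convert 0-5 evidence ratings into a 0-100 score plus a worst-first gap list."""
    pillar_scores = {
        pillar: 100 * (sum(ratings) / len(ratings)) / 5
        for pillar, ratings in evidence.items()
    }
    total = sum(pillar_scores[p] * w for p, w in PILLAR_WEIGHTS.items())
    gaps = sorted(pillar_scores.items(), key=lambda kv: kv[1])  # lowest score first
    return round(total, 1), gaps

score, gaps = readiness_score({
    "strategy_and_governance": [4, 3, 4],
    "platform_and_architecture": [2, 3, 2],
    "data_quality_and_lifecycle": [1, 2, 2],
    "people_and_delivery": [3, 3, 4],
    "ai_readiness": [2, 2, 3],
})
print(score)    # single weighted readiness score (0-100)
print(gaps[0])  # weakest pillar -> top of the remediation list
```

Because the weights are fixed and the normalization is deterministic, two assessors rating the same evidence arrive at the same baseline, which is what makes re-scoring after each remediation cycle meaningful.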
What it is: A modular set of checklists and runbooks for common failure modes in governance, architecture, and data quality.
When to use: After scoring identifies high-impact gaps or recurring issues across systems.
How to apply: Map each gap to a remediation module, assign owners, run a focused half-day sprint, and track closure criteria (see the sketch after this block).
Why it works: Breaks large problems into prescriptive, measurable tasks that teams can execute without reinventing solutions.
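One way to keep owners and closure criteria attached to each task is to treat the mapping itself as data. The runbook name, owner, and criteria below are hypothetical examples, not the playbook's actual catalog.

```python
# Sketch of mapping a scored gap to a remediation module.
# Runbook names, owners, and closure criteria are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class RemediationModule:
    gap: str                               # pillar or finding being addressed
    runbook: str                           # which checklist/runbook to execute
    owner: str                             # single accountable owner
    closure_criteria: list[str] = field(default_factory=list)

    def is_closed(self, evidence: set[str]) -> bool:
        """Closed only when every criterion has supporting evidence."""
        return all(c in evidence for c in self.closure_criteria)

module = RemediationModule(
    gap="data_quality_and_lifecycle",
    runbook="source-system-validation-checklist",
    owner="analytics-platform-lead",
    closure_criteria=["null rate < 1% on key fields", "freshness SLA documented"],
)
print(module.is_closed({"freshness SLA documented"}))  # False: one criterion still open
```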
What it is: A structured method to copy proven foundation patterns—governance adoption flows, data contract templates, and architecture blueprints—into another team or domain.
When to use: Use when one business unit demonstrates high readiness and you need to replicate that success elsewhere quickly.
How to apply: Capture the winning pattern, codify configuration and decision points, run the 3-step copy process (inspect, adapt, enforce; sketched after this block), and measure parity improvements.
Why it works: Reuses field-proven fixes rather than inventing new ones, accelerating reliable scaling across teams.
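The three steps can be codified so the copy is repeatable rather than ad hoc. In this sketch, the pattern fields and parity check are illustrative assumptions about what "inspect, adapt, enforce" might capture.

```python
# Sketch of the inspect -> adapt -> enforce copy process.
# Pattern fields and decision points are illustrative assumptions.

def inspect(source_pattern: dict) -> dict:
    """Capture the winning pattern's configuration and decision points."""
    return {k: source_pattern[k] for k in ("data_contracts", "approval_flow", "blueprint")}

def adapt(pattern: dict, overrides: dict) -> dict:
    """Adjust only declared decision points for the target domain."""
    return {**pattern, **{k: v for k, v in overrides.items() if k in pattern}}

def enforce(pattern: dict, target_state: dict) -> list[str]:
    """Report parity gaps between the target and the adapted pattern."""
    return [k for k, v in pattern.items() if target_state.get(k) != v]

source = {"data_contracts": "v2", "approval_flow": "two-step", "blueprint": "lakehouse"}
adapted = adapt(inspect(source), {"approval_flow": "one-step"})
print(enforce(adapted, {"approval_flow": "one-step", "data_contracts": "v1"}))
# ['data_contracts', 'blueprint'] -> the parity gaps still to close
```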
What it is: A progressive checklist for adding monitoring, lineage, and alerting across the data lifecycle.
When to use: Use when data quality issues persist or provenance is unclear during model deployment prep.
How to apply: Implement minimal telemetry first, then add lineage and automated alerts tied to SLA breaches and business KPIs (see the sketch after this block).
Why it works: Ensures incremental investment; teams get value early and can expand observability where it matters most.
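A telemetry-first starting point can be as small as a freshness check that fires on an SLA breach, with lineage and richer alerting layered on later. The datasets, SLA thresholds, and alert sink below are assumptions for illustration.

```python
# Minimal telemetry-first sketch: alert when a dataset breaches its
# freshness SLA. Datasets, thresholds, and the alert sink are assumptions.
from datetime import datetime, timedelta, timezone

FRESHNESS_SLA = {"orders": timedelta(hours=1), "customers": timedelta(hours=24)}

def check_freshness(dataset: str, last_loaded: datetime) -> None:
    age = datetime.now(timezone.utc) - last_loaded
    sla = FRESHNESS_SLA[dataset]
    if age > sla:
        # Stand-in for a real alert sink (Slack, PagerDuty, etc.).
        print(f"ALERT: {dataset} is {age - sla} past its {sla} freshness SLA")
    else:
        print(f"OK: {dataset} refreshed {age} ago (SLA {sla})")

check_freshness("orders", datetime.now(timezone.utc) - timedelta(hours=3))
```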
Start with a focused half-day assessment led by a cross-functional team; expect an intermediate level of effort. Use the roadmap to move from the score to a prioritized 60–90 day plan.
Staff the assessment to match the skills required: data governance, platform architecture, data quality, AI strategy, and analytics.
These are the frequent operator mistakes that break momentum; each one comes with a concrete fix.
Positioned for operators who must validate AI readiness quickly and turn assessment results into an actionable roadmap across data, platform, and delivery.
Turn the checker into a living operating system with integrations, clear ownership, and repeatable cadences.
This playbook was created by Annelie Van Zyl and is maintained as part of a curated library of operational playbooks. It sits in the AI category and is intended to be a practical, non-promotional artifact for teams that need to validate readiness quickly.
Reference implementation and additional artifacts are available at the internal link: https://playbooks.rohansingh.io/playbook/data-ai-readiness-checker. Use that repository to pull templates and evidence checklists into your workflow.
Direct answer: it evaluates five foundational pillars—strategy and governance, platform and architecture, data quality and lifecycle, people and delivery, and AI readiness—using a mix of checklists, templates, and scoring to identify prioritized gaps and near-term remediation work.
Direct answer: run a half-day intake with a cross-functional team, gather evidence, apply the scoring template, and convert the ranked gap list into 1–3 focused sprints. Assign owners, apply the Impact Score heuristic, and re-score after each cycle.
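The Impact Score formula itself is not reproduced in this summary. As a hedged illustration only, an ICE-style heuristic (impact × confidence ÷ effort), which is an assumption and not necessarily the playbook's definition, would rank candidate sprints like this:

```python
# Illustrative ICE-style prioritization. An assumed stand-in for the
# playbook's Impact Score formula, which is not reproduced here.

def impact_score(impact: int, confidence: float, effort_days: int) -> float:
    """impact 1-10, confidence 0-1, effort in person-days."""
    return impact * confidence / effort_days

candidates = [
    ("fix source-system null rates", impact_score(8, 0.9, 3)),
    ("document data contracts", impact_score(6, 0.8, 5)),
    ("stand up lineage tooling", impact_score(7, 0.6, 10)),
]
for name, score in sorted(candidates, key=lambda c: c[1], reverse=True):
    print(f"{score:.2f}  {name}")
```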
Direct answer: it is plug-and-play for baseline assessments but expects light customization to reflect your business metrics and system topology. Use the provided templates immediately and adapt weights or acceptance criteria to match your context.
Direct answer: unlike generic templates, this checker ties each gap to operational runbooks, ownership, and ROI-oriented prioritization. It focuses on execution mechanics—source fixes, pattern copying, and delivery cadences—rather than abstract best practices.
Direct answer: ownership is typically shared—Data or AI leadership holds strategic responsibility while a delivery owner (platform or analytics lead) runs the assessment cadence and executes remediation sprints with cross-functional support.
Direct answer: measure success by re-scoring the five pillars, tracking closure of prioritized gaps, and monitoring business KPIs tied to the Impact Score formula. Report improvements in score, reduction in incidents, and time-to-production for AI initiatives.
Closely related categories: AI, No Code and Automation, Operations, Education and Coaching, Growth
Most relevant industries for this topic: Artificial Intelligence, Data Analytics, Cloud Computing, Software, FinTech
Strongly related topics: AI Strategy, AI Tools, LLMs, AI Workflows, No Code AI, Automation, Analytics, APIs
Common tools for execution: OpenAI, n8n, Zapier, Looker Studio, Tableau, Metabase