Available Now · Zent Professional Services · Assessment & Roadmap

Are you actually ready for AI? We'll tell you honestly.

Before you invest in AI tools, pilots, or infrastructure, you need to know whether the foundation exists to make them work. This assessment answers that question and tells you exactly what to fix first.

Fixed-scope engagement across three tiers — from tool safety checks for small teams to production-readiness diagnostics for mid-market pilots. Output is a scored roadmap, not implementation.

Before you invest

Is your data clean, consolidated, and accessible enough for AI to use reliably across your systems?

Do you know what AI tools your staff are already using and what data they are sharing with them?

Have your AI use cases been scored for viability before any budget was committed?

Is your infrastructure production-grade, or sized only for small pilots and experimentation?

Three Assessment Dimensions

What we actually look at.

Every engagement evaluates these three dimensions. Gaps in any one don't prevent AI adoption — they define what foundational work must happen first.

Data Maturity

Whether data is organised, clean, accessible, and governed in a way that allows AI to operate reliably.

Specifically, we look at:

  • Where critical data lives across systems — CRM, ERP, storage, email

  • Whether data is consolidated or fragmented across incompatible systems

  • Data quality consistency — duplicate records, missing fields, formatting gaps

  • Data permissions — documented, enforced, and owned

  • Whether there is a single source of truth or competing data versions
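The data quality checks above (duplicates, missing fields) can be sketched in a few lines. This is a minimal illustration only; the field names and sample records are hypothetical, not part of the assessment tooling:

```python
from collections import Counter

# Hypothetical CRM export: field names and records are illustrative only.
records = [
    {"email": "a@example.com", "name": "Acme Ltd", "phone": "555-0101"},
    {"email": "a@example.com", "name": "Acme Limited", "phone": ""},  # duplicate key
    {"email": "b@example.com", "name": "Beta Co", "phone": None},     # missing phone
]

REQUIRED_FIELDS = ("email", "name", "phone")

def quality_report(rows):
    """Count duplicate keys and missing required fields."""
    dup_keys = [k for k, n in Counter(r["email"] for r in rows).items() if n > 1]
    missing = {f: sum(1 for r in rows if not r.get(f)) for f in REQUIRED_FIELDS}
    return {"duplicate_keys": dup_keys, "missing_by_field": missing}

print(quality_report(records))
```

In practice this kind of check runs against exports from each system in the inventory; fragmentation shows up when the same check gives different answers per system.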

Infrastructure Alignment

Whether IT infrastructure is production-grade and ready to support AI workloads, or optimised only for experimentation.

Specifically, we look at:

  • Cloud-hosted vs. on-premises workloads — cloud is typically a prerequisite for scaling AI

  • Automated data pipelines vs. manual data movement

  • Security controls applied to systems that would handle AI models

  • Whether architecture can scale to production load or is designed only for small pilots

  • Availability of model deployment, versioning, and monitoring capability

Organisational Capability

Whether the internal team has the skills, governance structure, and leadership alignment to execute AI projects successfully.

Specifically, we look at:

  • AI governance structure — defined committee or ad hoc pilot approval

  • Documented policies on approved tools and permitted data flows

  • Defined roles and accountability for AI initiative ownership

  • Internal skills — data engineering, model operations, AI literacy

  • Leadership alignment on success definition and committed budget

Choose Your Starting Point

Three tiers. One question answered for each.

Scope is confirmed during discovery — some organisations receive an assessment that spans tiers, depending on their maturity and complexity.

Micro-SMB

1–50 employees · 1–2 weeks

"Can we use AI tools safely without leaking company data?"

  • Data accessibility score — is data organised enough for AI tools to use

  • Security readiness checklist — MFA, cloud workspace, single identity directory

  • Shadow AI risk register — what tools staff are already using, what data they're sharing

  • AI acceptable use policy — approved tools, prohibited data types, employee guidance

  • Approved tool deployment roadmap — sequenced: fix this first, then enable these tools
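A shadow AI risk register like the one above can start as a simple structured list. This sketch shows one way to rank entries; the tool names, user counts, and risk rule are all hypothetical illustrations, not Zent's actual methodology:

```python
from dataclasses import dataclass

# Hypothetical: categories of data treated as sensitive for risk ranking.
SENSITIVE = {"customer PII", "financials", "client documents"}

@dataclass
class ShadowAIEntry:
    tool: str          # hypothetical tool name
    users: int         # how many staff report using it
    data_shared: str   # what data they say they share with it

    @property
    def risk(self) -> str:
        if self.data_shared in SENSITIVE:
            return "high"
        return "medium" if self.users > 5 else "low"

register = [
    ShadowAIEntry("GenericChatbot", users=12, data_shared="customer PII"),
    ShadowAIEntry("TranslateTool", users=3, data_shared="marketing copy"),
]

# Highest-risk entries first, so remediation can be sequenced.
for e in sorted(register, key=lambda x: {"high": 0, "medium": 1, "low": 2}[x.risk]):
    print(f"{e.tool}: {e.risk} ({e.users} users, sharing {e.data_shared})")
```

The point of the register is the ordering: the acceptable use policy and deployment roadmap address high-risk entries before convenience tools.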

SMB

50–200 employees · 2–4 weeks

"Is this specific AI use case viable, and what needs to be fixed first?"

  • Use-case viability score — is this problem a good candidate for AI investment

  • Data quality and integration assessment — accessibility, quality, governance scored

  • Permission and access audit — find over-shared data before AI tools surface it

  • Build vs. buy analysis — off-the-shelf tools vs. custom development

  • 90-day pilot roadmap with success metrics framework and hero KPI defined

Mid-Market

200–1,000 employees · 4–6 weeks

"We've started AI pilots — why are they stalling, and what's blocking production?"

  • Infrastructure and architecture maturity assessment (scored 0–5)

  • Data governance maturity scorecard (0–5: ad hoc → managed → optimised)

  • Pilot-to-production gap analysis — root causes of why existing pilots stalled

  • AI governance audit — policies, roles, model lifecycle accountability

  • 12–18 month sequenced roadmap — infrastructure first, governance second, scale third
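The 0–5 scorecards above roll up to a weighted readiness score. This sketch shows the idea; the dimension weights, intermediate level labels, and sample scores are hypothetical, not Zent's actual rubric:

```python
# Hypothetical weights; the real rubric is defined during discovery.
WEIGHTS = {"data_maturity": 0.4, "infrastructure": 0.35, "organisation": 0.25}

# 0..5 maturity labels; only the endpoints come from the scorecard description.
LEVELS = ["ad hoc", "initial", "repeatable", "defined", "managed", "optimised"]

def overall_score(scores: dict) -> tuple:
    """Weighted 0-5 readiness score and the matching maturity label."""
    total = sum(WEIGHTS[d] * s for d, s in scores.items())
    return round(total, 2), LEVELS[min(5, int(total))]

score, label = overall_score({"data_maturity": 2, "infrastructure": 3, "organisation": 1})
print(score, label)
```

A weighted roll-up like this is also why the roadmap sequences infrastructure and governance before scale: lifting the lowest-scoring dimension moves the overall score more than polishing the highest.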

How It Works

Five steps from scoping to roadmap.

Every assessment follows this process. Timeline and depth scale with your tier.

01

Discovery & Scoping

We conduct a scoping call with key stakeholders to understand your current situation, confirm your organisational tier, and define the exact scope. Upon agreement, we issue a statement of work specifying deliverables, timeline, and required access.

  • Scoping call and tier confirmation

  • Statement of work with defined deliverables

  • Preliminary questionnaire for your team to complete

02

Data & Infrastructure Assessment

Technical review of your environment against the three assessment dimensions. We inventory your systems, evaluate how they exchange data, assess infrastructure maturity, and establish a baseline understanding of data quality and governance.

  • System and data inventory

  • Infrastructure maturity baseline

  • Initial gap identification across three dimensions

03

Interviews & Shadow AI Audit

Structured interviews with 3–8 key stakeholders covering decision-making, governance, skills, and leadership alignment. We also conduct a shadow AI audit — asking staff what tools they're already using and what data they're sharing with them.

  • Stakeholder interviews completed

  • Shadow AI risk register — tools, data exposure, risk level

  • Governance and skills assessment

04

Analysis & Scoring

All findings are synthesised and scored across the three dimensions. Specific gaps are identified and categorised by severity and effort to remediate. For Mid-Market, we perform pilot-to-production gap analysis diagnosing why existing pilots have stalled.

  • Scored assessments across all three dimensions

  • Gap list with severity and remediation effort

  • Pilot gap analysis where applicable

05

Roadmap & Readout

Findings are compiled into an assessment report. A phased roadmap sequences recommended work by dependency and business value. A 2-hour executive readout presents findings to your leadership team. All documentation is handed off at the readout — you own the roadmap.

  • Assessment report with scored findings

  • Phased roadmap sequenced by dependency and value

  • 2-hour executive readout with your leadership team

Who This Is For

If this sounds like you, we can help.

The challenge varies by size and maturity. The approach is always specific to your environment.

Micro-SMB

Home Services Company

Wants to use AI tools for customer communication — worried about data leakage and compliance

Assess data consolidation readiness, evaluate security foundation (MFA, cloud workspace, identity directory), audit what AI tools staff are already using, and produce a sequenced deployment roadmap: fix security first, then enable approved tools with data governance guardrails.

Micro-SMB

Professional Services Firm

Wants to use Copilot for document drafting — concerned about client confidentiality

Review M365 deployment and access controls, identify data fragmentation (some files in cloud, some in email, some on local drives), audit current AI tool usage and risks, and produce a configuration roadmap showing what data governance controls must be in place before Copilot is enabled.

SMB

Financial Services Company

Wants AI for loan application processing — unclear whether data is clean enough

Evaluate whether historical data is accessible and consistent enough for AI, assess build vs. buy options, produce a 90-day pilot roadmap, define success metrics, and identify bias risk in historical data before any model is trained.

SMB

Retail or E-Commerce Business

Wants AI for sales forecasting and inventory optimisation — unsure if data supports it

Evaluate whether historical sales data is granular enough (the assessment often finds tracking at the weekly level, where AI needs daily or hourly granularity), assess data quality, score use-case viability, and recommend whether to proceed or reframe the use case for higher value.

Mid-Market

Manufacturer with Stalled AI Pilots

Two pilots (predictive maintenance and quality control) work in development but haven't reached production

Assess infrastructure maturity to identify production-readiness gaps, evaluate data pipeline reliability, identify skills gaps in model operations, diagnose specific root causes for each stalled pilot, and produce a sequenced roadmap to bring both pilots to production.

Mid-Market

Financial Services Firm Building AI Governance

Four to five AI pilots running across business units with no governance, no approval process, compliance exposure

Evaluate governance structure (in this scenario, finding no steering committee exists), assess consistency of deployment practices across pilots, identify data quality gaps by business unit, and produce a governance framework: steering committee, approval process, model lifecycle policies, data quality standards, and compliance posture.

Responsibility Model

We assess. You decide.

Zent conducts the assessment and delivers findings. You own the decisions — what to implement, when, and with whom.

  • Zent: we own and execute

  • Shared: both teams involved

  • Customer: you own or provide

01

Pre-Assessment

Scoping, access provisioning, and questionnaire completion.

  • Scoping call and tier confirmation (Zent): we lead; you confirm scope and availability

  • Statement of work definition (Zent): we draft; you sign off before work begins

  • System access provisioning (Customer): read-only; you determine access scope

  • Questionnaire completion (Customer): we provide the template; you complete it

  • Stakeholder scheduling (Shared): we propose the schedule; you confirm availability
02

Assessment Execution

Technical review, stakeholder interviews, shadow AI audit.

  • System audit and technical review (Zent): read-only access to your environment

  • Stakeholder interviews (Zent): 3–8 interviews depending on tier

  • Shadow AI risk audit (Zent): staff interviews on current AI tool usage

  • System access and stakeholder availability (Customer): you ensure access and people are available

  • Context and clarification (Customer): you answer questions about business context
03

Findings & Handoff

Scoring, roadmap, executive readout, and documentation transfer.

  • Assessment scoring and gap analysis (Zent): scored across all three dimensions

  • Phased roadmap production (Zent): sequenced by dependency and business value

  • Executive readout, 2 hours (Zent): findings presented to your leadership team

  • Readout attendance (Customer): leadership attends and asks clarifying questions

  • Implementation decisions (Customer): you own what to implement and when


Start with clarity. Then decide.

Before you commit budget to AI tools, pilots, or infrastructure, find out whether the foundation exists to make them work. The assessment delivers the answer.

Fixed scope. Fixed price after discovery. Output is a roadmap, not implementation.