How AI works at Zent — and what humans always control.
AI runs inside Zent's operations. It monitors infrastructure, detects threats, assists procurement, and generates intelligence. This page explains exactly what it does, what boundaries it operates within, and how every action is auditable.
If you want the full technical addendum — data handling, compliance alignment, and guardrail implementation — request it with your Strategic Briefing.
AI Roles
Five named roles. Defined boundaries for each.
Zent's AI operates through five distinct roles — each with a specific function, defined scope, and explicit constraints on what it can and cannot do.
Sentinel
Continuous monitoring and anomaly detection
Continuously models operational baselines across network, endpoint, identity, and cloud layers. Surfaces deviations before they become incidents. Distinguishes normal variation from real threats — reducing alert noise without hiding real signals.
Boundaries
- Detects and flags — does not block or isolate without human confirmation
- Baselines are per-environment — no cross-customer data sharing
Orchestrator
Routine automation and playbook execution
Automates documented, repeatable operational tasks — patch scheduling, diagnostic triage, alert routing, and playbook execution. Handles the operational overhead so engineers focus on exceptions.
Boundaries
- Executes pre-approved playbooks only — cannot author new ones autonomously
- All automated actions are logged with timestamps and rationale
Planner
Forecasting, capacity modeling, and roadmaps
Models capacity trends, migration impact, hardware lifecycle, and procurement demand. Produces prioritized, low-risk operational plans and refresh recommendations based on actual environment data.
Boundaries
- Produces recommendations — procurement and deployment require human approval
- Financial projections are clearly marked as estimates
Assistant
Contextual intelligence and summaries
Generates context-rich remediation drafts, incident summaries, and handoff documentation. Surfaces relevant customer context during support interactions — reducing investigation time and improving handoff quality.
Boundaries
- Assists human operators — never takes action on behalf of a customer without consent
- Context is scoped per customer — no cross-account inference
Guarded
Auditability and human approval gates
Every AI output is logged, explainable, and reversible. High-impact actions — configuration changes, procurement approvals, access policy updates — require explicit human confirmation before execution. No AI action affecting a customer is irreversible without a documented approval.
Boundaries
- Human approval required for all customer-impacting changes
- Full audit trail on every AI decision — available to customers on request
Division of Responsibility
What AI does. What humans do.
Every area where AI is active has a clearly defined handoff point. This is the full list.
| Area | AI does | Human does |
|---|---|---|
| Infrastructure monitoring | Learns baselines, flags anomalies, suppresses noise | Investigates and approves remediation |
| Threat detection | Correlates signals, prioritizes real threats, investigates automatically | Confirms containment, approves access changes |
| Quote generation | Analyzes line items, suggests margins, explains reasoning | Reviews and approves before any quote is sent |
| Asset lifecycle | Tracks age, flags refresh candidates, forecasts refresh cost | Approves procurement and refresh plans |
| Renewal intelligence | Surfaces renewals with benchmark pricing and consolidation opportunities | Decides whether to renew, negotiate, or cancel |
| Compliance evidence | Collects evidence continuously, flags control drift | Reviews and approves audit submissions |
| Patch management | Schedules patches in predicted low-impact windows | Approves patch windows and rollback plans |
| Tax certificate extraction | Extracts fields from uploaded certificates with confidence scores | Validates and approves extracted data |
Operating Principles
Five principles. All non-negotiable.
AI advises. Humans decide.
No AI model at Zent takes customer-impacting action autonomously. Every recommendation surfaces to a human operator or customer before execution. This is architectural, not aspirational.
Every action is logged and explainable.
All AI outputs include a rationale. All automated actions generate an audit trail with timestamp, model input context, output, and the human who approved it. Customers can request this log at any time.
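A single audit-trail entry with the four fields named above (timestamp, model input context, output, approving human) could be shaped like this sketch. The function name and example values are hypothetical, not Zent's actual log schema:

```python
import json
from datetime import datetime, timezone

def audit_record(input_context: str, output: str, approved_by: str) -> str:
    """Build one audit-trail entry as a JSON line.
    Illustrative only -- not Zent's actual implementation."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_context": input_context,   # what the model saw
        "output": output,                 # what the model produced
        "approved_by": approved_by,       # the human who signed off
    })

entry = audit_record("alert burst on vpn-gw-02", "recommend MFA reset", "s.okafor")
print(entry)
```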
Data is scoped. No cross-customer inference.
Customer data used for AI context is isolated per organization. No customer's operational data is used to train models or improve outputs for another customer. Context windows are constructed and destroyed per request.
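"Constructed and destroyed per request" maps naturally onto a context-manager pattern, sketched below under the assumption that context is copied per org and cleared on exit. All names are illustrative:

```python
class CustomerContext:
    """Per-request context window: built for one org, cleared after use.
    Hypothetical sketch of the scoping rule, not Zent's implementation."""
    def __init__(self, org_id: str, records: list[str]):
        self.org_id = org_id
        self.records = records  # only this org's data, copied per request

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.records.clear()  # context destroyed when the request ends
        return False

store = {"acme": ["ticket-101", "asset-7"]}  # illustrative org data
with CustomerContext("acme", list(store["acme"])) as ctx:
    assert all(r in store["acme"] for r in ctx.records)  # scoped to one org
```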
Reversibility is required.
Every automated action Zent deploys includes a documented rollback path. If an AI-recommended change causes an issue, reversal is immediate and does not require re-approval.
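A documented rollback path can be modeled as a change that carries its reversal with it from the start, so reverting needs no new approval. This is a sketch with invented command strings, not Zent's tooling:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A deployable change paired with its documented rollback path.
    Names and commands are illustrative."""
    apply_cmd: str
    rollback_cmd: str

def deploy(change: Change, state: list[str]) -> None:
    state.append(change.apply_cmd)

def revert(change: Change, state: list[str]) -> None:
    # Reversal needs no re-approval: the rollback path was documented
    # and approved alongside the original change.
    state.append(change.rollback_cmd)

history: list[str] = []
patch = Change("enable tls1.3 on lb-01", "restore prior tls config on lb-01")
deploy(patch, history)
revert(patch, history)
print(history)
# → ['enable tls1.3 on lb-01', 'restore prior tls config on lb-01']
```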
Transparent about uncertainty.
AI outputs include confidence signals. Low-confidence recommendations are clearly marked. We don't hide model uncertainty — we surface it so humans can weigh it appropriately.
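Surfacing uncertainty can be as simple as attaching a visible flag to every recommendation below some threshold. The threshold value and labels below are assumptions for illustration:

```python
LOW_CONFIDENCE_THRESHOLD = 0.70  # illustrative cutoff, not Zent's actual value

def label(recommendation: str, confidence: float) -> str:
    """Attach a visible confidence flag so a human can weigh the output."""
    flag = "LOW CONFIDENCE" if confidence < LOW_CONFIDENCE_THRESHOLD else "ok"
    return f"[{flag} {confidence:.0%}] {recommendation}"

print(label("refresh 12 endpoints in Q3", 0.91))
print(label("renew vendor contract at benchmark price", 0.55))
```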
AI Practices Addendum
Need the technical details?
The AI practices addendum is available to customers and qualified prospects after a Strategic Briefing. It covers data handling, retention policies, compliance alignment, and guardrail implementation — under NDA where required.
Request via Strategic Briefing. Available to customers and qualified prospects. Not published publicly.