99x AI Agents · from SDLC to AI-DLC

Humans and AI,
working at 99x speed

The SDLC is becoming the AI-DLC: an AI-Driven Lifecycle where agents in a coordinated mesh collaborate seamlessly, so your engineers can focus on work that only humans can do.

Get Started · Meet the Agents
GitHub & Azure DevOps · Claude-powered · Docker-ready · Open Source

Rethinking the workflow

Adding AI to an unchanged delivery process often creates more noise than value. Better results come from shorter feedback loops, clearer handoffs, and explicit human accountability.

The Dead End

Retrofitting AI into legacy Agile

Using AI inside the same slow routines, siloed roles, and meeting-heavy delivery model rarely changes outcomes. It may speed up individual tasks, but it does not improve how the whole system works.

A better way to work

Redesigning the machine for AI velocity

A more useful approach is to connect AI directly to delivery tools, keep feedback tight, and place people at the decision points that matter. That makes AI support practical without making the process opaque.

From traditional Agile to AI-DLC

AI-DLC reframes the lifecycle around shorter feedback loops and agent support. AI can initiate routine work, but people still set goals, review trade-offs, and take responsibility for outcomes. For the full story, see the AI-Driven Development Lifecycle whitepaper from AWS.

AI-DLC Architecture Transformation

Human judgment as the control point

In AI-DLC, people do not disappear from the process. Their role becomes clearer: set direction, add context, and decide when AI output is good enough to move forward.

AI suggests: Detailed work plan

AI breaks an intent into tasks and sub-tasks that can be reviewed and adjusted.

Human decides: Adds context

Removes work that does not support the goal and adds domain knowledge the AI cannot infer on its own.

AI suggests: Architectural options

AI outlines design trade-offs such as sync vs. async flows, storage choices, or service boundaries.

Human decides: Reviews key decisions

Approves or redirects based on risk, cost, constraints, and team knowledge that is not written down.

AI suggests: Step-by-step directions

AI keeps producing implementation steps, test ideas, and next actions as work progresses.

Human decides: Keeps work on course

Sets the destination and corrects direction when the AI starts solving the wrong problem.

A day in the life

One example of how work can move from backlog to production when agents handle routine analysis and humans keep the important decisions.

Human
AI Agent
Morning
Product Owner

Creates backlog item: "As a user, I want to export reports as PDF."

Minutes later
Requirement Analyst

Cross-references against the existing reporting module. Identifies that CSV is the only current export format. Sends precise questions to the PO about page layout preferences and chart embedding.

Mid-morning
Product Owner

Answers the questions with layout specs, chart preferences, and confirms a PDF rendering library will be needed.

Mid-morning
Requirement Analyst

Updates the item with full acceptance criteria, edge cases (empty reports, large datasets, special characters), and dependency notes. Marks the item as groomed.

Sprint planning
Engineering Team

Pulls the groomed item into the sprint and assigns it to a developer.

Sprint planning
Technical Designer

Produces a spec: extend IReportExporter, add a PdfReportExporter implementation, configure page size, update the export endpoint. Flags the PDF library choice for tech lead review.

Afternoon
Developer

Picks up the spec. Uses AI for boilerplate and repetitive setup work, while focusing on the rendering logic and edge cases that need closer attention. Opens a PR.

End of day
PR Reviewer

Architecture checks pass. Flags one gap: the acceptance criterion for special characters in report titles isn't covered by the implementation.

End of day
Developer

Adds special character handling and pushes the fix. PR is approved.

Next morning
Test Planner

Generates targeted test cases for the highest-risk areas: large dataset rendering, special character handling, and the new exporter interacting with the existing CSV path.

Next morning
QA Engineer

Runs the targeted tests. Finds a layout issue with wide tables and logs it for the developer to fix.

Merge
Documentation Maintainer

Code is merged. Updates the API docs with the new export endpoint and adds a component doc section for the PDF exporter.
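The Technical Designer's spec in the story above (extend IReportExporter with a PdfReportExporter and configurable page size) can be sketched as follows. Only the two type names come from the story; the shapes of Report and ExportOptions, and the title sanitization, are illustrative assumptions, not the real implementation:

```typescript
// Hypothetical sketch of the exporter extension from the spec above.
// Report and ExportOptions are assumed shapes for illustration.
interface Report {
  title: string;
  rows: Record<string, string | number>[];
}

interface ExportOptions {
  pageSize?: "A4" | "Letter"; // configurable page size, per the spec
}

interface IReportExporter {
  readonly format: string;
  export(report: Report, options?: ExportOptions): Uint8Array;
}

class PdfReportExporter implements IReportExporter {
  readonly format = "pdf";

  export(report: Report, options: ExportOptions = {}): Uint8Array {
    // Real rendering would delegate to the PDF library flagged for
    // tech lead review. Here we only sanitize special characters in
    // the title (the gap the PR Reviewer caught) and stub the output.
    const safeTitle = report.title.replace(/[^\p{L}\p{N} .,-]/gu, "_");
    const body = `${safeTitle} (${options.pageSize ?? "A4"}): ${report.rows.length} rows`;
    return new TextEncoder().encode(body);
  }
}
```

Because the new exporter sits behind the same interface as the existing CSV path, the export endpoint can select an implementation by format without changing its own shape.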

The agent team

Five specialized agents support the delivery flow, each with a clear trigger, a defined output, a simple definition of done, and an explicit human checkpoint. The open-source repo already exposes the moving parts for the shipped flows: inspectable prompts in the plugin folders, runtime wiring in the agent architecture docs, and container setup in the Docker deployment guide.

These agents are loosely coupled in implementation, but they operate as a team by passing artifacts forward and stopping at human checkpoints. One agent's output becomes the next agent's input, and people can approve, edit, or redirect the handoff at each stage.

Requirement Analyst → Groomed backlog item
Technical Designer → Technical design spec
PR Reviewer → Categorized findings
Test Planner → Prioritized test plan
Release Manager → Release-ready draft

Shared artifacts, not tight runtime coupling, are what make this a team: backlog context flows into design, design and implementation flow into review, review feeds testing, and approved change signals feed release preparation.
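The handoff pattern described here can be sketched as a typed pipeline in which each agent's artifact is gated by a human checkpoint before it becomes the next agent's input. All names in this sketch are illustrative assumptions, not the shipped API:

```typescript
// Illustrative handoff: an agent produces an artifact, a human
// checkpoint approves it or sends it back, and only approved
// artifacts flow to the next stage.
type Checkpoint<T> = (artifact: T) => "approve" | "redirect";

function handoff<A, B>(
  produce: (input: A) => B,
  review: Checkpoint<B>,
): (input: A) => B | null {
  return (input) => {
    const artifact = produce(input);
    // A redirect stops the pipeline; the human edits or re-triggers.
    return review(artifact) === "approve" ? artifact : null;
  };
}

// Example stage: backlog item → groomed item, gated by the product owner.
const groom = handoff(
  (item: string) => `${item} [groomed]`,
  () => "approve",
);
```

The point of the sketch is the seam: coupling lives in the artifact types passed between stages, not in any shared runtime between agents.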

Backlog · Grooming

Requirement Analyst

Trigger: A new backlog item (story, task, bug) is created.

Clarifies backlog items into structured requirements with acceptance criteria, edge cases, and targeted questions.

  • Cross-references the item against existing product, architecture, and related stories
  • Expands terse descriptions into structured requirements with acceptance criteria and edge cases
  • Flags ambiguities, contradictions, and under-specified criteria
  • Sends targeted questions to the item creator instead of generic requests for clarification
  • Iteratively refines until the item meets a defined "groomed" threshold
Output A clarified backlog item with structured acceptance criteria, identified risks, and resolved open questions
Done when The item has clear acceptance criteria, key edge cases, explicit open questions or assumptions, and is ready for planning without another round of generic clarification
Human touchpoint Product owner reviews the elaborated item and decides when it's ready for sprint planning
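The "groomed" threshold above can be made concrete as a small check over the item's structure. The field names here are assumptions for illustration, not the product's actual schema:

```typescript
// Assumed shape for the Requirement Analyst's output artifact.
interface GroomedItem {
  title: string;
  acceptanceCriteria: string[];
  edgeCases: string[];
  openQuestions: { question: string; resolved: boolean }[];
  risks: string[];
}

// The definition of done, as a predicate: criteria exist, key edge
// cases are listed, and no targeted question is still open.
function isGroomed(item: GroomedItem): boolean {
  return (
    item.acceptanceCriteria.length > 0 &&
    item.edgeCases.length > 0 &&
    item.openQuestions.every((q) => q.resolved)
  );
}
```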
Design · Specification

Technical Designer

Trigger: A groomed backlog item is moved into a sprint or assigned to a developer.

Turns a ready item into an implementation spec with affected components, interfaces, and testing expectations.

  • Reviews requirements against the current system architecture, code structure, and conventions
  • Produces a technical design doc: affected components, data model changes, API contracts, integration points
  • Enforces established patterns (layering, dependency direction, naming) and flags deviations
  • Provides implementation guidance: which files to modify, what interfaces to implement, what tests to write
Output Technical design spec aligned with the architecture, ready for a developer to pick up
Done when A developer can start implementation with affected components, interfaces, constraints, and expected tests called out, with major trade-offs surfaced for review
Human touchpoint Developer (and optionally a senior engineer) reviews the design; they own the final approach
Code Review

PR Reviewer

Trigger: A pull request is created or updated.

Checks the change against requirements, architecture, code quality, and risk before human sign-off.

  • Verifies implementation satisfies acceptance criteria from the original requirements
  • Checks adherence to architecture: layering rules, dependency direction, naming conventions
  • Evaluates readability, complexity, error handling, and coding standards
  • Flags high-risk changes (security-sensitive code, data migrations, public API changes)
  • Posts structured review comments with clear pass/fail per check and actionable recommendations
Output PR review with categorized findings (architecture, functional, quality, risk) as comments or status checks
Done when The PR has been checked against requirements and standards, findings are categorized by severity, and any blocking risks are explicit enough for the author to act on
Human touchpoint Developer addresses feedback; human reviewer does final sign-off, especially for high-risk changes
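The categorized, severity-graded findings above can be sketched as a simple structure with a merge gate. The category and severity names are taken from this section; the exact shape is an assumption:

```typescript
// Categories from the PR Reviewer's output; severities are an
// assumed three-level scale for illustration.
type Category = "architecture" | "functional" | "quality" | "risk";
type Severity = "info" | "warning" | "blocking";

interface Finding {
  category: Category;
  severity: Severity;
  message: string;
}

// Human sign-off still happens either way; this only makes the
// "blocking risks are explicit" rule mechanical.
function hasBlockers(findings: Finding[]): boolean {
  return findings.some((f) => f.severity === "blocking");
}
```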
Quality Assurance

Test Planner

Trigger: A pull request is created or updated and has passed initial automated checks or received PR Reviewer feedback.

Prioritizes the highest-risk test scenarios and highlights gaps in automated coverage before merge.

  • Identifies the highest-risk areas that need manual testing, focusing effort where it matters most
  • Maps existing functionality that could regress and recommends targeted regression tests
  • Produces concrete, executable test scenarios: happy paths, edge cases, error conditions, integration boundaries
  • Compares change scope against existing automated test coverage and highlights gaps
Output Prioritized manual test cases for highest-risk areas, plus recommendations for new automated tests
Done when The highest-risk scenarios are prioritized, likely regressions are mapped, and test gaps are visible early enough to influence the PR before merge
Human touchpoint QA engineers execute the test cases and apply judgment about additional exploratory testing
Release

Release Manager

Trigger: Approved changes are ready to be prepared for release.

Prepares release-ready notes and readiness signals, while calling out breaking changes and rollout concerns.

  • Compiles release notes from merged PRs and groups them into useful categories
  • Checks release readiness across completed work, outstanding blockers, and known risks
  • Flags breaking changes and highlights rollout concerns that need explicit review
Output Draft release notes, readiness checks, and a categorized changelog for human approval
Done when Release notes are draftable from the change set, breaking changes and rollout risks are called out, and the release owner has enough context to approve timing and communication
Human touchpoint A release owner reviews the draft, confirms timing, and approves the final release communication

Extending the team

The core flow can be extended with more specialized agents where teams see a real need. The same pattern applies each time: define the prompt, connect the right tools and trigger, and keep human review at the decision points that carry product, technical, or release risk.

Sprint Planning Agent

When a groomed backlog item moves into a sprint, reviews load, dependencies, and sequencing constraints. Produces a sprint risk report and capacity recommendations; the team still makes the commitment.

Sprint Planning

Backlog Discovery Agent

Reviews escalations, Jira and repository issues, and technical debt signals to surface candidate backlog items and gaps for the team to triage.

Backlog

Bug Fix Agent

Analyses bug reports against recent changes, commit history, and code paths. Produces a structured root cause investigation with a suggested fix approach.

Maintenance

Code Health Agent

Identifies code smells, dead code, and duplicated logic. Tracks technical debt over time and proposes focused refactoring items.

Maintenance

Security Review Agent

Scans for injection risks, insecure deserialization, hardcoded secrets, and missing input validation. Checks dependency trees for known CVEs.

Build & Review

Performance Sentinel

Catches N+1 queries, unbounded loops, missing pagination, and blocking I/O on hot paths before they reach production.

Build & Review

Documentation Maintainer

Looks at PRs that are approved or ready for merge, suggests documentation updates while the change is still in review, and drafts affected README, API, or architecture docs before merge to avoid drift.

Documentation

Incident Response Agent

Correlates incidents with recent deployments, retrieves logs and metrics, suggests mitigations, and drafts post-mortem timelines.

Operations

Onboarding Agent

Generates an onboarding guide based on role, curates key docs and ADRs, and helps new team members navigate the codebase.

Team Growth

API Contract Agent

Detects breaking API changes before merge, validates against OpenAPI specs, checks backward compatibility, and enforces API design standards.

Build & Review

Compliance & Governance

Monitors changes for PII handling and common compliance implications, and helps maintain an evidence trail for audits.

All Phases

Migration Agent

Analyses migration scope, generates validated migration scripts, identifies all call sites, and produces a sequenced migration plan with rollback steps.

Maintenance

What success looks like

Less waiting at grooming, fewer rigid sprint handoffs, and agents that prompt the next step: together they optimise the whole path from intent to shipped change.

Faster requirement grooming

Shorter queues before work is ready to pull: agents help clarify and structure intent so teams spend less time blocked on grooming.

Not boxed into sprint-only timelines

Flow can follow value and readiness instead of calendar boundaries alone: continuous delivery without forcing every decision into a sprint fence.

Agent-prompted handovers

Role and phase transitions are facilitated and nudged by agents so context travels with the work and nothing important drops between people.

Agent-supported pull requests

PRs get drafting help, structured review input, and clearer change summaries so authors and reviewers move faster with less thrash.

Focused testing & documentation

Testing and documentation agents target what changed: focused checks and drafts instead of generic boilerplate, so quality work stays proportional to the risk.

Simple per-agent pricing

An AI action is one agent run, such as a PR review or requirement analysis. You subscribe per agent (for example, the PR Reviewer). The same unit applies across the core agent set.

Per agent
500 NOK per month per agent
  • 20 AI actions included
  • Each action covers one lifecycle-triggered run, such as a review, analysis, or planning step
  • Additional actions billed at 20 NOK each
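The billing unit above reduces to simple arithmetic: 500 NOK covers the first 20 actions per agent, and each action beyond that adds 20 NOK. A minimal sketch (function name is ours, not a product API):

```typescript
// Per-agent monthly cost under the published model: base fee with
// 20 included actions, then a flat per-action overage.
function monthlyCostNok(
  actions: number,
  baseNok = 500,
  included = 20,
  extraNok = 20,
): number {
  const extra = Math.max(0, actions - included);
  return baseNok + extra * extraNok;
}

// Example: 20 actions → 500 NOK; 30 actions → 500 + 10 × 20 = 700 NOK.
```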
Enterprise

In-house enterprise setup

Run Xianix on your own infrastructure, with 99x helping to provision, integrate, and maintain the environment.

  • Tailored deployment and integration with your stack, policies, and lifecycle tooling
  • Ongoing maintenance, monitoring, and updates for your AI-supported delivery setup

What Xianix is not

Not full automation. Agents support the work, but people remain accountable for decisions and outcomes.
Not a replacement for skilled engineers. Good results still depend on people who understand the product, the codebase, and the trade-offs.
Not an opaque pipeline. Teams should be able to see what agents did, why they did it, and where human review happened.

Interested in an in-house setup?

Talk to 99x about enterprise setup, integration, and running Xianix Agent Team and Xians AI Agent Control Plane on your own infrastructure.

Talk to 99x