
# Kiro Agents Are Upping the Game for AI-Assisted Engineering

Keith Hodo
Solutions Architect at AWS. Writing about cloud, agentic AI, and the journey.

In my previous post I walked through the Kiro Skills I’ve built for spec writing, implementation, and session management. Skills are the workflows, the step-by-step instructions that tell Kiro how to approach a type of work. But Skills are only half the picture.

The other half is Agents.

## Skills vs Agents

A Skill is a workflow. It describes a process: gather requirements, write code, run a review, hand off context. A Kiro Agent is a persona. It defines who is doing the work: what tools they have access to, what context they start with, what permissions they’re granted, and how they think about problems.

Skills tell Kiro what to do. Agents tell Kiro how to be.

In practice, the two work together. My `implement-and-review-loop` skill orchestrates the development cycle, but when it reaches the review phase, it hands off to specialized agents, each one focused on a different aspect of code quality. The Skill is the conductor. The Agents are the musicians.

## What Changed

When I first built my review system, the Agent definitions were simple JSON files with inline prompts. They worked, but the prompts were getting long and hard to maintain. I also wasn’t taking advantage of several agent capabilities that Kiro supports.

I recently went through the Kiro agent configuration reference and updated all my Agents to use the current spec. The big changes:

**Prompts moved to separate files.** Instead of cramming a multi-paragraph prompt into a JSON string with escaped newlines, each Agent now references an external markdown file via `file://`. The JSON stays clean and the prompts are easy to read and edit.

**Resources for automatic context.** Agents can now declare `resources`: files that get loaded into context when the Agent starts. My review Agents all load the project’s structure and tech stack docs so they understand the codebase before they read a single line of changed code.

**Hooks for lifecycle automation.** The `hooks` field lets you run commands at specific trigger points. My code review orchestrator runs `git diff --name-only` on spawn so it immediately knows what files changed.

**Welcome messages.** Small thing, but when you’re swapping between agents during a session, seeing “πŸ”’ Security review agent ready. What should I audit?” is a nice confirmation that you’re talking to the right persona.

**Allowed tools for security.** The `allowedTools` field controls which tools an agent can use without prompting. My review agents get read-only access: `fs_read`, `code`, `grep`, and `glob`. They can inspect the codebase but can’t modify it. The orchestrator gets `use_subagent` so it can spawn the specialists.

## The Code Review Orchestrator

This is the agent I’m most excited about. Instead of running five separate reviews manually, I have a `code-reviewer` agent that orchestrates the entire process. It identifies changed files, spawns five specialist subagents in parallel, collects their findings, deduplicates them, assigns final severities, and produces a consolidated report.

Here’s the full agent definition:

```json
{
  "name": "code-reviewer",
  "description": "Code review orchestrator that delegates to specialized security, performance, maintainability, infrastructure, and test quality reviewers",
  "prompt": "file://./prompts/code-reviewer.md",
  "tools": [
    "fs_read",
    "fs_write",
    "code",
    "grep",
    "glob",
    "execute_bash",
    "use_subagent"
  ],
  "allowedTools": [
    "fs_read",
    "code",
    "grep",
    "glob",
    "use_subagent"
  ],
  "resources": [
    "file://.kiro/steering/structure.md",
    "file://.kiro/steering/tech.md"
  ],
  "hooks": {
    "agentSpawn": [
      { "command": "git diff --name-only" }
    ]
  },
  "welcomeMessage": "πŸ“‹ Code review orchestrator ready. I'll coordinate all 5 specialist reviewers."
}
```

And the orchestrator prompt that lives in `prompts/code-reviewer.md`:

```markdown
You are a code review orchestrator.

Your workflow:

1. Identify the changed files using `git diff --name-only`
   and `git ls-files --others --exclude-standard`.
2. Spawn five specialized subagent reviews IN PARALLEL:
   - review-security
   - review-performance
   - review-maintainability
   - review-infrastructure
   - review-test-quality
   Pass each subagent the list of files to review and
   relevant project context.
3. Synthesize all five reviews into a single consolidated
   report.
4. Save the consolidated report to
   reviews/review-{DATE}-{DESCRIPTION}.md.
5. Present a summary to the user.

When synthesizing:
- Deduplicate findings that appear in multiple reviews
- Assign a final severity to each unique finding:
  - πŸ”΄ Must Fix: bugs, security vulnerabilities,
    resource leaks, correctness issues
  - 🟑 Should Fix: performance concerns,
    maintainability issues, missing patterns
  - 🟒 Nit: style, naming, minor suggestions
- Group findings by file, not by reviewer
- Credit which reviewer(s) flagged each issue
- End with a summary table: counts by severity,
  overall verdict (ready to merge or not)

Be direct and specific. Reference file names and line
numbers. Don't rubber-stamp.
```
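To make that last rule concrete, the end of a consolidated report might look something like this (the counts and verdict are invented for illustration):

```markdown
| Severity      | Count |
|---------------|-------|
| πŸ”΄ Must Fix   | 1     |
| 🟑 Should Fix | 3     |
| 🟒 Nit        | 5     |

Verdict: not ready to merge (1 must-fix finding outstanding).
```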

The key design decision here is separation of concerns. The orchestrator doesn’t know anything about security or performance or testing. It knows how to coordinate, deduplicate, and synthesize. Each specialist agent knows its domain deeply. When I want to improve how security reviews work, I edit one prompt file. The orchestrator doesn’t change.
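The fan-out/fan-in shape of this orchestration can be sketched in plain Python. This is a hedged sketch, not Kiro internals: `run_subagent` is a hypothetical stand-in for the `use_subagent` tool, and the finding format is invented, but the reviewer names are the five specialists from this post.

```python
from concurrent.futures import ThreadPoolExecutor

# The five specialist reviewers described in this post.
REVIEWERS = [
    "review-security",
    "review-performance",
    "review-maintainability",
    "review-infrastructure",
    "review-test-quality",
]

def run_subagent(name: str, files: list[str]) -> list[dict]:
    # Hypothetical stand-in for Kiro's use_subagent tool. A real
    # subagent would return its own findings; this one returns a
    # single dummy finding so the data flow is visible.
    return [{"reviewer": name, "file": files[0], "issue": "example finding"}]

def review(files: list[str]) -> list[dict]:
    # Fan out: run all five specialists in parallel.
    with ThreadPoolExecutor(max_workers=len(REVIEWERS)) as pool:
        batches = list(pool.map(lambda name: run_subagent(name, files), REVIEWERS))
    # Fan in: flatten, then deduplicate by (file, issue),
    # crediting every reviewer that flagged the finding.
    merged: dict[tuple, dict] = {}
    for finding in (f for batch in batches for f in batch):
        key = (finding["file"], finding["issue"])
        entry = merged.setdefault(key, {"file": finding["file"],
                                        "issue": finding["issue"],
                                        "reviewers": []})
        entry["reviewers"].append(finding["reviewer"])
    return list(merged.values())
```

The deduplication key is deliberately crude here; in the real system the orchestrator's judgment does that merging, but the shape of the work is the same.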

## The PR Writer

The other agent I use constantly is the `pr-writer`. After implementing and reviewing code, I need a pull request description. This agent reads the PR template, the commit history, and the diff, then fills out every section with specific information from the actual changes.

```json
{
  "name": "pr-writer",
  "description": "Generates pull request descriptions from git history using the project's PR template",
  "prompt": "file://./prompts/pr-writer.md",
  "tools": [
    "fs_read",
    "execute_bash",
    "grep",
    "glob"
  ],
  "allowedTools": [
    "fs_read",
    "grep",
    "glob"
  ],
  "resources": [
    "file://.github/PULL_REQUEST_TEMPLATE.md"
  ],
  "welcomeMessage": "πŸ“ PR writer ready. I'll generate a description from your branch history."
}
```

With the prompt:

```markdown
You write pull request descriptions. Given a branch's
commit history and diff summary, you produce a filled-out
PR description using the project's PR template.

Your workflow:
1. Read the PR template from
   .github/PULL_REQUEST_TEMPLATE.md.
2. Run `git log main..HEAD --oneline` to get the commit
   history on this branch.
3. Run `git diff main --stat` to get a summary of
   changed files.
4. Read commit messages for detail.
5. Fill out every section of the PR template with
   specific, accurate information from the commits
   and diff.
6. For checkboxes, mark them [x] where you can confirm
   from the code/commits, leave [ ] where you can't
   verify.
7. Output the filled PR as markdown directly in the
   chat. Do NOT create a file.

Be thorough but concise. Reference specific files and
changes. Don't be generic.
```

Notice the `resources` field. It loads the PR template at startup so the agent already knows the format before you ask it anything. The `allowedTools` are read-only. It can inspect the repo but can’t modify it.

## The Anatomy of a Review Agent

For the specialist review agents, the pattern is consistent. Each one gets the same tools and resources but a different prompt focused on its domain. Here’s the infrastructure reviewer as an example:

```json
{
  "name": "review-infrastructure",
  "description": "AWS and infrastructure-focused code reviewer",
  "prompt": "file://./prompts/review-infrastructure.md",
  "tools": [
    "fs_read",
    "code",
    "grep",
    "glob"
  ],
  "allowedTools": [
    "fs_read",
    "code",
    "grep",
    "glob"
  ],
  "resources": [
    "file://.kiro/steering/structure.md",
    "file://.kiro/steering/tech.md"
  ],
  "welcomeMessage": "πŸ—οΈ Infrastructure review agent ready. What should I inspect?"
}
```

With the prompt in `prompts/review-infrastructure.md`:

```markdown
You are an AWS infrastructure code reviewer.

Focus exclusively on:
- CDK patterns: Cross-stack coupling via CloudFormation
  exports vs SSM parameters? Correct use of RemovalPolicy?
  Stack dependency ordering?
- IAM: Least-privilege policies? Overly broad wildcards
  in actions or resources? Missing condition keys?
- Encryption: S3 encryption enabled? KMS keys where
  needed? SSL/TLS enforced?
- Networking: Security groups too permissive? Public
  access where it shouldn't be?
- Cost: Over-provisioned resources? Missing lifecycle
  rules? Inefficient storage classes?
- Monitoring: Missing CloudWatch alarms or metrics?
  No logging configured?
- Resilience: Single points of failure? Missing multi-AZ?
  No backup/retention policies?
- Tagging: Resources missing required tags for cost
  allocation or ownership?

For each finding:
- Explain the operational risk
- Rate severity: πŸ”΄ Critical / 🟑 Medium / 🟒 Low
- Suggest a specific fix

Think about what breaks at 3 AM when nobody is watching.
```
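A finding that follows this rubric might look like the sketch below (the stack file, line number, and bucket are invented for illustration; the CDK `encryption` property is real):

```markdown
πŸ”΄ Critical: lib/storage-stack.ts:42
Risk: the uploads bucket is created without encryption, so objects
land unencrypted at rest and fail most compliance baselines.
Fix: set `encryption: s3.BucketEncryption.S3_MANAGED` on the bucket,
or use a customer-managed KMS key if audit trails are required.
```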

All five specialist agents follow this same structure. The only thing that changes is the prompt. Security thinks about attackers. Performance thinks about 100x load. Maintainability thinks about the developer six months from now. Test quality assumes every untested path will break in production.

## What This Looks Like in Practice

I’ve been running this setup on a project where I’m building an AI-powered chat interface for a raffle administration site. The site supports the Cascadian Gamers annual raffle to raise money for Extra Life, a charity that supports Children’s Miracle Network Hospitals.

Using Kiro skills and agents together, I was able to add the AI chat feature in a weekend. In another weekend, I completely rewrote the frontend. The five-agent review caught issues I would have missed in a manual pass: an overly broad IAM policy, a missing error handler on an async call, a test that was asserting the wrong thing.

The outcome is that we’ll have an agentic chat that can interact with our raffle data. And who knows, we might even draw winners using the AI chat this year. We’re in exciting times.

## The File Structure

Here’s how agents are organized in the repo:

```text
.kiro/
  agents/
    code-reviewer.json
    pr-writer.json
    review-security.json
    review-infrastructure.json
    review-maintainability.json
    review-performance.json
    review-test-quality.json
    prompts/
      code-reviewer.md
      pr-writer.md
      review-security.md
      review-infrastructure.md
      review-maintainability.md
      review-performance.md
      review-test-quality.md
```

The JSON files are configuration. The markdown files are personality. Keeping them separate means I can iterate on an agent’s behavior without touching its permissions or tooling, and vice versa.
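Because the agent files are plain JSON, a layout like this is easy to sanity-check with a small script. The sketch below is not part of Kiro: it assumes `file://` prompt paths resolve relative to the agent's own directory, and it only checks fields that appear in this post.

```python
import json
from pathlib import Path

def validate_agent_config(config_path: Path) -> list[str]:
    """Return a list of problems found in one agent JSON file."""
    problems = []
    config = json.loads(config_path.read_text())
    # Every agent in this setup has at least a name and a prompt.
    for field in ("name", "prompt"):
        if field not in config:
            problems.append(f"{config_path.name}: missing '{field}'")
    # A file:// prompt should point at a file that exists
    # (assumed here to resolve relative to the agent file).
    prompt = config.get("prompt", "")
    if prompt.startswith("file://"):
        target = config_path.parent / prompt[len("file://"):]
        if not target.is_file():
            problems.append(f"{config_path.name}: prompt not found: {prompt}")
    # allowedTools should be a subset of tools.
    extra = set(config.get("allowedTools", [])) - set(config.get("tools", []))
    if extra:
        problems.append(f"{config_path.name}: allowedTools not in tools: {sorted(extra)}")
    return problems

if __name__ == "__main__":
    for path in sorted(Path(".kiro/agents").glob("*.json")):
        for problem in validate_agent_config(path):
            print(problem)
```

Running something like this before a session catches the usual drift: a renamed prompt file, or a tool granted in `allowedTools` that was never declared in `tools`.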

## Getting Started with Agents

If you already have Kiro skills, adding agents is the natural next step. Start with one. Pick a task you do repeatedly (code review, PR writing, documentation) and create an agent for it.

The Kiro agent docs walk through creation with `/agent create`. You can also create them manually. They’re just JSON files in `.kiro/agents/`.
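If you want a starting template, a minimal definition might look like this (the `docs-writer` name, description, and prompt path are hypothetical; the fields are the ones used throughout this post):

```json
{
  "name": "docs-writer",
  "description": "Drafts documentation updates for changed code",
  "prompt": "file://./prompts/docs-writer.md",
  "tools": ["fs_read", "grep", "glob"],
  "allowedTools": ["fs_read", "grep", "glob"],
  "welcomeMessage": "πŸ“š Docs writer ready."
}
```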

A few things I’ve learned:

**Start with `allowedTools` restrictive and expand as needed.** Read-only agents are safer and still incredibly useful.

**Use `file://` for prompts from day one.** You’ll thank yourself when the prompt is 40 lines long and you need to edit it.

**The `resources` field is underrated.** Loading project context at startup means the agent doesn’t waste time asking “what framework is this?” or “where are the tests?”

**Agents and skills are better together.** Skills define the workflow. Agents define the expertise. The combination is more than the sum of its parts.

Keith