This post is part 4 in a series of posts about taking your AI coding agents to the next level.
We’ve now covered instructions files, skills files, and MCP servers in our quest to make our AI coding agents more effective. Useful as each of those pieces is, they still leave developers manually orchestrating what the agent does: one prompt => one task => repeat. Let’s look at how to start putting it all together.
In this post we’ll walk through ways to combine those three layers into end-to-end, multi-step workflows that run with minimal hand-holding. Agent-driven workflows flip the one-prompt-one-task model around: instead of dictating each step, you describe an outcome and the agent works through all the steps needed to reach it.
Overview
So, what is an agent-driven workflow? It’s a workflow where the agent plans, executes, and verifies a multi-step process or task autonomously. It’s different from a basic “prompt-response” workflow where you, as the prompter, tell the agent what to do at each step of the process. Instead, with agent-driven workflows, the agent determines and follows the steps through the process and circles back to you when it has completed the entire flow. And when things don’t go as expected, the agent adapts and alters the workflow mid-step.
The keys to an agent-driven workflow:
- It’s goal-oriented. You describe the final output, not the steps.
- It’s a multi-step process, with the agent determining and following each step in turn.
- It’s self-verifying. The agent will verify that the work it has completed accomplishes the desired goal.
- It uses the available tools, like skills, instructions, and MCP servers.
| Layer | Role |
|---|---|
| Instructions | Defines the rules the agent must follow throughout the workflow |
| Skills | Provides the playbook for each step in the workflow |
| MCP Servers | Gives the agent the tools to execute and verify each step throughout the workflow |
All three layers are important. An agent that doesn’t have the proper instructions, skills, and tools doesn’t have all the pieces it needs to be able to drive the workflow itself.
Example 1 - Feature Implementation End-to-End
Let’s say we receive a ticket in our Azure DevOps project that we want to implement via an agent-driven workflow. We can simply provide the prompt:
Implement PBI 4821
The agent workflow might define and work through the steps as follows:
- Fetches the PBI details from Azure DevOps (MCP).
- Reads the acceptance criteria, details, comments, and attachments to get an understanding of the requirements.
- Breaks the feature into a list of tasks it believes are necessary to implement the feature.
- Creates a new feature branch in the repo (MCP).
- Scaffolds any new modules needed (skill).
- Implements handlers, endpoints, and validators (skill).
- Writes unit tests.
- Runs `dotnet build` and `dotnet test` to self-verify (MCP).
- Commits and opens a PR with a description derived from the PBI and the work done (MCP).
Instructions File
So what might our instructions file look like to guide our AI coding agent through this flow?
# Project Instructions for Claude
## Solution Structure
- `src/` — application code organized by module (e.g., `Orders`, `Catalog`, `Payments`)
- `tests/` — mirrors `src/` structure; unit tests live next to the module they test
- `docs/` — architecture decision records (ADRs) and diagrams
## Architecture
- This is a modular monolith. Module boundaries are enforced — do not reference one
module's internals from another. Use shared kernel types in `src/SharedKernel/`.
- Domain logic belongs in the Domain layer. No EF Core, no HTTP clients, no logging
in domain classes.
- Application layer orchestrates use cases via command/query handlers (MediatR).
## Coding Conventions
- Use `Result<T>` (Ardalis.Result) for operation outcomes. Do not throw exceptions
for expected domain errors.
- Guard clauses go at the top of methods using `Ardalis.GuardClauses`.
- Prefer records for value objects. Implement `IAggregateRoot` on aggregate roots.
- Async all the way down. No `.Result` or `.Wait()` on tasks.
- Do not use `var` when the type is not obvious from the right-hand side.
## Preferred Packages
- `Ardalis.Result` — operation results
- `Ardalis.GuardClauses` — input validation
- `FastEndpoints` — API endpoints (not MVC controllers)
- `xUnit` + `FluentAssertions` + `NSubstitute` — testing
- Do not add NuGet packages not on this list without asking first.
## Testing
- Every use case handler must have unit tests.
- Use Arrange / Act / Assert with a blank line between each section.
- Name tests: `MethodName_Condition_ExpectedResult`
- Integration tests use `WebApplicationFactory` and live in `tests/Integration/`.
## Build & Verify
- Build: `dotnet build`
- Test: `dotnet test`
- Always run both before marking any task complete. Do not proceed if either fails.
## Never Do These
- Do not push directly to `main` under any circumstances.
- Do not reference one module's internals from another module.
- Do not leave `// TODO` comments in committed code.
- Do not include secrets, connection strings, or API keys in any file.
---
## Skills
Before scaffolding a new module, read `.claude/skills/scaffold-module/SKILL.md`.
Before adding a new endpoint, read `.claude/skills/add-fastendpoints-endpoint/SKILL.md`.
Before writing a domain event and handler, read `.claude/skills/write-domain-event/SKILL.md`.
---
## Available Tools
### GitHub
- Use the GitHub MCP tool to create feature branches before making any code changes.
- Branch naming: `feature/<short-description>` (e.g., `feature/add-payments-module`).
- Always target `main` as the base branch unless told otherwise.
- Open PRs as drafts. Never open a ready-for-review PR without being asked.
- Include the work item number in the PR title and description.
### Azure DevOps
- Use the Azure DevOps MCP tool to fetch work item details when given a PBI or task number.
- Treat the acceptance criteria as the specification for the implementation.
- Do not mark a work item as done — only the team does that.
### NuGet
- Use the internal-nuget MCP tool to search for packages before adding any NuGet dependency.
- Do not add packages from nuget.org that have an internal equivalent.
### Database
- Use the database MCP tool to inspect the current schema before writing EF Core
migrations or making model changes.
- Only connect to the dev database. Never query staging or production.
---
## Workflows
### Feature Implementation
When given a PBI or task number, follow this sequence without deviation:
1. Fetch the work item from Azure DevOps and read the full description,
acceptance criteria, and any attachments.
2. Summarize what you understand the task to be and confirm before writing any code.
3. Create a feature branch via GitHub MCP.
4. If a new module is needed, read the scaffold-module skill before proceeding.
5. Implement the feature, following the coding conventions and architecture rules above.
6. Write unit tests covering all acceptance criteria and all `Result` outcomes.
7. Run `dotnet build` — fix any errors before continuing.
8. Run `dotnet test` — fix any failures before continuing.
9. Commit changes with a message referencing the work item number.
10. Open a draft PR via GitHub MCP with a description that includes:
- The PBI number and title
- A summary of what was implemented
- Any decisions or assumptions made
11. Stop and summarize what was completed. Do not begin the next task automatically.
A couple of instructions in there are worth calling out. First, step 2 has the agent confirm its understanding of the task before doing any coding. It’s a safeguard that lets you verify the agent understands the task at hand before it spends any cycles doing work. Second, step 11 directs the agent to stop when it completes the current task. This prevents chaining into additional PBIs or work without direction, which is a common cause of runaway behavior in AI coding agents.
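To make the testing and `Result<T>` conventions concrete, here is a minimal sketch of a handler and a test that follow them. All of the types here (`CreateOrderHandler`, `ICustomerRepository`, and so on) are hypothetical stand-ins, not code from a real project:

```csharp
using System.Threading.Tasks;
using Ardalis.Result;
using FluentAssertions;
using NSubstitute;
using Xunit;

// Minimal hypothetical types so the sketch is self-contained.
public record Customer(int Id);
public record Order(int CustomerId);
public record CreateOrderCommand(int CustomerId);

public interface ICustomerRepository
{
    Task<Customer?> GetByIdAsync(int id);
}

public class CreateOrderHandler(ICustomerRepository customers)
{
    // Returns a Result instead of throwing for an expected domain error.
    public async Task<Result<Order>> Handle(CreateOrderCommand command)
    {
        Customer? customer = await customers.GetByIdAsync(command.CustomerId);
        return customer is null
            ? Result<Order>.NotFound()
            : Result<Order>.Success(new Order(customer.Id));
    }
}

public class CreateOrderHandlerTests
{
    // Test name follows MethodName_Condition_ExpectedResult.
    [Fact]
    public async Task Handle_CustomerNotFound_ReturnsNotFoundResult()
    {
        // Arrange
        ICustomerRepository repository = Substitute.For<ICustomerRepository>();
        repository.GetByIdAsync(42).Returns((Customer?)null);
        var handler = new CreateOrderHandler(repository);

        // Act
        Result<Order> result = await handler.Handle(new CreateOrderCommand(42));

        // Assert
        result.Status.Should().Be(ResultStatus.NotFound);
    }
}
```

A test like this satisfies the conventions directly: one behavior per test, Arrange/Act/Assert separated by blank lines, and assertions on the `Result` status rather than on thrown exceptions.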
Example 2 - Automated PR Review
This example demonstrates an AI coding agent as a code reviewer. We give it the following prompt:
Review PR #142
The agent workflow might look like this:
- Agent fetches the PR diff via GitHub (MCP).
- Agent checks changes against directives in its instructions file(s).
- If database migrations are present, the agent queries the database schema to validate the changes (MCP).
- The agent flags violations of coding standards, identifies missing tests, and reports any architectural boundary issues.
- The agent posts its comments on the PR via GitHub (MCP).
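To keep the review output consistent, you can pin the format down in the instructions file as well. Here is a minimal sketch; the rule labels and wording are illustrative, not prescriptive:

```markdown
### PR Review Output
- Post one review comment per finding, anchored to the relevant line in the diff.
- Prefix each finding with the rule it violates, e.g. `[Coding Conventions] No .Result on tasks`.
- If no issues are found, post a single summary comment saying so.
- Never approve or request changes. Leave the final verdict to a human reviewer.
```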
Example 3 - Dependency Audit
This example demonstrates how an agent can use a workflow to carry out maintenance tasks.
Audit our NuGet package dependencies and flag anything that needs attention
The workflow:
- Agent scans the solution for all NuGet package references (for example, via the CLI commands shown after this list).
- It queries the internal NuGet MCP for approved versions.
- It checks for packages that have been explicitly banned per the instructions file(s).
- Agent generates a report of outdated and banned packages, as well as approved alternatives.
- The agent generates PBIs to address each flagged item (MCP).
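The first step here doesn’t require anything exotic. In a .NET repo, the agent can enumerate package references with the standard CLI, for example:

```bash
# List every package reference in the solution, including transitive ones
dotnet list package --include-transitive

# Flag references that have newer versions available
dotnet list package --outdated
```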
Creating Reliable Workflows
There are a few steps you can take to ensure that the workflows your agents run don’t go completely off the rails.
- Be explicit about the exit conditions - Clearly define when the agent should stop, and require it to check in with you when it reaches that point.
- Verification steps are non-negotiable - For .NET projects, every workflow should at the very least end with a successful `dotnet build` and `dotnet test`. For other stacks, define equivalent verification steps.
- Scope matters - Keep goals narrow and well-defined. Narrow goals produce far better results than open-ended ones.
- Checkpoints - For long-running workflows, define checkpoints where the agent should summarize its progress along the way.
- Fallback instructions - Tell the agent what to do if a step fails (see the snippet after this list).
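As an example of the last two points, a checkpoint-and-fallback clause in an instructions file can be short. A sketch, with wording to adapt rather than copy:

```markdown
### Checkpoints and Failure Handling
- After every third completed step in a long workflow, post a one-paragraph
  progress summary before continuing.
- If `dotnet build` or `dotnet test` fails twice in a row on the same error,
  stop. Report the error, what you tried, and what you recommend.
- If a required MCP tool is unavailable, stop and say so. Do not improvise
  a workaround.
```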
Instructions Files
To better guide an AI coding agent through a workflow, an important addition to your instructions files is a dedicated `## Workflows` section, separate and distinct from the tools and skills sections. This is the place to define the workflows that are key to your specific project, environment, and requirements. Make sure you don’t duplicate common workflows that are already defined elsewhere, such as in skills files.
Here are a few snippets from instructions files, each showing a minimal Workflows section.
Example Workflows Section 1
## Workflows
### Feature Implementation
When given a PBI number, follow this sequence:
1. Fetch the PBI from Azure DevOps
2. Read the scaffold-module skill if a new module is needed
3. Implement, test, and verify before opening a PR
4. Never push directly to main
### PR Review
When asked to review a PR:
1. Fetch the diff via GitHub MCP
2. Check against the conventions in this file
3. Post findings as review comments — do not approve automatically
Example Workflows Section 2
## Workflows
### Bug Fix
When given a bug report, PBI, or GitHub issue number, follow this sequence:
1. Fetch the work item and read the full description, repro steps, and any
attached logs or screenshots.
2. Identify the affected module(s) and locate the relevant code.
3. Write a failing unit test that reproduces the bug before changing any code.
4. Fix the bug so the new test passes without breaking existing tests.
5. Run `dotnet build` and `dotnet test` — fix any failures before continuing.
6. Commit with a message referencing the issue number and a brief description
of the root cause.
7. Open a draft PR via GitHub MCP. Include in the description:
- The issue number and title
- Root cause summary
- How the fix was verified
8. Stop and summarize. Do not move to the next issue automatically.
### Hotfix
When asked to apply a hotfix to a release branch:
1. Confirm the target branch before creating any branch or writing any code.
2. Create a hotfix branch from the release branch, not from main.
3. Apply the minimal change needed — do not refactor or clean up surrounding code.
4. Run `dotnet build` and `dotnet test`.
5. Commit and open a PR targeting the release branch, not main.
6. Note in the PR description that a separate PR to merge forward to main will
be needed — do not open that PR automatically.
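For step 2 of the hotfix workflow, the branch commands are simple but worth spelling out, since agents habitually branch from the default branch. A sketch, assuming release branches are named `release/x.y` (the branch names here are hypothetical):

```bash
# Branch the hotfix off the release branch, not main
git fetch origin
git checkout -b hotfix/order-total-rounding origin/release/2.3
```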
Example Workflows Section 3
## Workflows
### Add or Modify an Entity
When asked to add a new entity or modify an existing one:
1. Use the database MCP tool to inspect the current schema for the affected
module before writing any code.
2. Make the domain model changes first (entity, value objects, configuration).
3. Update the EF Core `IEntityTypeConfiguration` for the affected entity.
4. Generate the migration: `dotnet ef migrations add <MigrationName> --project src/{Module} --startup-project src/Api`
5. Review the generated migration — confirm no unexpected column drops or
renames before proceeding.
6. Run `dotnet build` and `dotnet test`.
7. Commit the model changes and migration together in a single commit.
8. Do not run `dotnet ef database update` — leave that to the deployment pipeline.
### Remove or Rename a Column
When asked to remove or rename a column:
1. Check the database MCP tool to confirm the column exists and is not a
foreign key target in another table.
2. If data needs to be migrated, add the migration logic explicitly — do not
rely on EF Core to infer it.
3. Search the codebase for all references to the old column name before
generating the migration.
4. Follow the Add or Modify an Entity workflow from step 4 onward.
5. Flag in the PR description that this is a destructive migration and requires
a deployment window review.
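For reference, step 3 of the Add or Modify an Entity workflow points at EF Core configuration classes like the one below. This is a minimal sketch with a hypothetical `Order` entity, not code from a real project:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// Hypothetical entity for illustration only.
public class Order
{
    public int Id { get; set; }
    public string CustomerEmail { get; set; } = string.Empty;
    public decimal Total { get; set; }
}

public class OrderConfiguration : IEntityTypeConfiguration<Order>
{
    public void Configure(EntityTypeBuilder<Order> builder)
    {
        builder.ToTable("Orders");
        builder.HasKey(o => o.Id);

        // Explicit constraints keep the generated migration predictable.
        builder.Property(o => o.CustomerEmail).HasMaxLength(320).IsRequired();
        builder.Property(o => o.Total).HasPrecision(18, 2);
    }
}
```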
Improving Your Workflows
The most critical piece of agentic workflows, as with anything involving AI coding agents, is to monitor and assess what the agent does, how well it does it, and where it goes wrong. Like everything else we’ve discussed up to this point, workflows should be treated as living, breathing documents. Review them regularly and update them where improvements can be made. Watch for agent failures and determine how to update the workflow guidance to avoid similar failures in the future.
Look for ways to optimize token usage by being clear and concise in your wording. Stick to short, technically precise language and avoid fluff. Lastly, avoid micromanaging your agent. Provide just enough detail to outline the steps, but be flexible enough to allow the agent to figure out the rest based on its other skills and knowledge.
Conclusion
With all four of these layers in place, your AI coding agents are ready to take a big step up. With instructions, skills, MCP servers, and workflows, your agents will take you a lot farther than basic prompting alone.
Where do we go from here? How about groups of AI coding agents working together on a task? In the final post of this series, we’ll be looking at specializing a coding agent and putting together teams of specialized agents to work together on a task.

