{"id":213228,"date":"2025-10-14T17:38:16","date_gmt":"2025-10-14T17:38:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/213228\/"},"modified":"2025-10-14T17:38:16","modified_gmt":"2025-10-14T17:38:16","slug":"how-to-build-reliable-ai-workflows-with-agentic-primitives-and-context-engineering","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/213228\/","title":{"rendered":"How to build reliable AI workflows with agentic primitives and context engineering"},"content":{"rendered":"<p>Many developers begin their AI explorations with a prompt. Perhaps you started the same way: You opened <a href=\"https:\/\/github.com\/features\/copilot?utm_source=blog-copilot-features-oct-2025&amp;utm_campaign=agentic-copilot-cli-launch-2025\" rel=\"nofollow noopener\" target=\"_blank\">GitHub Copilot<\/a>, started asking questions in natural language, and hoped for a usable output. This approach can work for simple fixes and code suggestions, but as your needs get more complex\u2014or as your work gets more collaborative\u2014you\u2019re going to need a more foolproof strategy.\u00a0<\/p>\n<p>This guide will introduce you to a three-part framework that transforms this ad-hoc style of AI experimentation into a repeatable and reliable engineering practice. At its core are two concepts: agentic primitives, which are reusable, configurable building blocks that enable AI agents to work systematically; and context engineering, which ensures your AI agents always focus on the right information. 
By familiarizing yourself with these concepts, you\u2019ll be able to build AI systems that can not only code independently, but do so reliably, predictably, and consistently.<\/p>\n<p><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" height=\"618\" width=\"1024\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/image1_6a15dc.png\" alt=\"An AI-native development framework, showing spec-driven development and agent workflows at the top, context engineering (including roles, rules, context, and memory) below, and prompt engineering (including role activation, context loading, tool invocation, and validation gates) at the base.\" class=\"wp-image-91407\"  \/>The AI-native development framework<br \/>Markdown prompt engineering + agent primitives + context engineering = reliability<\/p>\n<p>Whether you\u2019re new to AI-native development or looking to bring deeper reliability to your agent workflows, this guide will give you the foundation you need to build, scale, and share intelligent systems that learn and improve with every use.<\/p>\n<p>\ud83e\udde0 Try it yourself: Build and run agentic workflows with GitHub Copilot CLI<\/p>\n<p>Bring your agent primitives to life right from your terminal. The new GitHub Copilot CLI lets you run, debug, and automate AI workflows locally\u2014no setup scripts, no context loss. It connects directly to your repositories, pull requests, and issues through GitHub MCP, giving your agents the same context they\u2019d have in your IDE.\u00a0<\/p>\n<p>\ud83d\udc49 <a href=\"https:\/\/github.com\/github\/copilot-cli?utm_source=blog-source-cli-cta-oct-2025&amp;utm_campaign=agentic-copilot-cli-launch-2025\" rel=\"nofollow noopener\" target=\"_blank\">Get started with GitHub Copilot CLI &gt;<\/a><\/p>\n<p>What are agent primitives?\u00a0<\/p>\n<p>The three-layer framework below turns ad-hoc AI experimentation into a reliable, repeatable process. 
It does this by combining the structure of Markdown; the power of agent primitives, simple building blocks that give your AI agents clear instructions and capabilities; and smart context management, so your agents always get the right information (not just more information).\u00a0<\/p>\n<p>Layer 1: Use Markdown for more strategic prompt engineering<\/p>\n<p>We\u2019ve written about the <a href=\"https:\/\/github.blog\/ai-and-ml\/generative-ai\/prompt-engineering-guide-generative-ai-llms\/\" rel=\"nofollow noopener\" target=\"_blank\">importance of prompt engineering<\/a>. But here\u2019s what you need to know: The clearer, more precise, and more context-rich your prompt, the better and more accurate your outcome. This is where Markdown comes in. With Markdown\u2019s structure (its headers, lists, and links), you can naturally guide AI\u2019s reasoning, making outputs more predictable and consistent.\u00a0<\/p>\n<p>To provide a strong foundation for your prompt engineering, try these techniques with Markdown as your guide:\u00a0<\/p>\n<p>Context loading: [Review existing patterns](.\/src\/patterns\/). In this case, links become context injection points that pull in relevant information, either from files or websites.<\/p>\n<p>Structured thinking: Use headers and bullets to create clear reasoning pathways for the AI to follow.<\/p>\n<p>Role activation: Use phrases like \u201cYou are an expert [in this role].\u201d This triggers specialized knowledge domains and will focus the AI\u2019s responses.<\/p>\n<p>Tool integration: \u201cUse MCP tool tool-name.\u201d 
This lets your AI agent run code in a controlled, repeatable, and <a href=\"https:\/\/github.blog\/ai-and-ml\/github-copilot\/meet-the-github-mcp-registry-the-fastest-way-to-discover-mcp-servers\/\" rel=\"nofollow noopener\" target=\"_blank\">predictable way on MCP servers<\/a>.<\/p>\n<p>Precise language: Eliminate ambiguity through specific instructions.<\/p>\n<p>Validation gates: \u201cStop and get user approval.\u201d Make sure there is always human oversight at critical decision points.<\/p>\n<p>For example, instead of saying, \u201cFind and fix the bug,\u201d use the following:<\/p>\n<p>You are an expert debugger, specialized in debugging complex programming issues.<\/p>\n<p>You are particularly great at debugging this project, whose architecture and quirks can be consulted in the [architecture document](.\/docs\/architecture.md). <\/p>\n<p>Follow these steps:<\/p>\n<p>1. Review the [error logs](.\/logs\/error.log) and identify the root cause. <\/p>\n<p>2. Use the `azmcp-monitor-log-query` MCP tool to retrieve infrastructure logs from Azure.  <\/p>\n<p>3. Once you find the root cause, think about 3 potential solutions with trade-offs.<\/p>\n<p>4. Present your root cause analysis and suggested solutions with trade-offs to the user and seek validation before proceeding with fixes &#8211; do not change any files.<\/p>\n<p>Once you\u2019re comfortable with structured prompting, you\u2019ll quickly realize that manually crafting perfect prompts for every task is unsustainable. (Who has the time?) This is where the second step comes in: turning your prompt engineering insights into reusable, configurable systems.<\/p>\n<p>Layer 2: Agentic primitives: Deploying your new prompt engineering techniques<\/p>\n<p>Now it\u2019s time to implement all of your new strategies more systematically, instead of prompting ad hoc. 
These configurable tools will help you do just that.<\/p>\n<p>Core agent primitives<\/p>\n<p>When it comes to AI-native development, a <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/concepts\/#core-primitives\" rel=\"nofollow noopener\" target=\"_blank\">core agent primitive<\/a> refers to a simple, reusable file or module that provides a specific capability or rule for an agent.\u00a0<\/p>\n<p>Here are some examples:<\/p>\n<p>Instructions files: Deploy structured guidance through modular .instructions.md files with targeted scope. At GitHub, we offer <a href=\"https:\/\/docs.github.com\/en\/enterprise-cloud@latest\/copilot\/how-tos\/configure-custom-instructions\/add-repository-instructions\" rel=\"nofollow noopener\" target=\"_blank\">custom instructions<\/a> to give Copilot repository-specific guidance and preferences.\u00a0<\/p>\n<p>Chat modes: Deploy role-based expertise through .chatmode.md files with MCP tool boundaries that prevent security breaches and cross-domain interference. 
Think of these as professional licenses that keep architects from building and engineers from planning.<\/p>\n<p>Agentic workflows: Deploy reusable prompts through .prompt.md files with built-in validation.<\/p>\n<p>Specification files: Create implementation-ready blueprints through .spec.md files that ensure repeatable results, whether the work is done by a person or by AI.<\/p>\n<p>Agent memory files: Preserve knowledge across sessions through .memory.md files.<\/p>\n<p>Context helper files: Optimize information retrieval through .context.md files.<\/p>\n<p>How using a core agent primitive can transform a prompt and its outcome<\/p>\n<p>Technique: Using Markdown prompt engineering, your prompt can be: \u201cImplement secure user authentication system\u201d\u00a0<\/p>\n<p>Primitives: You\u2019ll select backend-dev chat mode \u2192 Auto-triggers security.instructions.md via applyTo: &quot;auth\/**&quot; \u2192 Loads context from [Previous auth patterns](.memory.md#security) and [API Security Standards](api-security.context.md#rest) \u2192 Generates user-auth.spec.md using structured templates \u2192 Executes implement-from-spec.prompt.md workflow with validation gates.<\/p>\n<p>Outcome: Developer-driven knowledge accumulation where you capture implementation failures in .memory.md, document successful patterns in .instructions.md, and refine workflows in .prompt.md files\u2014creating compound intelligence that improves through your iterative refinement.<\/p>\n<p>This transformation might seem complex, but notice the pattern: What started as an ad-hoc request became a systematic workflow with clear handoff points, automatic context loading, and built-in validation.\u00a0<\/p>\n<p>When you use these files and modules, you can keep adjusting and improving how your AI agent works at every step. Every time you iterate, you make your agent a little more reliable and consistent. 
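<\/p>\n<p>To make this concrete, here is a minimal sketch of what an agent memory file might look like. The file name, headings, and entries here are illustrative, not a prescribed format:<\/p>\n<p># Project Memory<\/p>\n<p>## Security decisions<br \/>\n- Rotate JWT refresh tokens on every use<\/p>\n<p>## Failed approaches<br \/>\n- Client-side rate limiting alone was insufficient; enforce limits at the API gateway<\/p>\n<p>## Conventions<br \/>\n- Database migrations are written by hand, never auto-generated<\/p>\n<p>An agent told to read this file at the start of each session inherits these decisions without you having to restate them.<\/p>\n<p>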
And this isn\u2019t just random trial and error \u2014 you\u2019re following a structured, repeatable approach that helps you get better and more predictable results every time you use the AI.<\/p>\n<p>\ud83d\udca1 Native VS Code support: While VS Code natively supports .instructions.md, .prompt.md, and .chatmode.md files, this framework takes things further with .spec.md, .memory.md, and .context.md patterns that unlock even more exciting possibilities for AI-powered software development.<\/p>\n<p>With your prompts structured and your agentic primitives set up, you may encounter a new challenge: Even the best prompts and primitives can fail when they\u2019re faced with irrelevant context or they\u2019re competing for limited AI attention. The third layer, which we\u2019ll get to next, addresses this through strategic context management.<\/p>\n<p>Layer 3: Context engineering: Helping your AI agents focus on what matters<\/p>\n<p>Just like people, LLMs have finite memory (context windows), and can sometimes be forgetful. If you can be strategic about the context you give them, you can help them focus on what\u2019s relevant and enable them to get started and work more quickly. This helps them preserve valuable context window space and improve their reliability and effectiveness.<\/p>\n<p>Here are some techniques to make sure they get the right context\u2014this is called context engineering:\u00a0<\/p>\n<p>Session splitting: Use distinct agent sessions for different development phases and tasks. For example, use one session for planning, one for implementation, and one for testing. If an agent has fresh context, it\u2019ll have better focus. It\u2019s always better to have a fresh context window for complex tasks.\u00a0<\/p>\n<p>Modular and custom rules and instructions: Apply only relevant instructions through targeted .instructions.md files using applyTo YAML frontmatter syntax. 
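<\/p>\n<p>As a quick illustration, the applyTo frontmatter at the top of an .instructions.md file scopes where that guidance loads. The glob and guidance below are a hypothetical sketch:<\/p>\n<p>---<br \/>\napplyTo: &quot;**\/test\/**&quot;<br \/>\ndescription: &quot;Testing guidelines, loaded only when working in test directories&quot;<br \/>\n---<br \/>\n# Testing Guidelines<\/p>\n<p>- Prefer table-driven tests over one-off cases<br \/>\n- Mock external services at the network boundary<\/p>\n<p>Scoped this way, the testing guidance never consumes context space while you edit production code.<\/p>\n<p>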
This preserves context space for actual work and reduces irrelevant suggestions.\u00a0<\/p>\n<p>Memory-driven development: Leverage agent memory through .memory.md files to maintain project knowledge and decisions across sessions and time.<\/p>\n<p>Context optimization: Use .context.md context helper files strategically to accelerate information retrieval and reduce cognitive load.\u00a0<\/p>\n<p>Cognitive focus optimization: Use chat modes in .chatmode.md files to keep the AI\u2019s attention on relevant domains and prevent cross-domain interference. Less context pollution means you\u2019ll have more consistent and accurate outputs.\u00a0<\/p>\n<p>Agentic workflows: The complete system in action<\/p>\n<p>Now that you understand all three layers, you can see how they combine into agentic workflows\u2014complete, systematic processes where all of your agentic primitives are working together, understanding your prompts, and using only the context they need.\u00a0\u00a0<\/p>\n<p>These agentic workflows can be implemented as .prompt.md files that coordinate multiple agentic primitives into processes, designed to work whether executed locally in your IDE, in your terminal or in your CI pipelines.<\/p>\n<p>Need a recap?\u00a0<\/p>\n<p>Markdown prompt engineering provides the structural foundation for predictable AI interactions.<\/p>\n<p>Agent primitives are your configurable tools that scale and systematize these techniques.<\/p>\n<p>Context engineering optimizes AI cognitive performance within memory constraints.<\/p>\n<p>Agentic workflows in Markdown apply prompt and context engineering that leverages agent primitives to implement complete, reliable agentic processes.<\/p>\n<p>This framework creates compound intelligence that improves as you continue to iterate.<\/p>\n<p>Now that you understand the three-layer framework and that the agentic primitives are essentially executable software written in natural language, the question is: How can you scale these Markdown 
files beyond your individual development workflow?<\/p>\n<p>Natural language as code<\/p>\n<p>The answer mirrors every programming ecosystem\u2019s evolution. Just as JavaScript evolved from browser scripts to Node.js runtimes, package managers, and deployment tooling, agent primitives need similar infrastructure to reach their full potential.<\/p>\n<p>This isn\u2019t just a metaphor: These .prompt.md and .instructions.md files represent a genuine new form of software development that requires proper tooling infrastructure.<\/p>\n<p>Here\u2019s what we mean: Think of your agent primitives as real pieces of software, just written in natural language instead of code. They have all the same qualities: You can break complex tasks into smaller pieces (modularity), use the same instructions in multiple places (reusability), rely on other tools or files (dependencies), keep improving and updating them (evolution), and share them across teams (distribution).<\/p>\n<p>That said, your natural language programs are going to need the same infrastructure support as any other software.\u00a0\u00a0<\/p>\n<p>Agent CLI runtimes<\/p>\n<p>Most developers start by creating and running agent primitives directly in VS Code with GitHub Copilot, which is ideal for interactive development, debugging, and refining daily workflows. However, when you want to move beyond the editor\u2014to automate your workflows, schedule them, or integrate them into larger systems\u2014<a href=\"https:\/\/github.blog\/changelog\/2025-09-25-github-copilot-cli-is-now-in-public-preview\/?utm_source=blog-source-cli-changelog-oct-2025&amp;utm_campaign=agentic-copilot-cli-launch-2026\" rel=\"nofollow noopener\" target=\"_blank\">you need agent CLI runtimes like Copilot CLI<\/a>.\u00a0<\/p>\n<p>These runtimes let you execute your agent primitives from the command line and tap into advanced model capabilities. 
This shift unlocks automation, scaling, and seamless integration into production environments, taking your natural language programs from personal tools to powerful, shareable solutions.\u00a0<\/p>\n<p><a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/tooling\/#inner-loop-vs-outer-loop\" rel=\"nofollow noopener\" target=\"_blank\">Inner loop vs. outer loop<\/a><\/p>\n<p>Inner loop (VS Code and GitHub Copilot): Interactive development, testing, and workflow refinement<\/p>\n<p>Outer loop (agent CLI runtimes): Reproducible execution, CI\/CD integration, and production deployment<\/p>\n<p>Agent CLI Runtimes transform your agent primitives from IDE-bound files into independently executable workflows that run consistently across any environment. They provide command-line execution, CI\/CD integration, environment consistency, and native support for MCP servers, which bridge your development work to production reality.<\/p>\n<p>TL;DR: Use the inner loop for rapid, interactive work and the outer loop for reliable, repeatable automation and deployment.<\/p>\n<p>Runtime management<\/p>\n<p>While VS Code and GitHub Copilot handle individual development, some teams may want additional infrastructure for sharing, versioning, and productizing their agent primitives. Managing multiple Agent CLI runtimes can become complex quickly, with different installation procedures, configuration requirements, and compatibility matrices.<\/p>\n<p><a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM (Agent Package Manager)<\/a> solves this by providing unified runtime management and package distribution. 
Instead of manually installing and configuring each vendor CLI, APM handles the complexity while preserving your existing VS Code workflow.<\/p>\n<p>Here\u2019s how runtime management works in practice:<\/p>\n<p># Install APM once<br \/>\ncurl -sSL https:\/\/raw.githubusercontent.com\/danielmeppiel\/apm\/main\/install.sh | sh<\/p>\n<p># Optional: setup your GitHub PAT to use GitHub Copilot CLI<br \/>\nexport GITHUB_COPILOT_PAT=your_token_here<\/p>\n<p># APM manages runtime installation for you<br \/>\napm runtime setup copilot          # Installs GitHub Copilot CLI<br \/>\napm runtime setup codex            # Installs OpenAI Codex CLI<\/p>\n<p># Install MCP dependencies (like npm install)<br \/>\napm install<\/p>\n<p># Compile Agent Primitive files to Agents.md files<br \/>\napm compile<\/p>\n<p># Run workflows against your chosen runtime<br \/>\n# This will trigger the 'copilot -p security-review.prompt.md' command<br \/>\n# Check the example apm.yml file a bit below in this guide<br \/>\napm run copilot-sec-review --param pr_id=123<\/p>\n<p>As you can see, your daily development stays exactly the same in VS Code, APM installs and configures runtimes automatically, your workflows run regardless of which runtime is installed, and the same apm run command works consistently across all runtimes.<\/p>\n<p>Distribution and packaging<\/p>\n<p>Agent primitives\u2019 similarities to traditional software become most apparent when you get to the point of wanting to share them with your team or deploying them into production\u2014when you start to require things like package management, dependency resolution, version control, and distribution mechanisms.<\/p>\n<p>Here\u2019s the challenge: You\u2019ve built powerful agent primitives in VS Code and your team wants to use them, but distributing Markdown files and ensuring consistent MCP dependencies across different environments becomes unwieldy. 
You need the equivalent of npm for natural language programs.<\/p>\n<p><a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM<\/a> provides this missing layer. It doesn\u2019t replace your VS Code workflow\u2014it extends it by creating distributable packages of agent primitives complete with dependencies, configuration, and runtime compatibility that teams can share, just like npm packages.<\/p>\n<p><a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/tooling\/#package-management-in-practice\" rel=\"nofollow noopener\" target=\"_blank\">Package management in practice<\/a><\/p>\n<p># Initialize new APM project (like npm init)<br \/>\napm init security-review-workflow<\/p>\n<p># Develop and test your workflow locally<br \/>\ncd security-review-workflow<br \/>\napm compile &amp;&amp; apm install<br \/>\napm run copilot-sec-review --param pr_id=123<\/p>\n<p># Package for distribution (future: apm publish)<br \/>\n# Share apm.yml and Agent Primitive files with team<br \/>\n# Team members can install and use your primitives<br \/>\ngit clone your-workflow-repo<br \/>\ncd your-workflow-repo &amp;&amp; apm compile &amp;&amp; apm install<br \/>\napm run copilot-sec-review --param pr_id=456<\/p>\n<p>The benefits compound quickly: You can distribute tested workflows as versioned packages with dependencies, automatically resolve and install required MCP servers, track workflow evolution and maintain compatibility across updates, build on (and contribute to) shared libraries from the community, and ensure everyone\u2019s running the same thing.<\/p>\n<p><a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/tooling\/#project-configuration\" rel=\"nofollow noopener\" target=\"_blank\">Project configuration<\/a><\/p>\n<p>The following apm.yml configuration file serves as the package.json equivalent for agent primitives, defining scripts, dependencies, and input parameters:<\/p>\n<p># 
apm.yml - Project configuration (like package.json)<br \/>\nname: security-review-workflow<br \/>\nversion: 1.2.0<br \/>\ndescription: Comprehensive security review process with GitHub integration<\/p>\n<p>scripts:<br \/>\n  copilot-sec-review: &quot;copilot --log-level all --log-dir copilot-logs --allow-all-tools -p security-review.prompt.md&quot;<br \/>\n  codex-sec-review: &quot;codex security-review.prompt.md&quot;<br \/>\n  copilot-debug: &quot;copilot --log-level all --log-dir copilot-logs --allow-all-tools -p security-review.prompt.md&quot;<\/p>\n<p>dependencies:<br \/>\n  mcp:<br \/>\n    - ghcr.io\/github\/github-mcp-server<\/p>\n<p>With this, your agent primitives can now be packaged as distributable software with managed dependencies.<\/p>\n<p>Production deployment<\/p>\n<p>The final piece of the tooling ecosystem enables continuous AI: packaged agent primitives can now run automatically in the same CI\/CD pipelines you use every day, bringing your carefully developed workflows into your production environment.<\/p>\n<p>Using <a href=\"https:\/\/github.com\/marketplace\/actions\/apm-agent-package-manager\" rel=\"nofollow noopener\" target=\"_blank\">APM GitHub Action<\/a>, and building on the security-review-workflow package example above, here\u2019s how the same APM project deploys to production with multi-runtime flexibility:<\/p>\n<p># .github\/workflows\/security-review.yml<br \/>\nname: AI Security Review Pipeline<br \/>\non:<br \/>\n  pull_request:<br \/>\n    types: [opened, synchronize]<\/p>\n<p>jobs:<br \/>\n  security-analysis:<br \/>\n    runs-on: ubuntu-latest<br \/>\n    strategy:<br \/>\n      matrix:<br \/>\n        # Maps to apm.yml scripts<br \/>\n        script: [copilot-sec-review, codex-sec-review, copilot-debug]<br \/>\n    permissions:<br \/>\n      models: read<br \/>\n      pull-requests: write<br \/>\n      contents: read<\/p>\n<p>    steps:<br \/>\n    - uses: 
actions\/checkout@v4<\/p>\n<p>    - name: Run Security Review (${{ matrix.script }})<br \/>\n      uses: danielmeppiel\/action-apm-cli@v1<br \/>\n      with:<br \/>\n        script: ${{ matrix.script }}<br \/>\n        parameters: |<br \/>\n          {<br \/>\n            &quot;pr_id&quot;: &quot;${{ github.event.pull_request.number }}&quot;<br \/>\n          }<br \/>\n      env:<br \/>\n        GITHUB_COPILOT_PAT: ${{ secrets.COPILOT_CLI_PAT }}<\/p>\n<p>Key connection: The matrix.script values (copilot-sec-review, codex-sec-review, copilot-debug) correspond exactly to the scripts defined in the apm.yml configuration above. <a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM<\/a> automatically installs the MCP dependencies (ghcr.io\/github\/github-mcp-server) and passes the input parameters (pr_id) to your security-review.prompt.md workflow.<\/p>\n<p>Here\u2019s why this matters:\u00a0<\/p>\n<p>Automation: Your AI workflows now run on their own, without anyone needing to manually trigger them.<\/p>\n<p>Reliability: They run with the same consistency and reproducibility as traditional code deployments.<\/p>\n<p>Flexibility: You can run different versions or types of analysis (mapped to different scripts) as needed.<\/p>\n<p>Integration: These workflows become part of your organization\u2019s standard CI\/CD pipelines, just like regular software quality checks.<\/p>\n<p>This setup ultimately means your agent primitives are no longer just local experiments\u2014they are fully automated tools that you can rely on as part of your software delivery process, running in CI\/CD whenever needed, with all dependencies and parameters managed for you.<\/p>\n<p>Ecosystem evolution<\/p>\n<p>This progression follows the same predictable pattern as every successful programming ecosystem. 
Understanding this pattern helps you see where AI-native development is heading and how to position your work strategically.<\/p>\n<p>The evolution happens in four stages:<\/p>\n<p>Raw Code \u2192 agent primitives (.prompt.md, .instructions.md files)<\/p>\n<p>Runtime environments \u2192 Agent CLI runtimes\u00a0<\/p>\n<p>Package management \u2192 <a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM<\/a> (distribution and orchestration layer)<\/p>\n<p>Thriving ecosystem \u2192 Shared libraries, tools, and community packages<\/p>\n<p>Just as npm enabled JavaScript\u2019s explosive growth by solving the package distribution problem, <a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM<\/a> enables the agent primitive ecosystem to flourish by providing the missing infrastructure layer that makes sharing and scaling natural language programs practical.<\/p>\n<p>The transformation is profound: what started as individual Markdown files in your editor becomes a systematic software development practice with proper tooling, distribution, and production deployment capabilities.<\/p>\n<p>Key takeaways<\/p>\n<p>Agent primitives are software: Your .prompt.md and .instructions.md files represent executable natural language programs that deserve professional tooling infrastructure.<\/p>\n<p>Runtime diversity enables scale: Agent CLI runtimes provide the execution environments that bridge development to production.<\/p>\n<p>Package management is critical: <a href=\"https:\/\/github.com\/danielmeppiel\/apm\" rel=\"nofollow noopener\" target=\"_blank\">APM<\/a> provides the npm-equivalent layer that makes agent primitives truly portable and shareable.<\/p>\n<p>Production ready today: This tooling stack enables automated AI workflows in CI\/CD pipelines with enterprise-grade reliability.<\/p>\n<p>Ecosystem growth pattern: Package management infrastructure creates the foundation for thriving 
ecosystems of shared workflows, tools, and community libraries.<\/p>\n<p>How to get started with building your first agent primitive<\/p>\n<p>Now it\u2019s time to build your first agent primitives. Here\u2019s the plan:\u00a0<\/p>\n<p>Start with instructions: Write clear instructions that tell the AI exactly what you want it to do and how it should behave.<\/p>\n<p>Add chat modes: Set up special rules (chat modes) to create safe boundaries for the AI, making sure it interacts in the way you want and avoids unwanted behavior.<\/p>\n<p>Build reusable prompts: Create prompt templates for tasks you do often, so you don\u2019t have to start from scratch each time. These templates help the AI handle common jobs quickly and consistently.<\/p>\n<p>Create specification templates: Make templates that help you plan out what you want your AI to accomplish, then turn those plans into actionable steps the AI can follow.<\/p>\n<p>Instructions architecture<\/p>\n<p>Instructions form the bedrock of reliable AI behavior: They\u2019re the rules that guide the agent without cluttering your immediate context. Rather than repeating the same guidance in every conversation, instructions embed your team\u2019s knowledge directly into the AI\u2019s reasoning process.<\/p>\n<p>The key insight is modularity: instead of one massive instruction file that applies everywhere, you can create targeted files that activate only when working with specific technologies or file types. 
This context engineering approach keeps your AI focused and your guidance relevant.<\/p>\n<p>\u2705 Quick actions:<\/p>\n<p>\ud83d\udd27 <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#-tools--files\" rel=\"nofollow noopener\" target=\"_blank\">Tools and files:<\/a><\/p>\n<p>.github\/<br \/>\n\u251c\u2500\u2500 copilot-instructions.md          # Global repository rules<br \/>\n\u2514\u2500\u2500 instructions\/<br \/>\n    \u251c\u2500\u2500 frontend.instructions.md     # applyTo: &quot;**\/*.{jsx,tsx,css}&quot;<br \/>\n    \u251c\u2500\u2500 backend.instructions.md      # applyTo: &quot;**\/*.{py,go,java}&quot;<br \/>\n    \u2514\u2500\u2500 testing.instructions.md      # applyTo: &quot;**\/test\/**&quot;<\/p>\n<p>Example: Markdown prompt engineering in Instructions with frontend.instructions.md:<\/p>\n<p>---<br \/>\napplyTo: &quot;**\/*.{ts,tsx}&quot;<br \/>\ndescription: &quot;TypeScript development guidelines with context engineering&quot;<br \/>\n---<br \/>\n# TypeScript Development Guidelines<\/p>\n<p>## Context Loading<br \/>\nReview [project conventions](..\/docs\/conventions.md) and<br \/>\n[type definitions](..\/types\/index.ts) before starting.<\/p>\n<p>## Deterministic Requirements<br \/>\n- Use strict TypeScript configuration<br \/>\n- Implement error boundaries for React components<br \/>\n- Apply ESLint TypeScript rules consistently<\/p>\n<p>## Structured Output<br \/>\nGenerate code with:<br \/>\n- [ ] JSDoc comments for all public APIs<br \/>\n- [ ] Unit tests in `__tests__\/` directory<br \/>\n- [ ] Type exports in appropriate index files<\/p>\n<p>\u26a0\ufe0f Checkpoint: Instructions are context-efficient and non-conflicting.<\/p>\n<p>Chat modes configuration<\/p>\n<p>With your instruction architecture in place, you still need a way to enforce domain boundaries 
and prevent AI agents from overstepping their expertise. Chat modes solve this by creating professional boundaries similar to real-world licensing. For example, you\u2019d want your architect to plan a bridge, not build it themselves.\u00a0<\/p>\n<p>Here\u2019s how to set those boundaries:\u00a0<\/p>\n<p>Define domain-specific <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/chat\/chat-modes\" rel=\"nofollow noopener\" target=\"_blank\">custom chat modes<\/a> with MCP tool boundaries.<\/p>\n<p>Encapsulate tech stack knowledge and guidelines per mode.<\/p>\n<p>Define the most appropriate <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/chat\/chat-modes#_chat-mode-file-example\" rel=\"nofollow noopener\" target=\"_blank\">LLM model<\/a> for your chat mode.<\/p>\n<p>Configure secure <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/chat\/chat-modes#_chat-mode-file-example\" rel=\"nofollow noopener\" target=\"_blank\">MCP tool access<\/a> to prevent cross-domain security breaches.<\/p>\n<p>\ud83d\udca1 Security through MCP tool boundaries: Each chat mode receives only the specific MCP tools needed for its domain. 
Giving each chat mode only the tools it needs keeps your AI workflows safe, organized, and professionally separated\u2014just like real-world roles and permissions.<\/p>\n<p>\ud83d\udd27 <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#-tools--files-1\" rel=\"nofollow noopener\" target=\"_blank\">Tools and files:<\/a><\/p>\n<p>.github\/<br \/>\n\u2514\u2500\u2500 chatmodes\/<br \/>\n    \u251c\u2500\u2500 architect.chatmode.md             # Planning specialist - designs, cannot execute<br \/>\n    \u251c\u2500\u2500 frontend-engineer.chatmode.md     # UI specialist - builds interfaces, no backend access<br \/>\n    \u251c\u2500\u2500 backend-engineer.chatmode.md      # API specialist - builds services, no UI modification<br \/>\n    \u2514\u2500\u2500 technical-writer.chatmode.md      # Documentation specialist - writes docs, cannot run code<\/p>\n<p>Example: Creating MCP tool boundaries with backend-engineer.chatmode.md:<\/p>\n<p>---<br \/>\ndescription: 'Backend development specialist with security focus'<br \/>\ntools: ['changes', 'codebase', 'editFiles', 'runCommands', 'runTasks',<br \/>\n        'search', 'problems', 'testFailure', 'terminalLastCommand']<br \/>\nmodel: Claude Sonnet 4<br \/>\n---<br \/>\nYou are a backend development specialist focused on secure API development, database design, and server-side architecture. 
You prioritize security-first design patterns and comprehensive testing strategies.<\/p>\n<p>## Domain Expertise<br \/>\n- RESTful API design and implementation<br \/>\n- Database schema design and optimization<br \/>\n- Authentication and authorization systems<br \/>\n- Server security and performance optimization<\/p>\n<p>You know this project\u2019s backend well, having read all of [the backend docs](..\/..\/docs\/backend).<\/p>\n<p>## Tool Boundaries<br \/>\n- **CAN**: Modify backend code, run server commands, execute tests<br \/>\n- **CANNOT**: Modify client-side assets<\/p>\n<p>You can also <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#security--professional-boundaries\" rel=\"nofollow noopener\" target=\"_blank\">create security and professional boundaries<\/a>, including:<\/p>\n<p>Architect mode: Allow access to research tools only, so they can\u2019t execute destructive commands or modify production code.<\/p>\n<p>Frontend engineer mode: Allow access to UI development tools only, so they can\u2019t access databases or backend services.<\/p>\n<p>Backend engineer mode: Allow access to API and database tools only, so they can\u2019t modify user interfaces or frontend assets.<\/p>\n<p>Technical writer mode: Allow access to documentation tools only, so they can\u2019t run code, deploy, or access sensitive systems.<\/p>\n<p>\u26a0\ufe0f Checkpoint: Each mode has clear boundaries and tool restrictions.<\/p>\n<p>Agentic workflows<\/p>\n<p>Agentic workflows can be implemented as reusable .prompt.md files that orchestrate all your primitives into systematic, repeatable end-to-end processes. These can be executed locally or delegated to independent agents. 
Here\u2019s how to get started:\u00a0<\/p>\n<p>Create <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/copilot-customization#_prompt-files-experimental\" rel=\"nofollow noopener\" target=\"_blank\">.prompt.md files<\/a> for complete development processes.<\/p>\n<p>Build in mandatory human reviews.<\/p>\n<p>Design workflows for both local execution and independent delegation.<\/p>\n<p>\ud83d\udd27 <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#-tools--files-2\" rel=\"nofollow noopener\" target=\"_blank\">Tools and files:<\/a><\/p>\n<p>.github\/prompts\/<br \/>\n\u251c\u2500\u2500 code-review.prompt.md           # With validation checkpoints<br \/>\n\u251c\u2500\u2500 feature-spec.prompt.md          # Spec-first methodology<br \/>\n\u2514\u2500\u2500 async-implementation.prompt.md  # GitHub Coding Agent delegation<\/p>\n<p>Example: Complete agentic workflow with feature-spec.prompt.md:<\/p>\n<p>---<br \/>\nmode: agent<br \/>\nmodel: gpt-4<br \/>\ntools: ['file-search', 'semantic-search', 'github']<br \/>\ndescription: 'Feature implementation workflow with validation gates'<br \/>\n---<br \/>\n# Feature Implementation from Specification<\/p>\n<p>## Context Loading Phase<br \/>\n1. Review [project specification](${specFile})<br \/>\n2. Analyze [existing codebase patterns](.\/src\/patterns\/)<br \/>\n3. 
Check [API documentation](.\/docs\/api.md)<\/p>\n<p>## Deterministic Execution<br \/>\nUse semantic search to find similar implementations<br \/>\nUse file search to locate test patterns: `**\/*.test.{js,ts}`<\/p>\n<p>## Structured Output Requirements<br \/>\nCreate implementation with:<br \/>\n- [ ] Feature code in appropriate module<br \/>\n- [ ] Comprehensive unit tests (&gt;90% coverage)<br \/>\n- [ ] Integration tests for API endpoints<br \/>\n- [ ] Documentation updates<\/p>\n<p>## Human Validation Gate<br \/>\n\ud83d\udea8 **STOP**: Review implementation plan before proceeding to code generation.<br \/>\nConfirm: Architecture alignment, test strategy, and breaking change impact.<\/p>\n<p>\u26a0\ufe0f Checkpoint: As you can see, these prompts include explicit validation gates.<\/p>\n<p>Specification templates<\/p>\n<p>There\u2019s often a gap between planning (coming up with what needs to be built) and implementation (actually building it). Without a clear, consistent way to document requirements, things can get lost in translation, leading to mistakes, misunderstandings, or missed steps. This is where specification templates come in. 
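<\/p>\n<p>To make this concrete, here\u2019s a minimal sketch of what a .spec.md template could look like. The section names below are illustrative, not a prescribed format\u2014adapt them to your team\u2019s conventions:<\/p>

```markdown
# Spec: <feature name>

## Problem
What user or system need does this feature address?

## Approach
The high-level design decision, plus alternatives considered and rejected.

## Required Components
- [ ] API endpoint(s) affected
- [ ] Data model changes
- [ ] UI changes (if any)

## Validation Criteria
- [ ] Unit and integration tests pass
- [ ] Acceptance criteria from the originating issue are met

## Handoff Checklist
- [ ] Split into implementation-ready tasks
- [ ] Reviewed by a human before delegation
```

<p>Each section gives a person or an AI agent a deterministic slot to fill before implementation begins, which is what makes the planning-to-implementation handoff repeatable.<\/p>\n<p>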
These templates ensure that both people and AI agents can take a concept (like a new feature or API) and reliably implement it.\u00a0<\/p>\n<p>Here\u2019s what these templates help you accomplish:\u00a0<\/p>\n<p>Standardize the process: You create a new specification for each feature, API endpoint, or component.<\/p>\n<p>Provide blueprints for implementation: These specs include everything a developer (or an AI agent) needs to know to start building: the problem, the approach, required components, validation criteria, and a checklist for handoff.<\/p>\n<p>Make handoff deterministic: By following a standard, the transition from planning to doing is clear and predictable.<\/p>\n<p>\ud83d\udd27 <a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#-tools--files-3\" rel=\"nofollow noopener\" target=\"_blank\">Tools and files:<\/a>\u00a0<\/p>\n<p><a href=\"https:\/\/github.com\/github\/spec-kit?utm_source=blog-spec-kit-first-oct-2025&amp;utm_campaign=blog-spec-kit-repo-oct-2025\" rel=\"nofollow noopener\" target=\"_blank\">Spec-kit<\/a> is a neat tool that fully implements a specification-driven approach to agentic coding. It lets you quickly create a spec (spec.md) and an implementation plan (plan.md), and then split that plan into actual tasks (tasks.md) ready for developers or coding agents to work on.<\/p>\n<p>\u26a0\ufe0f Checkpoint: Specifications are split into tasks that are implementation-ready before delegation.<\/p>\n<p>Ready to go? Here\u2019s a quickstart checklist<\/p>\n<p>You now have a complete foundation for systematic AI development. 
The checklist below walks through the implementation sequence, building toward creating complete agentic workflows.<\/p>\n<p><a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#conceptual-foundation\" rel=\"nofollow noopener\" target=\"_blank\">Conceptual foundation<\/a><\/p>\n<p>Understand Markdown prompt engineering principles (semantic structure, precision, and tools).<\/p>\n<p>Grasp context engineering fundamentals (context window optimization and session strategy).<\/p>\n<p><a href=\"https:\/\/danielmeppiel.github.io\/awesome-ai-native\/docs\/getting-started\/#implementation-steps\" rel=\"nofollow noopener\" target=\"_blank\">Implementation steps<\/a><\/p>\n<p>Create <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/copilot-customization#_use-a-githubcopilot-instructionsmd-file\" rel=\"nofollow noopener\" target=\"_blank\">.github\/copilot-instructions.md<\/a> with basic project guidelines (context engineering: global rules).<\/p>\n<p>Set up domain-specific <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/copilot-customization#_use-instructionsmd-files\" rel=\"nofollow noopener\" target=\"_blank\">.instructions.md files<\/a> with applyTo patterns (context engineering: selective loading).<\/p>\n<p>Configure <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/copilot-customization#_custom-chat-modes\" rel=\"nofollow noopener\" target=\"_blank\">chat modes<\/a> for your tech stack domains (context engineering: domain boundaries).<\/p>\n<p>Create your first <a href=\"https:\/\/code.visualstudio.com\/docs\/copilot\/copilot-customization#_prompt-files-experimental\" rel=\"nofollow noopener\" target=\"_blank\">.prompt.md agentic workflow<\/a>.<\/p>\n<p>Build your first .spec.md template for feature specifications (you can use <a href=\"https:\/\/github.com\/github\/spec-kit\" rel=\"nofollow noopener\" target=\"_blank\">spec-kit<\/a> for this).<\/p>\n<p>Practice a spec-driven approach with session splitting: 
<a href=\"https:\/\/github.com\/github\/spec-kit?tab=readme-ov-file#4-create-a-technical-implementation-plan\" rel=\"nofollow noopener\" target=\"_blank\">plan<\/a> first, split into <a href=\"https:\/\/github.com\/github\/spec-kit?tab=readme-ov-file#5-break-down-into-tasks\" rel=\"nofollow noopener\" target=\"_blank\">tasks<\/a>, and lastly, <a href=\"https:\/\/github.com\/github\/spec-kit?tab=readme-ov-file#6-execute-implementation\" rel=\"nofollow noopener\" target=\"_blank\">implement<\/a>.<\/p>\n<p>Take this with you<\/p>\n<p>Working with AI agents shouldn\u2019t have to be unpredictable. With the right planning and tools, these agents can quickly become a reliable part of your workflow and processes\u2014boosting not only your own productivity, but your team\u2019s too.\u00a0<\/p>\n<p>Ready for the next phase of multi-agent coordination delegation? <a href=\"https:\/\/github.blog\/ai-and-ml\/how-to-build-reliable-ai-workflows-with-agentic-primitives-and-context-engineering\/?utm_source=blog-release-oct-2025&amp;utm_campaign=agentic-copilot-cli-launch-2025\" rel=\"nofollow noopener\" target=\"_blank\">Try GitHub Copilot CLI to get started &gt;\u00a0<\/a><\/p>\n<p>\t\tWritten by\t<\/p>\n<p>\t\t\t\t\t<img class=\"d-block circle\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/51440732\" alt=\"Daniel Meppiel\" width=\"80\" height=\"80\" loading=\"lazy\" decoding=\"async\"\/><\/p>\n<p>Daniel Meppiel is a software global black belt at Microsoft where he focuses on helping developers achieve more through AI.<\/p>\n","protected":false},"excerpt":{"rendered":"Many developers begin their AI explorations with a prompt. 
Perhaps you started the same way: You opened GitHub&hellip;\n","protected":false},"author":2,"featured_media":213229,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-213228","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/213228","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=213228"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/213228\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/213229"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=213228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=213228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=213228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}