AI Agents are a powerful way to unlock value from your solution — whether by generating insights, summarizing work, or automating routine analysis. But the quality of their output depends heavily on the quality of the prompt they’re given.
A strong prompt acts like a well-written instruction: it clearly communicates what the AI should focus on, where to look for information, and how to present it back. This guide outlines best practices for building strong, reliable prompts that consistently produce useful results.
1. Provide Clear Context
The first and most important step in writing a good prompt is to establish context.
When an AI Agent runs, it does not automatically “know” what the user is trying to accomplish — it only knows what the prompt tells it. A vague prompt like “Summarize this” may lead to short, incomplete, or even irrelevant responses. A clear prompt, on the other hand, ensures the AI understands:
- What is being analyzed (e.g., an Initiative, a Workstream, or a Program)
- Why the analysis matters (e.g., to inform leadership, track progress, or surface risks)
- How the output will be used (e.g., as part of a report, dashboard, or automated workflow)
The more specific the context, the more targeted and relevant the output will be.
Example of a strong contextual prompt:
“Summarize this Initiative’s progress and key risks to support the weekly executive review. Focus on delivery status, upcoming milestones, and any schedule concerns.”
Example of a weak prompt:
“Summarize this.”
Tip: Don’t assume the AI “understands” your organizational structure. A few words of framing go a long way toward more accurate results.
2. Leverage Injected Expressions
One of the most powerful features of prompting in Shibumi is the ability to inject attribute values directly into the prompt. Injected expressions allow you to provide the AI with specific, structured information from the work item being evaluated. This makes the prompt both dynamic and grounded in the most up-to-date data.
For example, instead of writing a static prompt, you can include injected expressions like {{Status__c}}, {{Planned_Due_Date__c}}, or {{description}}. When the Agent runs, those placeholders are automatically replaced with the real values from the work item. This ensures that:
- The AI is not making assumptions about key details.
- Summaries reflect live data.
- Prompts can be reused consistently across many items.
Example with injected expressions:
“Summarize this Initiative’s current state. The current RYG status is {{Status__c}}, and the planned due date is {{Planned_Due_Date__c}}. Focus on how these values reflect progress toward goals.”
This approach makes your prompt smarter and more maintainable. Instead of rewriting prompts each time data changes, the injected expressions keep everything current.
Tip: Use injected expressions to anchor the AI’s response around your most important attributes (e.g., schedule, scope, risk, or owner).
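To make the mechanics concrete, here is a minimal sketch of how placeholder substitution behaves conceptually. It is not Shibumi's implementation (Shibumi resolves injected expressions automatically when the Agent runs), and the attribute values shown are illustrative.

```python
import re

# Minimal conceptual sketch of injected expressions. Shibumi resolves these
# automatically at run time; the work item values below are illustrative.
prompt_template = (
    "Summarize this Initiative's current state. "
    "The current RYG status is {{Status__c}}, and the planned due date is "
    "{{Planned_Due_Date__c}}. Focus on how these values reflect progress toward goals."
)

work_item = {
    "Status__c": "Yellow",
    "Planned_Due_Date__c": "2024-09-30",
}

def inject(template: str, values: dict) -> str:
    """Replace each {{Attribute}} placeholder with the work item's current value."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(values.get(m.group(1), "(not set)")),
        template,
    )

print(inject(prompt_template, work_item))
```

Because the template itself never changes, the same prompt stays accurate for every work item it runs against.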
3. Define a Clear Output
Even with good context and injected data, the AI still needs direction on how to respond. If the output format isn’t defined, the AI may produce inconsistent or overly verbose answers. By setting expectations up front, you help standardize outputs across different runs and work items.
When defining output, think about:
- Format: Should the response be a paragraph, a few bullet points, or a structured list?
- Tone: Should it sound formal, concise, executive-ready, or conversational?
- Focus: What should the AI highlight or ignore?
Example of a prompt with a clearly defined output:
“Summarize the Initiative’s current progress in 2–3 bullet points suitable for an executive dashboard. Include milestone progress, risks, and upcoming deadlines. Keep the tone concise and professional.”
This creates a more predictable and repeatable output, which is especially valuable when AI is used to automate reporting across many Initiatives or Programs.
Tip: Establish a small set of output formats (e.g., short bullet summary, narrative paragraph, or structured report) and use them consistently.
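As an illustration of that tip, the sketch below keeps a small library of output-format instructions and appends one to the contextual part of the prompt. The format names and wording are examples, not a built-in Shibumi feature.

```python
# Illustrative sketch: maintain a small set of reusable output-format
# instructions and append one to the contextual part of the prompt.
# The names and wording below are examples, not a built-in Shibumi feature.
OUTPUT_FORMATS = {
    "bullet_summary": (
        "Respond with 2-3 concise bullet points suitable for an executive "
        "dashboard. Keep the tone professional."
    ),
    "narrative_paragraph": (
        "Respond with one short paragraph in a formal, business-friendly tone."
    ),
    "structured_report": (
        "Respond with three labeled sections: Progress, Risks, Next Steps."
    ),
}

def build_prompt(context: str, output_format: str) -> str:
    """Combine the contextual instruction with a standard output definition."""
    return f"{context}\n\n{OUTPUT_FORMATS[output_format]}"

prompt = build_prompt(
    "Summarize the Initiative's current progress, including milestone "
    "progress, risks, and upcoming deadlines.",
    "bullet_summary",
)
print(prompt)
```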
4. Incorporate Descendant Work Item Data
In many cases, the most valuable insights don’t come from a single work item — they come from the roll-up of information from multiple descendant items. For example, an executive might want a summary at the Program level that reflects the overall status of its Workstreams or Initiatives.
By incorporating descendant data into your prompt, you allow the AI to:
- Identify trends or risks across child items.
- Aggregate progress for a holistic view.
- Highlight outliers or areas of concern that may be buried in lower levels of the hierarchy.
Example prompt using descendant data:
“Provide a summary of overall Program health by analyzing the Status and Forecast Dates of all child Workstreams. Highlight any Workstreams that are delayed or at risk, and summarize the primary drivers.”
This approach helps transform scattered data into clear, actionable insight — without requiring manual rollups.
Tip: Use descendant data thoughtfully. You don’t need to include every attribute — just the ones that matter for the summary you want to generate.
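For illustration, the sketch below flattens a few descendant Workstreams into a compact roll-up that can be included in the prompt. The field names and records are assumed for the example and are not taken from Shibumi's data model.

```python
# Illustrative sketch: flatten descendant Workstream data into a compact
# roll-up the AI can reason over. The field names and records are assumed
# for the example, not pulled from Shibumi's data model.
workstreams = [
    {"name": "Procurement", "status": "Green", "forecast_date": "2024-08-15"},
    {"name": "IT Migration", "status": "Red", "forecast_date": "2024-11-01"},
    {"name": "Training", "status": "Yellow", "forecast_date": "2024-09-10"},
]

rollup = "\n".join(
    f"- {w['name']}: status {w['status']}, forecast date {w['forecast_date']}"
    for w in workstreams
)

prompt = (
    "Provide a summary of overall Program health based on the child "
    "Workstreams below. Highlight any Workstreams that are delayed or at "
    "risk, and summarize the primary drivers.\n\n"
    f"{rollup}"
)
print(prompt)
```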
5. Iterate and Refine
Strong prompting is often built through iteration. Even well-structured prompts may need small adjustments to improve clarity, tone, or accuracy. Testing and refining prompts helps ensure they:
- Produce consistent results across different work items.
- Reflect the right level of detail for the audience.
- Handle edge cases (e.g., missing or unexpected data).
Here are a few best practices for iteration:
- Start simple: Begin with a clear, basic prompt and test it on a few records.
- Adjust for tone and structure: Make small edits and compare outputs.
- Document good prompts: Once a prompt works well, it can serve as a reusable template.
Example of refinement:
Initial prompt → “Summarize the Initiative.”
Refined prompt → “Summarize the Initiative’s key highlights in 2–3 bullet points. Focus on milestone progress, risk indicators, and overall health. Keep it concise and business-friendly.”
Tip: Think of prompting like configuration — not copywriting. A prompt that works consistently is more valuable than one that sounds perfect but produces variable results.
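If it helps to formalize the comparison, the sketch below runs two prompt variants over a handful of sample records so the outputs can be reviewed side by side. The run_agent function is a hypothetical stand-in for however you invoke your Agent, not a real API.

```python
# Illustrative iteration harness: compare two prompt variants across a few
# sample records. run_agent is a hypothetical stand-in, not a real API;
# replace it with however you actually invoke your AI Agent.
def run_agent(prompt: str) -> str:
    """Hypothetical placeholder for the real Agent invocation."""
    return "(agent output would appear here)"

PROMPT_INITIAL = "Summarize the Initiative."
PROMPT_REFINED = (
    "Summarize the Initiative's key highlights in 2-3 bullet points. "
    "Focus on milestone progress, risk indicators, and overall health. "
    "Keep it concise and business-friendly."
)

sample_records = ["Initiative A", "Initiative B", "Initiative C"]

for record in sample_records:
    for label, prompt in (("initial", PROMPT_INITIAL), ("refined", PROMPT_REFINED)):
        output = run_agent(f"[{record}] {prompt}")
        print(f"{record} / {label}: {output}")
```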
Bringing It All Together
A strong prompt in Shibumi blends multiple elements:
- Context sets the stage for the AI’s understanding.
- Injected Expressions ground the response in real, live data.
- Defined Output provides structure and predictability.
- Descendant Data enables meaningful roll-up summaries.
- Iteration ensures your prompt evolves into a reliable, reusable asset.
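Putting the pieces together, a single prompt might look like the sketch below, where each part maps to one of the elements above. The injected attribute names mirror the earlier examples and are illustrative.

```python
# Illustrative composite prompt. The injected attribute names mirror the
# earlier examples and are assumptions, not a fixed Shibumi schema.
combined_prompt = (
    # Context: what is being analyzed and why
    "Summarize this Program's progress and key risks to support the weekly "
    "executive review. "
    # Injected expressions: ground the response in live data
    "The current RYG status is {{Status__c}} and the planned due date is "
    "{{Planned_Due_Date__c}}. "
    # Descendant data: roll up child items
    "Consider the Status and Forecast Dates of all child Workstreams and "
    "highlight any that are delayed or at risk. "
    # Defined output: format, tone, and focus
    "Respond with 2-3 concise, professional bullet points suitable for an "
    "executive dashboard."
)
print(combined_prompt)
```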
By applying these principles, your AI Agents will generate clearer, more accurate, and more actionable outputs — ultimately making your solution smarter and more impactful.