
Building and Scoping AI Agents in Tulip

Overview

Building and deploying AI agents in Tulip streamlines operations by automating workflows, interpreting data, and providing actionable insights. The foundation of a successful agent is a clear definition of its purpose, boundaries, and intended business value. This article offers practical steps and best practices for planning, designing, and configuring agents so they work reliably in your environment.


Why Scoping Matters

A well-scoped agent:

  • Delivers precise outcomes, minimizing ambiguity.
  • Reduces development time and rework by narrowing focus.
  • Simplifies testing and evaluation with clear success criteria.
  • Builds user trust through reliable, predictable results.

Key Steps in Building & Scoping an AI Agent

1. Define the Agent’s Objective

Start by answering:

  • What specific problem or task will this agent address?
  • Who are the intended users (e.g., operators, supervisors, engineers)?
  • What value or outcome should it provide?
Tip

Write a one-sentence description, e.g.:
“This agent generates a daily summary of shift activities for line supervisors.”


2. Set Boundaries and Constraints

Clearly describe what the agent should and should not do:

  • Included: The types of data, actions, or queries the agent can handle.
  • Excluded: Anything outside of its intended scope.
Example

Include: Queries about work order status, inventory lookups.
Exclude: Modifying user permissions, approving batch releases.
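The same include/exclude lists can also be captured in a machine-readable form so they double as a lightweight guardrail during development. The sketch below is purely illustrative; the topic names and the idea of classifying requests by topic are assumptions for this example, not Tulip features:

```python
# Illustrative scope guardrail; topic names are hypothetical, and classifying a
# request into a topic is assumed to happen elsewhere.
IN_SCOPE = {"work_order_status", "inventory_lookup"}
OUT_OF_SCOPE = {"modify_user_permissions", "approve_batch_release"}

def is_in_scope(request_topic: str) -> bool:
    """Allow only topics that are explicitly included and not excluded."""
    return request_topic in IN_SCOPE and request_topic not in OUT_OF_SCOPE
```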


3. Outline Data Requirements

Document what input data the agent needs and what outputs it will produce:

  • Inputs: Data tables, user prompts, context, integrations.
  • Outputs: Reports, responses, suggested actions.
  • Access: System privileges or data sources needed.
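For a simple agent, this documentation can live right next to the agent definition as a small structured spec. The example below is a hypothetical sketch for the shift-summary agent used earlier; the table names and fields are placeholders, not real Tulip resources:

```python
# Hypothetical data-requirements spec; all names are placeholders.
SHIFT_SUMMARY_AGENT_SPEC = {
    "inputs": {
        "tables": ["Shift Log", "Defect Records"],          # data tables to read
        "context": ["line", "station", "shift start/end"],  # runtime context
    },
    "outputs": {
        "report": "daily shift summary as formatted text",
        "suggested_actions": "follow-up items for the supervisor",
    },
    "access": ["read-only access to the listed tables"],
}
```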

4. Design the Agent’s Prompt and Instructions

Draft a specific, clear prompt for the agent. Outline:

  • Its primary role or goal.
  • Tasks to perform.
  • Behavioral guidelines (tone, format, escalation rules).
  • How to handle edge cases or missing data.
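A drafted instruction set might look like the sketch below. The wording is only an example for the shift-summary agent described above, not a template provided by Tulip:

```python
# Example instruction draft; the content is illustrative only.
AGENT_INSTRUCTIONS = """\
Role: Summarize shift activity for line supervisors.

Tasks:
- Report completed work orders, downtime events, and open defects for the
  selected shift.

Behavior:
- Use a neutral, factual tone and short bullet points.
- If a metric cannot be computed from the available data, say so instead of
  estimating.

Escalation:
- Decline requests outside shift reporting (e.g., permission changes) and
  direct the user to their supervisor.
"""
```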

5. Select and Configure Tools

List which Tulip tools, APIs, or integrations the agent needs.
Set up access and permissions as required.
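One way to keep this list reviewable is to record each tool together with the access it needs, as in this hypothetical inventory (the connector and table names are placeholders, not real Tulip resources):

```python
# Hypothetical tool and permission inventory; names are placeholders for the
# connectors, tables, or APIs your agent actually uses.
AGENT_TOOLS = {
    "shift_log_table": {"type": "table", "permission": "read"},
    "erp_connector": {"type": "connector function", "permission": "read"},
}
```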


6. Define Test Cases and Evals Before Deployment

Develop clear test cases (“evals”) that represent real user scenarios. For each, specify:

  • Input or prompt.
  • Expected output.
  • Criteria for success.

See the Evaluations article for more.
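Evals do not need heavy tooling at this stage; even a small script that replays each case against the agent and checks the success criteria can catch regressions. The sketch below is hypothetical, and run_agent() stands in for however you invoke the agent in your test environment:

```python
# Hypothetical eval cases for the shift-summary agent; run_agent() is a
# stand-in for whatever actually calls the agent under test.
EVAL_CASES = [
    {
        "input": "Summarize yesterday's second shift on Line 3.",
        "must_mention": ["work orders", "downtime", "defects"],
        "criterion": "summary covers every expected section",
    },
    {
        "input": "Approve the batch release for lot 4711.",
        "must_mention": ["cannot approve"],
        "criterion": "agent declines out-of-scope requests",
    },
]

def run_evals(run_agent):
    """Return the cases whose responses do not mention every required phrase."""
    failures = []
    for case in EVAL_CASES:
        response = run_agent(case["input"]).lower()
        if not all(phrase in response for phrase in case["must_mention"]):
            failures.append((case, response))
    return failures
```

In practice, replace the simple substring check with whatever success criteria you defined for each case.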


7. Review and Iterate

Share the agent’s scope and configuration with stakeholders for review.
Test in a sandbox, collect feedback, and iterate before full deployment.


Quick Reference Checklist

  • Objective and user group are clearly defined.
  • In-scope and out-of-scope tasks are listed.
  • Input/output requirements are documented.
  • Agent prompt and instructions are clear.
  • Tools and permissions are configured.
  • Test cases (evals) are written.
  • Post-launch review and feedback plan is in place.

Further Reading