AI Prompt Action

The AI Prompt Action allows you to leverage large language models (LLMs) directly within your Tulip automations. This action provides a flexible way to process text, analyze data, and generate intelligent responses based on custom prompts you define.

How It Works

The AI Prompt Action is straightforward: it takes your input message and sends it directly to a large language model, then returns the AI-generated response. You can use this action to perform various text processing tasks, from simple categorization to complex analysis and summarization.

Configuration

Input Message

The input message is the prompt that will be sent directly to the large language model. It should contain the following (a short example appears after the list):

  • Clear instructions for what you want the AI to do
  • Any data or context the AI needs to process
  • Specific formatting requirements for the output
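
For example, an input message that combines all three elements might look like the sketch below. The variable name @variable.work_order is a placeholder for one of your own object variables:

"You are helping document a production line.
Summarize the following work order in one sentence of plain text, with no headings or bullet points.

Work order:" + TOTEXT(@variable.work_order)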

Working with Objects

When sending objects or object lists to the AI Prompt Action, you must wrap them in the TOTEXT() expression so the AI can read the content properly.

Example:

"Analyze this defect data:" + TOTEXT(@variable.my_object)

Writing Effective Prompts

Be Clear and Specific

Write prompts that clearly explain what you want the AI to do. Avoid ambiguous language and provide specific instructions.

Good: "Categorize this defect as 'Critical', 'Major', or 'Minor' based on the severity description."
Poor: "What kind of defect is this?"

Provide Context

Include relevant background information and explain the purpose of the task.

Example:

"You are analyzing manufacturing defects for quality control. 
Based on the following defect description, categorize it as Critical, Major, or Minor:
- Critical: Safety issues or complete product failure
- Major: Functional problems that affect performance
- Minor: Cosmetic or minor functional issues

Respond with only the category name.

Defect description:" + TOTEXT(@variable.defect_data)

Use Examples

When possible, include examples of the desired input and output format to guide the AI's responses.

Example:

"Summarize the following production data in 2-3 sentences, focusing on key metrics and trends.

Example format: "Production completed X units with Y% efficiency. The main bottleneck was Z, resulting in A minutes of downtime."

Production data:" + TOTEXT(@variable.production_summary)

Common Use Cases

Defect Categorization

Automatically categorize defects based on descriptions, images, or sensor data.

"Categorize this defect based on the description and assign a priority level (High/Medium/Low):

Defect details:" + TOTEXT(@variable.defect_record) +

"Respond in format: Category: [category], Priority: [priority] "

Data Summarization

Generate concise summaries of production reports, quality metrics, or operational data.

"Create a brief executive summary of today's production metrics: "

 + TOTEXT(@variable.daily_production_data)+

"Focus on: output volumes, efficiency rates, quality scores, and any notable issues.
Limit to 100 words. "

Text Analysis and Extraction

Extract specific information from unstructured text like maintenance logs or operator notes.

"Extract the root cause and recommended actions from this maintenance log:

Log entry:" + TOTEXT(@variable.maintenance_log) +

"Format response as:
Root Cause: [cause]
Recommended Actions: [actions]"

Content Generation

Generate standardized reports, notifications, or documentation based on data inputs.

"Generate a shift handover report based on this production data:"

+TOTEXT(@variable.shift_data) +

"Include: completed tasks, ongoing issues, next shift priorities.
Use professional tone suitable for management review."

Testing and Validation

Log Input Data

Always test your AI Prompt Action by logging the input message to a table first. This helps you verify that:

  • The prompt is properly formatted
  • Variables are resolving correctly
  • Objects are being converted to text appropriately
  • The AI is receiving complete information

Testing approach:

  1. Create a test table with columns for "Input Message" and "AI Response"
  2. Log the exact input being sent to the AI Prompt Action
  3. Review several examples to ensure consistency
  4. Adjust your prompt based on the logged results

Validate Output Quality

Monitor the AI responses to ensure they meet your quality standards:

  • Check that responses follow the specified format
  • Verify accuracy against known examples
  • Test edge cases and unusual inputs
  • Monitor for consistent performance over time

Handle Variability

AI responses can vary between runs. Consider:

  • Using more specific prompts to reduce variability (see the sketch after this list)
  • Implementing validation logic for critical outputs
  • Having fallback procedures for unexpected responses
  • Regular review of AI performance in production
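
As a sketch of a more constrained prompt (the object variable @variable.maintenance_note is a placeholder), you can limit the response to a fixed set of values and name an explicit fallback:

"Classify the following maintenance note.
Respond with exactly one word: 'Mechanical', 'Electrical', or 'Other'.
If the note does not clearly fit a category, respond with 'Other'.

Maintenance note:" + TOTEXT(@variable.maintenance_note)

Constraining the output this way also makes validation easier, because any response outside the allowed set can be treated as a failure case and routed to your fallback procedure.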

Best Practices

Prompt Engineering

  • Start simple and iterate based on results
  • Test with representative data samples
  • Use consistent terminology and formatting
  • Break complex tasks into smaller, focused prompts (see the example after this list)
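
For example, instead of asking a single prompt to both extract and summarize, you could chain two AI Prompt Actions. The variable names below (@variable.inspection_report and @variable.defect_types) are placeholders, and the sketch assumes the first action's response is stored in a variable that the second prompt can reference:

First prompt (extraction):

"List each distinct defect type mentioned in this inspection report, one per line:

Inspection report:" + TOTEXT(@variable.inspection_report)

Second prompt (summary):

"Write a two-sentence quality summary based on this list of defect types:

Defect types:" + @variable.defect_types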

Data Preparation

  • Clean and format input data before sending to AI
  • Use TOTEXT() for all object and list inputs
  • Remove unnecessary information that might confuse the AI
  • Ensure data privacy and security requirements are met

Error Handling

  • Plan for cases where the AI doesn't respond as expected
  • Implement validation checks on AI outputs
  • Have backup procedures for critical processes
  • Monitor and log AI action performance

Performance Optimization

  • Keep prompts concise while maintaining clarity
  • Avoid sending large amounts of unnecessary data
  • Consider batching similar requests when possible
  • Monitor response times and adjust as needed

Troubleshooting

Common Issues

  • Empty or incomplete responses: Check that your prompt is clear and that all required data is included
  • Unexpected format: Be more specific about the desired output format in your prompt
  • Inconsistent results: Add more constraints and examples to your prompt
  • Object data not visible: Ensure you're using TOTEXT() to convert objects to readable text

Debugging Steps

  1. Log the exact input message being sent to the AI
  2. Test the prompt manually with sample data
  3. Gradually simplify the prompt to isolate issues
  4. Check that all variables are resolving properly
  5. Verify object data is being converted correctly with TOTEXT()

The AI Prompt Action opens up powerful possibilities for intelligent automation in your Tulip apps. Start with simple use cases and gradually build more sophisticated prompts as you become familiar with the capabilities and best practices.