Tulip Cloud deployment networking requirements

Overview

Tulip is designed to be a versatile tool, but to ensure real-time performance, data integrity, and application stability, a high-quality network connection is required. This article outlines the critical risks of network instability and the quantifiable requirements to mitigate them.

The stability of your connection is the most critical factor. An unstable connection (e.g., intermittent drops, high packet loss, or high jitter) can lead to data loss and significant performance degradation, even with high bandwidth.

Also see: Network Troubleshooting Guide

Who is this article for?

This article is most useful for advanced Tulip users who are deploying in large-scale environments. If you are newly deploying Tulip in your organization or are currently in a trial, this information is likely more advanced than your current needs.

Use the links below to navigate to relevant sections for your role.

Continuous evaluation

As deployments scale, app developers should work through the Quantifiable network requirements together with their IT and network teams.

Executive summary

This document outlines the critical network requirements for ensuring the performance, stability, and data integrity of a Tulip Cloud deployment.

Key takeaways

Stability is more important than speed: A stable, wired 15 Mbps connection is vastly superior to an unstable 100+ Mbps connection with high jitter or packet loss.

Network instability causes data loss: Intermittent connections, high latency, or packet loss don't just cause "slowness." They cause the trigger queue to clog and drop events, leading to permanent data loss and critical data integrity (ALCOA) failures.

Mitigation requires a two-part solution: Robust IT Infrastructure (meeting the requirements below) and Resilient App Architecture (e.g., decoupling logic with Automations) are both required.

| Metric | Recommendation | Absolute Maximum (Failure Point) |
| --- | --- | --- |
| Bandwidth | 15 Mbps | 10 Mbps (or calculated minimum) |
| Latency | < 80 ms | 100 ms |
| Jitter | < 10 ms | 30 ms |
| Packet Loss | 0% | < 0.1% |

Critical infrastructure policies

Default to wired Ethernet: All stationary Tulip Players must use wired, shielded Ethernet.

Configure and monitor proxies and firewalls: All Tulip domains listed in the Allowlist must be configured for SSL/TLS pass-through (see Allowlist). Deep Packet Inspection (DPI) should be minimized, and firewall and proxy performance should be proactively monitored.

Per-station network readiness checklist

For each Tulip Player station, verify:

  • Connection type: Stationary Players use wired, shielded Ethernet (STP, Cat6a, or better).
  • Bandwidth: At least 15 Mbps available downstream and upstream; never below 10 Mbps per active station or the calculated minimum (See Quantifiable Network Requirements)
  • Latency: Average round-trip latency (RTT) to your-instance.tulip.co is < 80 ms; max RTT is < 100 ms
  • Jitter: Measured jitter (maximum RTT minus average RTT) is < 10 ms (and always < 30 ms)
  • Packet loss: Measured packet loss is 0% during normal operation, and < 0.1% in worst-case tests.
  • Uptime: The path from the Player to Tulip Cloud does not rely on flaky links (ad-hoc WiFi, guest networks, VPNs with frequent drops, etc.)
  • Proxies / firewalls: All Tulip domains in the Allowlist are configured for SSL/TLS pass-through with no SSL interception, protocol rewriting, tampering, or caching, and preferably no DPI.
  • WebSockets: Proxies and firewalls allow long-lived WebSocket connections over port 443 without idle-timeout or connection-reset issues.
  • Process liveness: Devices (especially mobile) are configured to avoid aggressive background-process hibernation, battery optimization, or OS policies that suspend the Tulip Player or throttle network requests, TCP connections, or WebSocket connections while in use.
  • Resilient apps: Critical apps on this station follow the resilient app architecture patterns (minimize event rate and API calls in critical trigger paths during operator workflows, prioritize Tulip Tables, and minimize long-running Connectors).

Instability risks and data integrity

This section explains why a stable connection is critical and which technical risks impact performance and data integrity.

Beyond simple message delay, intermittent connectivity or packet loss can lead to two primary issues:

  1. Event dropping and data loss
    When the network is unstable or offline, operator actions (like button presses, device inputs, or barcode scans) still generate events. These events back up in a queue. Because this queue has a finite limit (100 events by default), if it is exceeded while waiting for the network to recover, all new incoming events are dropped and permanently lost.
  2. System overload and "catch-up" slowness
    When connectivity is restored after an outage, the entire backlog of queued events (up to 100) will attempt to execute in quick succession. This can cause a sudden, massive load on the Tulip platform and any connected third-party systems (via Connector Functions), especially when multiple stations reconnect at once after a shared network outage. This "catch-up" flood can cause substantial slowness and overload, and it can also lead to API throttling (HTTP 429 errors) and further failures.

How network issues cause data loss in the Trigger Queue

To understand this risk, it's essential to know how triggers execute. See Triggers for details.

  • Single-threaded queue: Tulip's trigger runtime operates on a sequential, single-threaded queue. Each trigger event (e.g., "button pressed") is added to this queue and must complete all its actions before the next event in the queue can even start.
  • Network "clog": A network outage or severe instability causes actions within a trigger (like "Create Table Record" or "Run Connector Function") to hang or enter a lengthy retry state. For example, as of LTS15, a failing table action will retry for up to 2 minutes.
  • The Bottleneck: This single, slow trigger blocks the entire queue. While it's blocked, subsequent operator actions (e.g., 99 more button presses) fill the queue.
  • Dropped events: Once the queue limit is reached (default 100 events), all further events are canceled and dropped. This means if an operator scans a 101st item during the network outage, that event is lost forever and will not be processed when the network returns.

Even outside of a full queue, general network instability can cause individual Connector Functions (API calls) to fail. These actions do not retry (unlike table actions in LTS15) and can lead to silent data failures and integrity issues.
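
To make the queue-overflow failure mode concrete, here is a minimal, hypothetical simulation of a single-threaded queue with a 100-event limit. It only illustrates the behavior described above (events accumulate during an outage, then overflow); it is not Tulip's actual runtime.

```python
from collections import deque

QUEUE_LIMIT = 100   # default trigger queue size described above

def simulate_outage(event_rate_per_sec: float, outage_seconds: float) -> dict:
    """Illustrative model: during an outage the queue only fills and never drains."""
    queue = deque()
    dropped = 0
    for _ in range(int(event_rate_per_sec * outage_seconds)):
        if len(queue) < QUEUE_LIMIT:
            queue.append("event")   # queued; will execute in a "catch-up" flood later
        else:
            dropped += 1            # queue full: this event is permanently lost
    return {"queued": len(queue), "dropped": dropped}

# Hypothetical app: 2 events/sec (rapid scanning) during a 2-minute outage
print(simulate_outage(event_rate_per_sec=2, outage_seconds=120))
# -> {'queued': 100, 'dropped': 140}
```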

Risk factors for data loss

The risk of data loss during poor network connectivity is not uniform. It is substantially increased by app architectures that are "event-heavy". Learn about best practices for these apps here.

You are at a much higher risk if your apps have:

  • A high event rate: Apps that generate many events in a short time. This includes rapid barcode scanning, frequent device inputs, or timer triggers firing at short intervals (e.g., every 30 seconds).
  • Complex Triggers: Triggers that execute many sequential actions, especially multiple API calls (Connector Functions) or table writes. Each network-dependent action adds to the potential "clog" time during an outage.
  • Reliance on Connector Functions: Using Connector Functions for critical data capture. A failed Connector Function due to a network blip will not retry and will fail the trigger, which can halt the process or lead to data loss. Use automations (and/or functions) where possible.

ALCOA compliance for life sciences

For customers in Life Sciences and other GxP-regulated environments, these network risks are not just performance issues; they are direct threats to data integrity and compliance.

The technical risks described above map directly to ALCOA principles:

  • Inaccurate and incomplete: When the trigger queue limit is reached during an outage, all subsequent events are permanently dropped. If an operator scans a 101st item, that event is lost forever. The resulting batch record is fundamentally Inaccurate and Incomplete.
  • Not contemporaneous: When connectivity is restored, the "catch-up" flood executes a backlog of queued events, often minutes after the operator performed the action. An auditor reviewing a cluster of timestamps that are clearly delayed from the real-world process would challenge the contemporaneous integrity of the record.
  • Inaccurate (across systems): Connector Functions (API calls) do not have built-in retries. A network blip can cause a trigger to successfully write to a Tulip Table (the original record) but fail to update an external MES or QMS. This results in an inaccurate and out-of-sync data state across your validated systems.

| Technical Issue | ALCOA Impact | Example |
| --- | --- | --- |
| Queue overflow after outage | Incomplete / inaccurate | 101st scan dropped => batch record missing one unit |
| Catch-up execution with delayed timestamps | Not contemporaneous | All scans timestamped minutes after the actual scans |
| ERP update fails, Tulip succeeds | Cross-system inaccurate | Tulip shows "released", ERP shows "in progress" |

Following the architectural recommendations in the next section is essential for building a robust, validatable process that defends against data integrity failures.

Quantifiable network requirements

To prevent the trigger queue blockages, dropped events, and data integrity failures discussed previously, each Tulip Player station must maintain a high-quality, persistent connection. The following four dimensions define this requirement.

Bandwidth (throughput)

This is the "size" of the connection, which prevents saturation. While application data payloads are often small, insufficient bandwidth creates latency and packet loss, which in turn blocks the trigger queue.

  • Recommendation: 15 Mbps per active station.
  • Absolute minimum: 10 Mbps per active station, or the calculated Required Bandwidth, whichever is higher.
  • Quantifiable bound: The minimum bandwidth must exceed the application's Base Data Rate (BDR), multiplied by a Headroom Factor (H) of at least 2.0 (100%) to account for network overhead and re-transmissions.
    • R = Peak Event Rate (events per second)
    • P = Average Data Payload per Event (bytes, including all data sent and received by the trigger)
    • H = headroom factor (dimensionless), where H ≥ 2.0 is recommended

First compute the Base Data Rate (BDR) in bits per second:

BDR = R × P × 8

Then compute the Required Bandwidth:

Required Bandwidth = BDR × H

In practice, if this calculated value is less than 10 Mbps, you should still provision at least 10 Mbps per active station (and preferably 15 Mbps) to future-proof the deployment, provide ample margin, and account for other traffic on the same link.

Example

If your app's worst-case peak event rate is only a few events per second and each event sends and receives a modest payload (tens of kilobytes), the calculated Required Bandwidth will typically come out well below the 10 Mbps minimum.

Even though the calculated requirement is low, we still provision at least 10-15 Mbps per station.
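
The same calculation is easy to script as a sanity check. Below is a minimal sketch of the formula above (BDR = R × P × 8, Required Bandwidth = BDR × H); the function name and the example inputs (2 events/sec, roughly 50 KB per event) are hypothetical.

```python
def required_bandwidth_mbps(peak_events_per_sec: float,
                            payload_bytes_per_event: float,
                            headroom: float = 2.0) -> float:
    """Required Bandwidth = Base Data Rate (bits/sec) x Headroom Factor, in Mbps."""
    base_data_rate_bps = peak_events_per_sec * payload_bytes_per_event * 8
    return base_data_rate_bps * headroom / 1_000_000

# Hypothetical app: 2 events/sec, ~50 KB sent + received per event, 100% headroom
calculated = required_bandwidth_mbps(2, 50_000)     # 1.6 Mbps
provisioned = max(calculated, 10.0)                 # never provision below 10 Mbps
print(f"calculated: {calculated:.1f} Mbps, provision at least: {provisioned:.0f} Mbps")
```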

Latency (round-trip time)

This is the delay for a packet to travel to the server and back. High latency directly increases the execution time of every network-dependent trigger action (like a table write or connector function), making the entire queue vulnerable to blockage.

  • Recommendation: < 80ms
  • Absolute maximum: 100ms (to your [instance].tulip.co domain).
  • Quantifiable bound: Latency directly determines how long network-dependent actions take inside a trigger. To prevent the trigger queue from backing up, the Total Trigger Execution Time contributed by network actions must be significantly less than the time between events.

Let:

  • R = peak event rate (events per second)
  • T = time between events (seconds)
  • N = number of sequential network actions in the trigger (e.g., table writes, connector calls)
  • L = average round-trip latency (seconds)

The time between events is:

T = 1 / R

A simple approximation for the trigger time contributed by network latency is:

Network trigger time ≈ N × L

To maintain a healthy queue, we require that the network-driven trigger time is comfortably below the time between events. A practical rule of thumb is:

N × L < T / 4

In words: the sum of all sequential network round-trips in a trigger should be less than a quarter of the expected time between operator events at peak load. If this condition is not met, the queue will gradually back up and eventually reach the 100-event limit. For the most reliable performance, keep N × L well below this bound.
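
This rule of thumb can also be checked in a few lines. The sketch below uses the symbols defined above; the example values (one event every 2 seconds, 3 sequential network actions, 80 ms average RTT) are hypothetical.

```python
def trigger_fits_between_events(peak_events_per_sec: float,
                                network_actions_in_trigger: int,
                                avg_rtt_seconds: float) -> bool:
    """Rule of thumb: N x L should stay under a quarter of the time between events."""
    time_between_events = 1.0 / peak_events_per_sec                      # T = 1 / R
    network_trigger_time = network_actions_in_trigger * avg_rtt_seconds  # ~ N x L
    return network_trigger_time < time_between_events / 4

# Hypothetical: 0.5 events/sec (one every 2 s), 3 network actions, 80 ms RTT
print(trigger_fits_between_events(0.5, 3, 0.080))   # 0.24 s < 0.5 s -> True
```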

Jitter (latency variation)

This is the variation in latency. A high-jitter connection is unstable; a sudden latency spike on a single packet can hang a trigger just as effectively as high average latency, blocking the queue and leading to "random" stalls and dropped events.

  • Quantifiable Bound: Jitter is the variation in latency over time. A single latency spike can stall a trigger just as effectively as consistently high latency.

Let:

  • L_avg = average round-trip latency over a sampling window (seconds)
  • L_max = maximum observed round-trip latency over the same window (seconds)

We define:

Jitter = L_max - L_avg

This value must be kept low to ensure predictable trigger performance. For Tulip deployments:

  • Recommendation: < 10ms
  • Absolute maximum: < 30ms

If measured jitter approaches or exceeds 30ms, operators will see "random" stalls and triggers may remain in a running state long enough to block the queue.
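
Given a series of round-trip time samples (for example, from repeated pings to your instance), jitter as defined above is simply the gap between the maximum and the average. A minimal sketch with made-up samples:

```python
def jitter_ms(rtt_samples_ms: list[float]) -> float:
    """Jitter = L_max - L_avg over the sampling window."""
    avg = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return max(rtt_samples_ms) - avg

samples = [42.0, 45.0, 41.0, 88.0, 43.0]        # hypothetical RTTs in ms; one spike
print(f"jitter: {jitter_ms(samples):.1f} ms")   # 36.2 ms -> above the 30 ms maximum
```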

Reliability (packet loss and uptime)

This is the most critical dimension for data integrity. Packet loss forces re-transmissions, which can stall a trigger for seconds, causing severe queue clogs. Full disconnections (uptime failures) are catastrophic: they lead to a full queue (100 events) and the permanent loss of all subsequent data, guaranteeing incomplete records.

  • Recommendation: 0% Packet Loss and Persistent Uptime.
  • Absolute maximum: < 0.1% Packet Loss.
  • Quantifiable bound:
    The trigger queue can hold only a limited number of events. When the network is down, operator actions continue to generate events until the queue is full. Once the queue reaches its limit, all subsequent events are dropped and permanently lost.

Let:

  • Q = maximum queue size (events). Currently Q = 100 by default.
  • R = peak event rate during an outage (events per second).

The maximum tolerable outage duration before events start to be dropped is:

T_max = Q / R

Example: For an app with a high event rate of R = 2 events/sec (e.g., rapid torque values):

  • T_max = 100 / 2 = 50 seconds

In this example, any continuous network outage longer than 50 seconds will cause data loss: every event after the first 100 will be dropped and never processed, even after the network recovers.

This simple relationship is crucial for risk assessment: high-event-rate apps tolerate only very short outages before data integrity is compromised.
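
For risk assessment, the same relationship can be tabulated per app. A small sketch, assuming the default 100-event queue; the listed peak event rates are hypothetical.

```python
QUEUE_LIMIT = 100   # default maximum queue size (Q)

def max_tolerable_outage_seconds(peak_events_per_sec: float) -> float:
    """T_max = Q / R: the longest outage before events start being dropped."""
    return QUEUE_LIMIT / peak_events_per_sec

# Hypothetical peak event rates (events/sec) and the outage each can tolerate
for rate in (0.2, 1.0, 2.0, 5.0):
    print(f"{rate:>4} events/sec -> data loss after {max_tolerable_outage_seconds(rate):.0f} s")
```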

Physical infrastructure recommendations

Meeting the quantifiable network requirements is not just about your ISP; it depends heavily on your facility's internal physical network infrastructure.

Default recommendation: wired Ethernet

To achieve the stability, reliability, and low-latency metrics defined in the previous section, a wired Ethernet connection is the strong, default recommendation for all stationary Tulip Player stations.

However, this is not a "set it and forget it" solution. The physical layer is a common and often overlooked point of failure that can cause intermittent, hard-to-diagnose packet loss.

  • Physical cable integrity: IT and Operations teams must proactively manage this infrastructure. Regularly inspect cables for damage, frays, or loose connections. A common source of failure is a cart repeatedly rolling over an unshielded or poorly routed cable.
  • Electromagnetic interference (EMI): On a factory floor, high-vibration machinery, motors, or high-voltage equipment can cause significant interference. Use shielded Ethernet cables (e.g., STP, Cat6a, or higher) in these environments to protect against signal degradation.

Considerations for WiFi and mobile devices

We recognize that many customers use mobile devices (tablets, phones) or mobile carts where a wired connection is not feasible. In these scenarios, a stable WiFi connection is a prerequisite for data integrity.

  • Network design: You must ensure a robust, professionally designed wireless network. The goal is seamless coverage with no gaps (which cause disconnects) and minimal access point (AP) overlap. Excessive overlap can cause a device to "hunt" between APs, forcing a DHCP renegotiation and causing a network stall that blocks the trigger queue.
  • Interference: WiFi is also highly susceptible to EMI from machinery. Your network design must account for this.
  • Increased risk: Because a wireless connection is inherently less stable than a wired one, it is even more critical for applications on these devices to follow the Resilient App Architecture recommendations (like decoupling logic) to protect against the inevitable network blips.
  • OS power management: Mobile OSes may aggressively suspend or throttle background apps to save battery. Tulip Player should be exempted from such policies while in use (kiosk/locked-app mode is preferred).

Physical infrastructure checklist

Wired stations

  • Cable type: All stationary Tulip Players use shielded Ethernet (STP, Cat6a, or better) in high-EMI areas (motors, drives, welders, etc.).
  • Cable routing: No cables run where carts, forklifts, or chairs regularly roll over them; cables are routed through trays, conduits, or overhead where possible.
  • Cable integrity: There is a documented inspection schedule (e.g., quarterly) to check for frays, kinks, crushed jackets, and loose connections.
  • Terminations: All connectors and patch panels are properly terminated and strain-relieved; no "temporary" patch cables are used as permanent runs.
  • Switch ports: Player ports are locked to the appropriate speed/duplex and do not share over-subscribed uplinks that regularly saturate.
  • Power / PoE: If using PoE, switch power budget is not oversubscribed; brownouts or reset events are monitored and alarmed.
  • Monitoring: Basic switch monitoring (CPU, memory, interface errors, and utilization) is enabled and periodically reviewed.

WiFi and mobile devices

  • Professional design: The wireless network has been professionally designed (or at least surveyed) for the production area where Tulip devices operate.
  • Coverage: Measured signal strength along the full operator path (the "walk route") shows continuous coverage with no dead zones that force disconnects or roaming "free-falls."
  • AP overlap: Access points are placed to minimize excessive overlap; clients do not "hunt" between APs while stationary.
  • Roaming behavior: Roaming thresholds and band steering are tuned so that devices switch APs smoothly without repeated DHCP renegotiations or short drops.
  • Interference: Known sources of EMI (machinery, welders, large motors, microwave links, etc.) have been considered in channel planning.
  • Segmentation: Tulip devices are on a dedicated SSID/VLAN for production systems, not mixed with guest WiFi or high-churn consumer devices.
  • Backhaul: APs have reliable, non-congested uplinks; controller or cloud-managed WiFi is not operating near capacity.
  • Resilient apps: Apps running on WiFi/mobile devices follow Resilient app architecture more strictly to account for the inherently higher risk of wireless connectivity.

Test

Walk a route with a Player running networkCheck and your own diagnostics, and confirm there are no disconnects or latency spikes beyond the thresholds above for 10 minutes.
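
If you want a scripted companion to networkCheck for this walk test, a ping-based sweep like the sketch below can log average/maximum latency, jitter, and packet loss against the thresholds in this article. It shells out to the system ping command (Linux/macOS flags shown), the hostname is a placeholder, and ICMP round-trips are only a rough proxy for the HTTPS/WebSocket path Tulip actually uses, so treat it as supplementary to networkCheck.

```python
import re
import subprocess

HOST = "your-instance.tulip.co"   # placeholder: replace with your actual instance
SAMPLES = 600                     # roughly 10 minutes at one ping per second

def measure(host: str, count: int) -> dict:
    # -c <count> echoes, sent 1 second apart (-i 1); Linux/macOS ping syntax
    out = subprocess.run(["ping", "-c", str(count), "-i", "1", host],
                         capture_output=True, text=True).stdout
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    loss_pct = 100.0 * (count - len(rtts)) / count
    avg = sum(rtts) / len(rtts) if rtts else float("nan")
    return {"avg_ms": avg,
            "max_ms": max(rtts, default=float("nan")),
            "jitter_ms": (max(rtts) - avg) if rtts else float("nan"),
            "loss_pct": loss_pct}

result = measure(HOST, SAMPLES)
print(result)
print("PASS" if result["avg_ms"] < 80 and result["jitter_ms"] < 10
      and result["loss_pct"] == 0 else "REVIEW")
```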

Logical and security infrastructure (proxies, firewalls, and DPI)

Beyond physical cables, your logical network security infrastructure is the most common source of "mystery" latency, jitter, and connection failures.

  • Proxies and firewalls as a bottleneck: Security appliances (Firewalls, Proxies, CASB, etc.) that perform Deep Packet Inspection (DPI) or SSL/TLS Decryption on all traffic are a primary source of instability. This inspection process itself adds significant, variable latency (jitter) to every single network request, which can easily violate the <100ms latency and <30ms jitter requirements.

  • Monitor appliance health: Network teams must monitor the CPU and memory utilization of these appliances. An overloaded firewall or proxy will drop packets or introduce latency, creating the exact instability that leads to data loss.

  • Do not meddle with Tulip traffic: Tulip's client-to-cloud communication relies on specific web protocols, including secure WebSockets, for real-time data transfer. Some security solutions attempt to intercept, rewrite, or "meddle" with this HTTPS traffic. This interception can break the WebSocket connection or incorrectly re-flag standard API calls as cross-origin (CORS) requests, causing them to fail.

  • Requirement: All Tulip domains (listed in the Allowlist section) must be configured for SSL/TLS pass-through. The traffic should be exempted from any DPI, protocol inspection, or caching.

Checklist

  • Tulip domains bypassed: All Tulip domains listed in the Allowlist are configured for SSL/TLS pass-through and are excluded from SSL decryption / interception.
  • Prefer no DPI, no protocol rewriting: Deep Packet Inspection (DPI), HTTP protocol rewriting, and content caching are disabled for Tulip traffic, especially for WebSockets.
  • WebSocket support: Firewalls and proxies allow long-lived WebSocket connections over port 443 without short idle timeouts or forced resets.
  • Capacity monitoring: CPU, memory, and throughput on security appliances are monitored; alerts are in place for sustained high utilization that could introduce latency or packet loss.
  • Egress rules: Outbound rules explicitly allow Tulip IP ranges / hostnames (per garden identifier), including asset storage endpoints (S3/Blob).
  • Fail-open behavior: Where possible, Tulip traffic is configured to fail open (or with clear, logged errors), not silently drop packets.
  • Change control: Any change to firewall/proxy configuration that could affect Tulip is tracked and, ideally, tested against https://your-account.tulip.co/networkCheck during change windows.
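
To spot-check the WebSocket items above from a station, a long-lived wss connection can be opened through the proxy and held for several minutes to confirm there are no idle timeouts or resets. The sketch below uses the third-party websockets package; the endpoint URL is a placeholder (Tulip's own WebSocket path is not documented here), so point it at a wss endpoint you control that traverses the same proxy path.

```python
import asyncio
import websockets   # third-party package: pip install websockets

WSS_URL = "wss://your-test-endpoint.example/ws"   # placeholder endpoint behind your proxy
HOLD_SECONDS = 300                                # verify no idle-timeout for ~5 minutes

async def hold_open() -> None:
    async with websockets.connect(WSS_URL) as ws:
        for elapsed in range(30, HOLD_SECONDS + 1, 30):
            await asyncio.sleep(30)
            pong_waiter = await ws.ping()                  # send a ping frame
            await asyncio.wait_for(pong_waiter, timeout=10)
            print(f"{elapsed:>4}s: connection still alive")

asyncio.run(hold_open())
```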

Resilient app architecture

This section explains how developers can mitigate infrastructure risks.

Developers must design applications to be resilient to intermittent network issues. Ensuring headroom for network latency and minor platform degradation is critical. Use Automations (and/or functions) where possible to decouple external calls from operator triggers; a generic sketch of this decoupling pattern appears after the list below.

  • Prioritize Tulip table writes: For all critical data capture, use native Tulip Table "Create Record" or "Store" actions first in your trigger. These actions are more resilient to network blips and have built-in retry logic (up to 2 minutes in LTS15).
  • Minimize event rate: Review your app design. If an operator can reasonably generate 100 events (e.g., scans) during a 2-minute network drop, your architecture is at high risk. Consider batching inputs or simplifying the workflow.
  • Follow performance best practices: Adhere to the documented guidance for building high-performance apps here. Overwhelming the platform with high-frequency calls creates internal latency that mimics a poor network.

See also: Event-Based App Triggers via ‘Virtual’ Tulip Machine
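
The decoupling pattern above is independent of any particular tool. The sketch below illustrates it generically in Python: the in-memory list stands in for a Tulip Table used as a processing queue, and the worker function stands in for an Automation that makes the external call; none of this is Tulip API code.

```python
import uuid

records = []   # stand-in for a Tulip Table used as a processing queue

def capture_scan(barcode: str) -> None:
    """Operator trigger path: a fast, local write only -- no external API call."""
    records.append({"id": str(uuid.uuid4()), "barcode": barcode, "status": "pending"})

def sync_pending(send_to_erp) -> None:
    """Decoupled worker (stand-in for an Automation): failures and retries are safe here."""
    for rec in records:
        if rec["status"] != "pending":
            continue
        rec["status"] = "processing"
        try:
            send_to_erp(rec)            # the external call happens off the operator path
            rec["status"] = "processed"
        except Exception:
            rec["status"] = "error"     # left visible for retry or investigation

capture_scan("SN-0001")                 # stays fast even if the external system is down
sync_pending(lambda rec: None)          # replace the lambda with the real external call
print(records[0]["status"])             # -> processed
```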

Checklist

  • Tables first: All critical data capture (scans, operator decisions, measurements) is written first to a Tulip Table.
  • No long API calls in operator triggers: Operator-facing triggers do not contain long-running Connector Functions or complex external API logic.
  • Decoupling pattern: Automations are preferred for making external API calls where possible.
  • Status fields: Processing tables use clear status fields (pending, processing, processed, error) so failures can be retried or investigated.
  • Controlled event rate: App design does not allow operators to generate sustained event rates that would fill the queue during short outages (see outage math above).
  • Timers and loops: Timer triggers and loops are not firing at unnecessarily short intervals (e.g., every 1-5 seconds) unless absolutely required and validated.
  • Connector error handling: Connector Functions that can fail due to network issues have error handling and retry mechanisms (preferably via Automations, not operator triggers).
  • Performance best practices: Apps adhere to Tulip's best practices for high performance apps.

Technical configuration and troubleshooting

If your network does not meet the quantifiable requirements outlined above, or if you are experiencing performance issues that you suspect are network-related, please follow the steps in our detailed guide. This guide provides a prioritized flowchart for diagnosing and resolving network performance issues, from foundational checks to advanced packet-level analysis.

See the full network troubleshooting guide here.

Network and WebSocket test utility

Tulip makes extensive use of WebSockets, a type of long-lived connection, to enable real-time updates. These WebSockets use SSL encryption over port 443. Some proxies do not support these connections, and some network monitors may need to explicitly allow them. Please review the section above for compatibility details.

A simple utility is available to help you test your network connection's latency and WebSocket compatibility:

  • https://your-account.tulip.co/networkCheck
    OR
  • https://your-account.dmgmori-tulip.com/networkCheck

Commissioning and ongoing monitoring checklist

  • Initial test: For each new station, run https://your-account.tulip.co/networkCheck (or the DMG MORI URL) and your own diagnostics for at least 10-15 minutes during normal production load.
  • Record metrics: Capture average, minimum, and maximum latency, jitter, and any observed packet loss; verify they meet the thresholds in this article.
  • Peak-load test: Repeat the test during a known peak-load period on the factory network (shift-change, large batch processing) to ensure stability under stress.
  • Regression tests after changes: Re-run the test after any significant change to network, WiFi, firewall, or proxy configuration affecting Tulip.
  • Baseline storage: Store test results (screenshots or exports) as part of your validation / commissioning records.
  • Periodic re-verification: Schedule periodic re-tests (e.g., quarterly) for critical stations or when new apps with higher event rates are deployed.

Allowlist

The section below lists all connections Tulip requires to operate as a platform, separated by Tulip component. It is the complete list of addresses owned by Tulip.

Users can identify the garden identifier where their Tulip Instance is hosted within the Account Settings menu.

Garden Identifier Inbound Outbound Asset Storage
eu-32*
  • 18.196.208.150
  • 52.29.186.76
  • 3.65.140.161
  • 3.68.164.149
  • 3.69.40.142
  • 3.78.33.159
  • 18.157.121.27
  • 3.123.244.32
  • 52.59.20.237
  • 3.78.105.122
  • 18.193.245.36
  • 3.122.220.199
  • https://s3.eu-central-1.amazonaws.com/co.tulip.factory/
eu-70*
  • 4.175.210.240
  • 4.175.210.241
  • 4.175.210.242
  • 4.175.210.243
  • 20.71.18.62
  • 51.124.87.41
  • https://dmgtulip.blob.core.windows.net/
apac-94*
  • 20.27.132.100
  • 20.27.132.101
  • 20.27.132.102
  • 20.27.132.103
  • 20.78.30.60
  • 20.48.9.84
  • https://dmgjapantulip.blob.core.windows.net/
us-11*
  • 44.227.129.199
  • 44.238.198.162
  • 44.241.118.68
  • 44.231.104.41
  • 52.35.58.248
  • 52.36.233.68
  • 34.212.49.37
  • 52.35.248.97
  • 52.10.204.120
  • 34.213.88.194
  • 52.25.126.217
  • 44.240.150.101
  • https://s3.us-west-2.amazonaws.com/co.tulip.tulip-aws-us-west-2-nonprod-11.factory/
us-14*
  • 20.84.217.148
  • 20.84.217.149
  • 20.84.217.150
  • 20.84.217.151
  • 20.84.216.50
  • https://garden14factory.blob.core.windows.net/
us-15*
  • 3.208.72.216
  • 3.208.72.217
  • 3.208.72.218
  • 3.208.72.232
  • 3.208.72.233
  • 3.208.72.234
  • 3.208.72.210
  • 3.208.72.211
  • 3.208.72.212
  • 3.208.72.229
  • 3.208.72.230
  • 3.208.72.231
  • https://s3.us-east-1.amazonaws.com/co.tulip.factory/
apac-19*
  • 20.210.51.28
  • 20.210.51.29
  • 20.210.51.30
  • 20.210.51.31
  • 20.210.45.100
  • https://garden19factory.blob.core.windows.net/
cn-20*
  • 159.27.126.40
  • 159.27.126.41
  • 159.27.126.42
  • 159.27.126.43
  • 159.27.127.10
  • https://garden20factory.blob.core.chinacloudapi.cn/
cn-21*
  • 71.132.38.120
  • 52.80.236.54
  • 71.131.201.160
  • 52.81.123.14
  • 54.223.195.74
  • 54.223.59.65
  • 71.131.201.138
  • 71.132.24.122
  • 71.132.7.44
  • 54.223.241.238
  • 54.223.51.22
  • 140.179.71.41
  • https://s3.cn-north-1.amazonaws.com.cn/co.tulip.tulip-aws-cn-north-1-prod-21.factory/
usgov-22*
  • 3.30.98.35
  • 3.30.98.36
  • 3.30.98.37
  • 3.30.98.41
  • 3.30.98.42
  • 3.30.98.43
  • 3.30.98.32
  • 3.30.98.33
  • 3.30.98.34
  • 3.30.98.38
  • 3.30.98.39
  • 3.30.98.40
  • https://s3.us-gov-west-1.amazonaws.com/co.tulip.factory/
apac-25*
  • 13.214.78.164
  • 13.250.183.11
  • 52.77.113.195
  • 18.139.21.26
  • 18.143.235.80
  • 13.251.239.252
  • 52.74.23.60
  • 54.251.114.217
  • 18.141.118.107
  • 18.136.228.199
  • 52.220.60.219
  • 122.248.216.79
  • https://s3.ap-southeast-1.amazonaws.com/co.tulip.tulip-aws-ap-southeast-1-eksapac-25.factory/
eu-27*
  • 51.103.12.96
  • 51.103.12.97
  • 51.103.12.98
  • 51.103.12.99
  • 20.74.20.137
  • https://garden27factory.blob.core.windows.net/
us-28*
  • 4.156.65.240
  • 4.156.65.241
  • 4.156.65.242
  • 4.156.65.243
  • 4.156.123.70
  • https://garden28factory.blob.core.windows.net/
apac-30*
  • 52.193.203.7
  • 3.114.39.205
  • 54.238.210.93
  • 54.168.171.117
  • 13.115.103.18
  • 52.192.225.86
  • 52.69.36.14
  • 18.182.234.190
  • 57.181.129.61
  • 52.193.201.211
  • 3.113.145.191
  • 35.79.129.101
  • https://s3.ap-northeast-1.amazonaws.com/co.tulip.tulip-aws-ap-northeast-1-prod-30.factory/
fedramp-32*
us-33*
  • 3.143.190.234
  • 18.189.54.45
  • 3.150.60.241
  • 3.149.138.1
  • 3.141.227.62
  • 3.20.218.71
  • 18.225.12.77
  • 18.189.216.223
  • 3.19.226.227
  • 18.189.239.168
  • 3.150.156.147
  • 13.58.6.116
  • https://s3.us-east-2.amazonaws.com/co.tulip.tulip-aws-us-east-2-prod-33.factory/

Admin interface (Web Browser)

New Requirements

Outgoing access to Custom Widgets:

  • *.tulip-custom-widgets.com
    OR
  • *.dmgmori-tulip-custom-widgets.com

To use Tulip's web interface, we require the following:

Requirements

Outgoing access to Factory:

  • https://your-account.tulip.co/
    OR
  • https://your-account.dmgmori-tulip.com/

Outgoing access to Custom Widgets:

  • *.tulip-custom-widgets.com
    OR
  • *.dmgmori-tulip-custom-widgets.com

Outgoing access to content delivery network (CDN):

Outgoing access for live chat support:

  • https://api-iam.intercom.io/
  • https://nexus-websocket-a.intercom.io/
  • *.zopim.com (port 80 and 443)
    • If wildcards are not allowed, please provide access for the following subdomains:
      • chat-api.zopim.com
      • ccapi-larboard.zopim.com
      • chat-polaris-api.zopim.com
      • chat-polaris-larboard.zopim.com
      • widget-mediator.zopim.com
      • dashboard-mediator.zopim.com
      • chat-polaris.zopim.com

Outgoing access for onboarding interactions:

Tulip Player

To download, use, and update the Tulip Player, we require the following:

New Requirements

Outgoing access to Custom Widgets:

  • *.tulip-custom-widgets.com
    OR
  • *.dmgmori-tulip-custom-widgets.com

Requirements

Outgoing access to Factory:

  • https://your-account.tulip.co/
    OR
  • https://your-account.dmgmori-tulip.com/

Outgoing access to Custom Widgets:

  • *.tulip-custom-widgets.com
    OR
  • *.dmgmori-tulip-custom-widgets.com

Outgoing access to content delivery network (CDN):

Outgoing access for Player updates:

Tulip Edge Devices

To use and update Tulip's hardware, we require the following:

Requirements
Outgoing access to Factory:

  • https://your-account.tulip.co/
    OR
  • https://your-account.dmgmori-tulip.com/

Outgoing access for Edge Device updates:

Outgoing access for date time synchronization:

  • ntp://[0-3].north-america.pool.ntp.org

Tulip Cloud Connector Host

To use the Tulip Cloud Connector Host to connect to a database, API, or OPC UA server, we require the following:

Requirements

Incoming access to Factory:

  • https://your-account.tulip.co/
    OR
  • https://your-account.dmgmori-tulip.com/

Tulip On Premise Connector Host

To use the self-hosted Docker Tulip Connector Host to connect to a database, API, or OPC UA server, we require the following:

Requirements

Outgoing access to Factory:

  • https://your-account.tulip.co/
    OR
  • https://your-account.dmgmori-tulip.com/

Outgoing access for updates:

Outgoing access for connections:

  • All third-party services to be connected to Tulip.

Revision history (since February 2025)

| Area of Revision | Date of Revision | Revision Summary |
| --- | --- | --- |
| | 11/18/2025 | New sections and information |
| Admin Interface (Web Browser) & IP Allowlist | 10/23/2025 | Updated IP allowlist based on garden identifiers & removed outdated CSV import requirements |
| Allowlist / Region: us | 9/29/2025 | Added "20.84.216.50" to Connector Host addresses in us region |
| Allowlist / Region: us | 9/13/2025 | One Connector Host address updated from '20.84.217.148/31' to '20.84.217.148/30' |
| Admin Interface (Web Browser) & Tulip Player | 4/17/2025 | Improved highlighting of requirement for Custom Widgets |
| Allowlist / Region: apac | 2/27/2025 | Instance & Connector Host: additional IP addresses; additional URL in Asset Storage |
| Admin Interface (Web Browser) & Tulip Player | 2/12/2025 | New requirement for Custom Widgets |
