    Overview of On-Premise Connector Hosts
    • 04 Apr 2025

    LTS12+ OPCH Networking Changes

    With LTS12, the way the OPCH communicates with the Tulip Cloud changed. These changes were necessary to ensure the OPCH could support massive deployments of connectors and machines.

    With this change, the OPCH moved from maintaining a continuous WebSocket connection with the cloud to being stateless. This leads to an increase in RESTful calls between the OPCH and the Tulip Cloud. During OPCH upgrades, customers are responsible for ensuring proxies and firewalls can support this added load.

    Purpose

    Learn how to leverage On-Premise Connector Hosts for your integrations.

    Prerequisites

    To learn about Connector Hosts in Tulip, first review this article.

    Overview

    This article intends to serve as a point of reference for On-Premise Connector Hosts (OPCH) in Tulip. The Connector Host is a service used to facilitate connections from Tulip to external web services, databases, and OPC UA servers. All Tulip instances have a Cloud Connector Host by default.

    There are several factors to consider when determining whether an On-Premise Connector Host is the correct architectural fit.

    Key Considerations for On-Premise Connector Host

    The considerations for an On-Premise Connector Host can be broken down into a few categories:

    1. Networking
    2. Infrastructure management
    3. Performance

    Networking

    The most common rationale for deploying an On-Premise Connector Host is for the advantages it offers when connecting to systems hosted within a local network. With the on-premise offering, all connections from Tulip to external systems start from within your local network. All connections from your network are outbound to Tulip via a secure WebSocket.

    This contrasts with Cloud Connector Hosts, which require inbound access to the services. Allowing inbound secure WebSocket connections from Tulip's cloud to the service is typically an IT decision, oftentimes implemented with port forwarding rules on the WAN router/firewall.

    Infrastructure Management

    To deploy an On-Premise Connector Host, there are several infrastructure components that the customer is responsible for. Below is a basic roles and responsibilities matrix:

    | Responsibility | Tulip | Customer |
    | --- | --- | --- |
    | Provide technical resources on OPCH | X | |
    | Virtual machine hosting and deployment | | X |
    | Virtual machine monitoring and updates | | X |
    | Generating OPCH credentials | X | |
    | Deploying OPCH | | X |
    | Updating OPCH | | X |
    | Monitoring OPCH | | X |
    | Troubleshooting OPCH | X | X |

    The customer will ideally be comfortable with the technologies they use to deploy the Connector Host, as well as with technologies like Docker for container management.

    Deploying an On-Premise Connector Host

    Technical Standards

    OPCH Performance

    The amount of resources needed to run OPCH will increase as its usage increases. If you consistently use it beyond 250Hz of throughput, we strongly advise allocating more resources to your virtual machines to ensure optimal performance.

    When the decision is made to deploy an on-premise solution, Tulip recommends a self-service route using the distributed Docker image. The easiest way to accomplish this is to use a virtual machine running a Linux distribution (Ubuntu is preferred).

    Tulip also recommends hosting only one On-Premise Connector Host per virtual machine to avoid a single point of failure for sites.

    Virtual machine requirements:

    • RAM - 4 GB
    • Disk - 8-16 GB
    • CPU - 2 cores
    • Docker version - 20.10+
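    As a quick sanity check (a sketch, not part of Tulip's tooling), the Docker version requirement can be verified with a small shell helper; `check_docker_version` is a hypothetical name:

    ```shell
    # Hypothetical helper: succeed only if a "major.minor[.patch]" Docker
    # version string meets the 20.10+ requirement listed above.
    check_docker_version() {
      major=${1%%.*}        # text before the first dot
      rest=${1#*.}
      minor=${rest%%.*}     # text between the first and second dots
      [ "$major" -gt 20 ] || { [ "$major" -eq 20 ] && [ "$minor" -ge 10 ]; }
    }

    # Example: compare against the version installed on the VM
    # check_docker_version "$(docker version --format '{{.Server.Version}}')" \
    #   && echo "Docker is new enough" || echo "Upgrade Docker"
    ```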

    For networking requirements, the On-Premise Connector Host has the following:

    • An IP address
    • DNS resolution to <your-instance.tulip.co>
    • Outbound access on port 443 to Tulip (IPs listed here)
    • Outbound access to the Docker repository here
    • Outbound access to all relevant external systems with ports

    Review the complete list of network requirements here
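    These requirements can be spot-checked from the VM before deployment. The sketch below assumes a Linux host with `getent` and `curl` available; `your-instance.tulip.co` is a placeholder for your actual instance hostname:

    ```shell
    # Preflight sketch: verify DNS resolution and outbound HTTPS (port 443)
    # to the Tulip instance. Replace the placeholder hostname before running.
    preflight_dns() {
      # Succeeds if the hostname resolves via the VM's configured resolver
      getent hosts "$1" > /dev/null && echo "DNS OK for $1"
    }

    # preflight_dns your-instance.tulip.co
    # curl -sSf -o /dev/null "https://your-instance.tulip.co" && echo "outbound 443 OK"
    ```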

    Requesting Credentials

    Reach out to Tulip Support (support@tulip.co) to request On-Premise Connector Host credentials using the following template, filling in any details enclosed in brackets:

    Hello,
    
    This is a request to create a new On-Premise Connector Host.
    
    Tulip instance: <your-instance.tulip.co>
    OPCH name: <CompanyName>-<InstanceName>-<OptionalIdentification>-CH
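    As an illustration of the naming format (the helper name `opch_name` is hypothetical, not a Tulip tool):

    ```shell
    # Hypothetical helper composing an OPCH name in the template's format:
    # <CompanyName>-<InstanceName>-<OptionalIdentification>-CH
    opch_name() {
      if [ -n "$3" ]; then
        printf '%s-%s-%s-CH\n' "$1" "$2" "$3"
      else
        printf '%s-%s-CH\n' "$1" "$2"
      fi
    }

    # opch_name Acme prod line1   # → Acme-prod-line1-CH
    ```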

    Tulip will create and share credentials through a secure, temporary password link. Details should be transferred to an internally managed credential storage and include the following:

    • Factory
    • UUID
    • Machine Secret

    Reusing Credentials

    On-Premise Connector Host credentials should not be used to create more than one Connector Host - this would result in connectivity problems for all hosts sharing credentials.

    Available On-Premise Connector Host Versions (Tags)

    Version Compatibility

    OPCH must be kept up-to-date with the Tulip product. More information.

    Tulip uses Docker image tags to version Connector Host images. Below is a list of actively supported On-Premise Connector Host tags that can be used in conjunction with Docker run and pull commands.

    | LTS Version | Biweekly Version | Most Recent OPCH Tag |
    | --- | --- | --- |
    | LTS11 | r262-r274 | lts11.7 |
    | LTS12 | r275-r287 | lts12.10 |
    | LTS13 | r288-r307 | lts13.4 |
    | LTS14 | r308+ | lts14 |

    Deployment

    The following section outlines how to deploy an On-Premise Connector Host in a variety of environments. AWS and Azure both offer container services capable of running the Docker image.

    Pre-LTS12 OPCH

    Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively.

    • Azure:

      ```
      az container create \
        -g <NAME OF THE RESOURCE GROUP IN AZURE> \
        --name <NAME FOR THE CONTAINER> \
        --cpu 2 \
        --memory 3 \
        --restart-policy Always \
        --image bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host:<TAG> \
        -e TULIP_UUID='<UUID>' \
        TULIP_FACTORY='https://<YOUR SITE>.tulip.co' \
        TULIP_MACHINE_SECRET='<SECRET>' \
        TULIP_DEVICE_TYPE='onprem' \
        CONNECTORS_HTTP_PROXY='' \
        CONNECTORS_HTTPS_PROXY=''
      ```
    • Linux VM:

    Pre-LTS12 OPCH

    Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively.

    ```
    docker run -d \
        --name tulip-connector-host \
        -e TULIP_FACTORY='https://<FACTORY>.tulip.co' \
        -e TULIP_UUID='<UUID>' \
        -e TULIP_MACHINE_SECRET='<SECRET>' \
        -e TULIP_DEVICE_TYPE='onprem' \
        -e CONNECTORS_HTTP_PROXY='' \
        -e CONNECTORS_HTTPS_PROXY='' \
        -e EXIT_ON_DISCONNECT=true \
        --restart=always \
        --net=host \
        --mount type=volume,source=tuliplog,target=/log \
       bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host:<TAG>
    ```
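    As a variation on the command above (our suggestion, not a Tulip requirement), the credentials can be supplied via Docker's `--env-file` flag so secrets stay out of shell history; the angle-bracket values are placeholders:

    ```shell
    # Sketch: place the credentials in an env file readable only by the
    # deploying user, instead of passing them with repeated -e flags.
    cat > opch.env <<'EOF'
    TULIP_FACTORY=https://<FACTORY>.tulip.co
    TULIP_UUID=<UUID>
    TULIP_MACHINE_SECRET=<SECRET>
    TULIP_DEVICE_TYPE=onprem
    EOF
    chmod 600 opch.env

    # Then reference the file in the run command:
    # docker run -d --name tulip-connector-host --env-file opch.env \
    #   --restart=always --net=host \
    #   --mount type=volume,source=tuliplog,target=/log \
    #   bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host:<TAG>
    ```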

    Upgrading an On-Premise Connector Host

    Version Compatibility

    OPCH must be kept up-to-date with the Tulip product. More information.

    Tulip releases updates to the On-Premise Connector Host in accordance with our long term support (LTS) release schedule. To upgrade the service, follow the instructions below:

    The upgrade process for an OPCH will result in downtime while the container is stopped and recreated.

    1. Obtain the latest version of the On-Premise Connector Host Docker image.

      docker pull bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host:<TAG>
    2. Run the below command to get the Docker container ID.

      docker ps
    3. If you have access to the TULIP_FACTORY, TULIP_UUID, and TULIP_MACHINE_SECRET, go to step 4. If not, run the following command and store the output of this command in a secure location.

      docker exec <container-id> env
    4. Stop the existing Docker container.

      docker stop <container-id>
    5. Remove the existing Docker container.

      docker rm <container-id>
    6. Run the standard Docker run command using the stored credentials.

    Pre-LTS12 OPCH

    Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively.

    ```
    docker run -d \
        --name tulip-connector-host \
        -e TULIP_FACTORY='https://<FACTORY>.tulip.co' \
        -e TULIP_UUID='<UUID>' \
        -e TULIP_MACHINE_SECRET='<SECRET>' \
        -e TULIP_DEVICE_TYPE='onprem' \
        -e CONNECTORS_HTTP_PROXY='' \
        -e CONNECTORS_HTTPS_PROXY='' \
        -e EXIT_ON_DISCONNECT=true \
        --restart=always \
        --net=host \
        --mount type=volume,source=tuliplog,target=/log \
       bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host:<TAG>
    ```
    7. Confirm the new Docker container is active.

      docker ps
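    The pull/stop/remove sequence above can be sketched as a single script. This is an illustration only (the container name and `<TAG>` follow the examples in this article); by default it just prints the commands it would run:

    ```shell
    # Dry-run sketch of the upgrade sequence. The standard run command
    # (step 6) must still be re-executed afterwards with stored credentials.
    IMAGE=bckca2dh98.execute-api.us-east-1.amazonaws.com/public/connector-host
    upgrade_opch() {
      tag=$1; name=${2:-tulip-connector-host}
      for cmd in \
          "docker pull $IMAGE:$tag" \
          "docker stop $name" \
          "docker rm $name"; do
        if [ "${RUN:-0}" = 1 ]; then $cmd; else echo "$cmd"; fi
      done
    }

    # upgrade_opch lts14         # print the commands only
    # RUN=1 upgrade_opch lts14   # actually execute them
    ```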

    Additional References

    Enabling Log-Rotations for Docker

    For existing On-Premise Connector Hosts that are not using Docker log-rotations, follow the instructions documented here to ensure disk-space is properly maintained.
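    For reference, Docker's built-in json-file driver supports rotation via `/etc/docker/daemon.json`. The limits below (rotate at 10 MB, keep 3 files) are example values, not Tulip's documented recommendation:

    ```shell
    # Sketch: write an example daemon.json locally, then copy it into place
    # and restart Docker; rotation applies to containers created afterwards.
    cat > daemon.json <<'EOF'
    {
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "10m",
        "max-file": "3"
      }
    }
    EOF
    # sudo cp daemon.json /etc/docker/daemon.json
    # sudo systemctl restart docker
    ```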

