With LTS12, the way OPCH communicates with the Tulip Cloud changed. These changes were necessary to ensure the OPCH can support massive deployments of connectors and machines.
With this change, the OPCH moved from maintaining a continual WebSocket connection with the cloud to being stateless. This leads to an increase in RESTful calls between the OPCH and the Tulip Cloud. During OPCH upgrades, customers are responsible for ensuring proxies and firewalls can support this added load.
Purpose
Learn how to leverage On-Premise Connector Hosts for your integrations.
Prerequisites
To learn about Connector Hosts in Tulip, first review this article.
Overview
This article intends to serve as a point of reference for On-Premise Connector Hosts (OPCH) in Tulip. The Connector Host is a service used to facilitate connections from Tulip to external web services, databases, and OPC UA servers. All Tulip instances have a Cloud Connector Host by default.
There are several considerations to make when determining if an On-Premise Connector Host is the correct architecture fit.
Key Considerations for On-Premise Connector Host
The considerations for an On-Premise Connector Host can be broken down into a few categories:
1. Networking
2. Infrastructure management
3. Performance
Networking
The most common rationale for deploying an On-Premise Connector Host is the advantage it offers when connecting to systems hosted within a local network. With the on-premise offering, all connections from Tulip to external systems originate from within your local network, and all connections from your network are outbound to Tulip via a secure WebSocket.
This contrasts with Cloud Connector Hosts, which require inbound access to the services. Allowing inbound secure WebSocket connections from Tulip's cloud to the service is typically an IT decision, often implemented with port-forwarding rules on the WAN router/firewall.
Infrastructure Management
To deploy an On-Premise Connector Host, there are several infrastructure components that the customer is responsible for. Below is a basic roles and responsibilities matrix:
| | Tulip | Customer |
|---|---|---|
| Provide technical resources on OPCH | X | |
| Virtual machine hosting and deployment | | X |
| Virtual machine monitoring and updates | | X |
| Generating OPCH credentials | X | |
| Deploying OPCH | | X |
| Updating OPCH | | X |
| Monitoring OPCH | | X |
| Troubleshooting OPCH | X | X |
The customer should be comfortable with the technologies used to deploy the Connector Host, as well as with container-management tooling such as Docker.
Deploying an On-Premise Connector Host
Technical Standards
The resources needed to run the OPCH increase as its usage increases. If you consistently use it beyond 250 Hz of throughput, we strongly advise allocating more resources to your virtual machines to ensure optimal performance.
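One simple way to check whether the host is approaching its resource limits is Docker's built-in stats command. This is a minimal sketch; the container identifier is a placeholder, not a value from this article.

```bash
# Show a one-time snapshot of CPU and memory usage for the running connector host.
# Replace <container-id-or-name> with the ID returned by `docker ps`.
docker stats <container-id-or-name> --no-stream
```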
When the decision is made to deploy an on-premise solution, Tulip recommends a self-service route using the distributed Docker image. The easiest way to accomplish this is to use a virtual machine running a Linux distribution (Ubuntu is preferred).
Tulip also recommends hosting only one On-Premise Connector Host per virtual machine to avoid a single point of failure for sites.
Virtual machine requirements:
- RAM - 4 GB
- ROM - 8-16 GB disk size
- CPU - 2 cores
- Docker version - 20.10+
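A quick way to confirm a candidate VM meets these minimums is to check them from a shell. This is a minimal sketch using standard Linux and Docker commands; it is not part of Tulip's documented procedure.

```bash
# Check available memory, disk space, CPU cores, and Docker version against the requirements above
free -h            # total RAM (at least 4 GB)
df -h /            # free disk space (8-16 GB recommended)
nproc              # CPU core count (2 or more)
docker --version   # Docker Engine version (20.10 or newer)
```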
The On-Premise Connector Host has the following networking requirements:
- An IP address
- DNS resolution to <your-instance.tulip.co>
- Outbound access on port 443 to Tulip (IPs listed here)
- Outbound access to the Docker repository here
- Outbound access to all relevant external systems on their required ports
Review the complete list of network requirements here
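Before deploying, a basic outbound connectivity check can save troubleshooting time later. The sketch below assumes standard Linux tooling; your-instance.tulip.co is a placeholder for your actual instance address.

```bash
# Confirm DNS resolution and outbound HTTPS (port 443) to your Tulip instance.
nslookup your-instance.tulip.co
curl -sv https://your-instance.tulip.co -o /dev/null
```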
Requesting Credentials
Reach out to Tulip Support (support@tulip.co) to request On-Premise Connector Host credentials using the following template, filling in any details enclosed in brackets:
Tulip will create and share credentials through a secure, temporary password link. The details should be transferred to internally managed credential storage and include the following:
- Factory
- UUID
- Machine Secret
On-Premise Connector Host credentials should not be used to create more than one Connector Host - this would result in connectivity problems for all hosts sharing credentials.
Available On-Premise Connector Host Versions (Tags)
OPCH must be kept up-to-date with the Tulip product. More information.
Tulip uses Docker image tags to version Connector Host images. Below is a list of actively supported On-Premise Connector Host tags that can be used with the `docker run` and `docker pull` commands.
| LTS Version | Biweekly Version | Most Recent OPCH Tag |
|---|---|---|
| LTS11 | r262 - r274 | lts11.7 |
| LTS12 | r275 - r287 | lts12.10 |
| LTS13 | r288 - r307 | lts13.4 |
| LTS14 | r308+ | lts14 |
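As an illustration, pulling the most recent LTS14 image from the table above might look like the following. The repository path and image name are placeholders; the actual Docker repository is only referenced by link in this article.

```bash
# Pull the most recent LTS14 connector host image.
# <docker-repository>/connector-host is a placeholder for the repository linked above.
docker pull <docker-repository>/connector-host:lts14
```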
Deployment
The following section outlines how to deploy an On-Premise Connector Host in a variety of environments. AWS and Azure both offer container services capable of running the Docker image.
AWS:
- Use the Web UI and this instruction set: https://aws.amazon.com/getting-started/hands-on/deploy-docker-containers/
Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively
Azure:
Linux VM:
Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively
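The Azure and Linux VM instructions center on a `docker run` command. The command below is a minimal sketch of what that typically looks like, with placeholder values for the repository, tag, and the credential values described earlier; it is not the exact command from Tulip's documentation, and the container name and restart policy are illustrative choices.

```bash
# Minimal sketch of starting the On-Premise Connector Host container.
# All values in angle brackets are placeholders, not values from this article.
# Prior to LTS12, use HTTPS_PROXY / HTTP_PROXY instead of the CONNECTORS_* names.
docker run -d \
  --name tulip-connector-host \
  --restart unless-stopped \
  -e TULIP_FACTORY=<factory-from-credentials> \
  -e TULIP_UUID=<uuid-from-credentials> \
  -e TULIP_MACHINE_SECRET=<machine-secret-from-credentials> \
  -e CONNECTORS_HTTPS_PROXY=<optional-proxy-url> \
  -e CONNECTORS_HTTP_PROXY=<optional-proxy-url> \
  <docker-repository>/connector-host:lts14
```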
Upgrading an On-Premise Connector Host
OPCH must be kept up-to-date with the Tulip product. More information.
Tulip releases updates to the On-Premise Connector Host in accordance with our long-term support (LTS) release schedule. To upgrade the service, follow the instructions below:
The upgrade process for an OPCH will result in downtime while the container is stopped and recreated.
1. Obtain the latest version of the On-Premise Connector Host Docker image.
2. Run the below command to get the Docker container ID.
3. If you have access to the `TULIP_FACTORY`, `TULIP_UUID`, and `TULIP_MACHINE_SECRET`, go to step 4. If not, run the following command and store the output of this command in a secure location.
4. Stop the existing Docker container.
5. Remove the existing Docker container.
6. Run the standard `docker run` command leveraging the stored set of credentials.

Prior to LTS12, the environment variables CONNECTORS_HTTPS_PROXY and CONNECTORS_HTTP_PROXY must be replaced with HTTPS_PROXY and HTTP_PROXY, respectively.

7. Confirm the new Docker container is active.
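The commands referenced in the steps above are not reproduced in this article. The sequence below is a hedged sketch of what they typically look like using standard Docker CLI commands, with placeholders for the container ID, repository, tag, and credentials.

```bash
# Step 2: list running containers and note the connector host's container ID.
docker ps

# Step 3 (only if you do not already have the credentials): print the environment
# variables of the existing container and store the output securely.
docker inspect <container-id> --format '{{range .Config.Env}}{{println .}}{{end}}'

# Steps 4-5: stop and remove the existing container.
docker stop <container-id>
docker rm <container-id>

# Step 6: start a new container from the latest image using the stored credentials
# (see the deployment sketch earlier in this article for the full set of flags).
docker run -d \
  -e TULIP_FACTORY=<factory-from-credentials> \
  -e TULIP_UUID=<uuid-from-credentials> \
  -e TULIP_MACHINE_SECRET=<machine-secret-from-credentials> \
  <docker-repository>/connector-host:lts14

# Step 7: confirm the new container is running.
docker ps
```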
Additional References
Enabling Log-Rotations for Docker
For existing On-Premise Connector Hosts that are not using Docker log rotation, follow the instructions documented here to ensure disk space is properly maintained.
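The linked instructions are not reproduced here. One common way to enable log rotation for all containers on a host is through Docker's daemon configuration, sketched below; the size and file-count values are illustrative, not Tulip's recommended settings.

```bash
# Configure the Docker daemon to rotate json-file logs host-wide.
# If /etc/docker/daemon.json already exists, merge these keys rather than overwriting it.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Restart Docker and recreate the connector host container; the new logging
# options apply only to containers created after the restart.
sudo systemctl restart docker
```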
Did you find what you were looking for?
You can also head to community.tulip.co to post your question or see if others have faced a similar question!