Visual inspection is a key aspect of maintaining production quality. It is used across manufacturing to reduce the number of defective parts and increase the overall yield of a process. However, visual inspection is difficult to implement while staying lean, since dedicated human inspection resources are costly. Manual visual inspection is also a high-churn operation: it is highly repetitive and can be visually demanding.

To address this, automatic visual inspection with cameras and machine learning emerged and changed the picture entirely. Machine learning-based visual inspection algorithms have advanced to the point where they surpass human performance in both speed and accuracy.

Using Tulip, you can implement automatic visual inspection by connecting your Vision camera outputs to a best-in-class visual anomaly detection cloud service. Amazon Lookout for Vision is one such service; it offers a powerful algorithm behind a simple REST API endpoint that can be easily integrated with Tulip.

In this article we show you how to set things up quickly on AWS with CloudFormation, as well as within Tulip with a pre-bundled Library App. You can get started with machine learning-based visual inspection on your shop floor within minutes and start automating your processes to reduce manual workload. See the following video for a live walkthrough of this process.

Prerequisites

Before beginning this process, please make sure that you have the following resources available:

Setting up Lookout for Vision in the AWS Console

To assist in setting up the required cloud infrastructure in the AWS console, we provide a CloudFormation Template ([download link]). The CFT is a short script that provisions everything needed on AWS to train and run detection models from Lookout for Vision (LfV) with Tulip. It creates an LfV project, AWS Lambda functions to control the LfV model, and an AWS API Gateway to communicate with those functions. Tulip calls the API Gateway endpoints and thereby sends commands to the LfV model from within Tulip Apps, where the visual inspection is done.

  1. Download the CFT from: ...

  2. Go to CloudFormation in the AWS console

  3. Create a stack and upload the CFT file

  4. Run the CloudFormation stack creation script

  5. Note the outputs

    Write down, or otherwise note, the apiGatewayInvokeURL value, as we will use it again shortly to set up our connector functions.
    Also write down the API key from the "Resources" tab. You will need to click on it to find the value.
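Before moving on, it is worth sanity-checking the value you copied. API Gateway invoke URLs follow a fixed shape, so a malformed copy-paste is easy to catch. Below is a minimal sketch; the stack values shown are hypothetical examples, not real endpoints:

```python
import re

# API Gateway invoke URLs follow a standard pattern:
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage}
INVOKE_URL_PATTERN = re.compile(
    r"^https://[a-z0-9]+\.execute-api\.[a-z0-9-]+\.amazonaws\.com/\w+/?$"
)

def looks_like_invoke_url(url: str) -> bool:
    """Return True if the string matches the usual invoke URL shape."""
    return bool(INVOKE_URL_PATTERN.match(url))

# Hypothetical value copied from the CloudFormation outputs:
invoke_url = "https://abc123xyz0.execute-api.us-east-1.amazonaws.com/prod"
print(looks_like_invoke_url(invoke_url))  # → True
```

If this check fails, re-copy the apiGatewayInvokeURL output rather than typing it by hand.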

Now that AWS has been provisioned by CloudFormation, we can go ahead and train a visual machine learning model with Lookout for Vision. To do that, we need an annotated dataset of images. While you could create the dataset manually, we offer an easy way to collect data with Tulip using our Data Collection app [link]. The data is stored in a Tulip Table and is easily exportable from it. We will assume for the next few steps that a dataset was collected in Tulip.

  1. Export the dataset from the Tulip Table

  2. Upload the dataset to the S3 bucket created by Lookout for Vision

  3. Create a dataset on Lookout for Vision and select the S3 bucket as a source

    Make sure to select "Automatically attach labels":
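Automatic label attachment works off the S3 folder structure: Lookout for Vision labels images under a folder named normal as normal, and images under a folder named anomaly as anomalous. A minimal sketch of building those S3 keys from an exported table is below; the row fields, label values, and "dataset" prefix are hypothetical and depend on how your Tulip Table is set up:

```python
def s3_key_for(filename: str, label: str, prefix: str = "dataset") -> str:
    """Build an S3 key so Lookout for Vision can auto-attach labels.

    Images under a 'normal' folder are labeled normal; anything else
    goes under 'anomaly' and is labeled anomalous.
    """
    folder = "normal" if label.lower() == "normal" else "anomaly"
    return f"{prefix}/{folder}/{filename}"

# Hypothetical rows exported from the Tulip Table:
rows = [
    {"image": "part_001.jpg", "label": "Normal"},
    {"image": "part_002.jpg", "label": "Defect"},
]
for row in rows:
    print(s3_key_for(row["image"], row["label"]))
# → dataset/normal/part_001.jpg
# → dataset/anomaly/part_002.jpg
```

Organizing the upload this way means the "Automatically attach labels" option can label the whole dataset with no manual annotation.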

Now that a dataset is ready in Lookout for Vision, we can train a model. Training takes roughly 45 minutes and is easy to run from the AWS console.

  1. Click Train Model

  2. Check on the training status

  3. Review training results

Once the model has finished training, there is nothing more we need to do on the AWS side. All the connections have been made for us by the CloudFormation template, and we are clear to connect Tulip to Lookout for Vision.

Setting up Tulip to Call Lookout for Vision Models

Tulip integrates easily with external services via APIs, which makes connecting it to AWS Lookout for Vision straightforward. We will use the REST API endpoint from the last section to control the LfV model through Tulip Connector Functions. Instead of creating the functions from scratch, we provide them pre-built in our "Defect Detection with Lookout for Vision" Library App. This saves you a lot of time and ensures the connectors are built correctly. First, however, we need to populate the connector functions with the right data from your specific AWS account.

  1. Find the Connector Functions in Tulip

  2. For the connector, set the correct endpoint URL from AWS, which you copied earlier from CloudFormation:

  3. For each function, set the API key from AWS

    Make sure you do this for all 4 functions.
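Under the hood, each connector function issues an HTTPS request to the API Gateway endpoint and authenticates with the API key in the x-api-key header, which is the standard header API Gateway uses for key authentication. The sketch below shows that request shape; the /detect route, the payload fields, and all the values are hypothetical and depend on how the CFT wires the Lambda routes:

```python
import json
import urllib.request

def build_request(invoke_url: str, api_key: str, path: str, payload: dict):
    """Construct (but do not send) a request like the Tulip connector's."""
    url = invoke_url.rstrip("/") + path
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,  # API Gateway reads the key from this header
        },
        method="POST",
    )

# Hypothetical values copied from the CloudFormation outputs:
req = build_request(
    "https://abc123xyz0.execute-api.us-east-1.amazonaws.com/prod",
    "my-api-key",
    "/detect",                          # hypothetical route name
    {"image": "<base64-encoded image>"},
)
print(req.full_url)
# → https://abc123xyz0.execute-api.us-east-1.amazonaws.com/prod/detect
```

If a connector function returns a 403 error when you test it, the API key field is the first thing to double-check.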

Using Lookout for Vision in Tulip Apps

In the Defect Detection app provided in the Library, we have set up a basic process for running visual inspection. You can modify the app or copy parts of it into other apps to suit your needs. To use the app, you will need to make some small modifications so it works with your specific Tulip Vision setup: point the capture trigger at your Vision Camera Configuration, and do the same for the camera preview widget used for visual feedback.

  1. Set up the Tulip App camera widget with the right camera configuration

  2. Set up the "Detect Anomalies" trigger with the right camera configuration

Now everything is set to run your model from Tulip. Before the model is available for inference (evaluating an image), it needs to be "hosted," meaning it will occupy cloud compute resources, such as a cloud virtual machine, to serve the model to your app. We have created a button in the provided Tulip App to do that.

Note: Once the model is hosted, it consumes resources that have a cost attached to them. Remember to turn off your models when they are not in use, or you risk incurring charges for no benefit. AWS LfV pricing is described here. Keep in mind that AWS also offers a Free Tier for LfV. Tulip is not responsible for resources hosted privately on AWS, but you are welcome to contact us with inquiries.

Go ahead and run the Defect Detection app on the Player PC with the inspection camera connected. The following steps are performed in the running app:

  1. Turn the model ON ("Hosting")

    Note that once you start the model it will begin incurring costs. Remember to turn it off.

  2. Check the model's hosting status

    Look for the "HOSTED" status. While the status is not "HOSTED" the model will not accept queries.

  3. Run an inference request on the model
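The steps above boil down to a start → poll → infer flow: start hosting, wait for the "HOSTED" status, then send inference requests. A minimal sketch of the polling half is below, with the status lookup injected so a simulated sequence can stand in for the real status connector function; the status strings mirror Lookout for Vision's model states, while the function names and timings are hypothetical:

```python
import time
from typing import Callable

def wait_until_hosted(get_status: Callable[[], str],
                      poll_seconds: float = 30.0,
                      max_polls: int = 60) -> bool:
    """Poll the model status until it reports HOSTED.

    `get_status` stands in for the Tulip connector function that queries
    the model's state. Starting a model can take several minutes, hence
    the generous poll budget. Returns False if the budget runs out.
    """
    for _ in range(max_polls):
        if get_status() == "HOSTED":
            return True
        time.sleep(poll_seconds)
    return False

# Simulated status sequence instead of a live endpoint:
statuses = iter(["STARTING_HOSTING", "STARTING_HOSTING", "HOSTED"])
hosted = wait_until_hosted(lambda: next(statuses), poll_seconds=0.0)
print(hosted)  # → True
```

In the app, the same logic is what makes step 2 matter: inference requests sent before the status reaches "HOSTED" will not be accepted.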

The model should now be making predictions on whether the object is defective or normal. We have added buttons for giving feedback on the detection in case the model makes a wrong prediction. These buttons add more data to the Tulip Table and your dataset. Periodically retrain the model with these additional sample images to increase its robustness.

Conclusions

We have walked through an easy-to-follow process for getting started with machine learning-based automatic visual inspection in Tulip. This process can reduce the manual resources spent on human visual inspection. Applying it also inherently gives you a visual dataset of all the defects that arise in your product, which you can use for purposes beyond inspection.

Machine learning is easy to apply with Tulip using its connectivity features, and other cloud ML services can be used in a very similar way. Check back with Tulip to learn about other ML features.

Further Reading

  • Vision Snapshot feature

  • Connector functions guide
