Visual inspection is an important part of frontline operations. It ensures that only high-quality products leave the line, reduces returned parts and re-work, and increases true yield. Automating visual inspection reduces the manual labor assigned to it, lowering overall costs and freeing operators for other work. With Tulip Vision, visual inspection can be added to any workstation quickly and easily, by connecting an affordable camera to an existing computer and building a Tulip App for inspection.

In this article we will demonstrate how to use Microsoft Azure's Custom Vision service for visual inspection with Tulip. Custom Vision is a no-code online service for easily creating machine learning models for visual recognition tasks. With Tulip you can collect the data for training the machine learning models that Custom Vision offers.


Example Visual Inspection Setup

Uploading the inspection images to Custom Vision

From the dataset Tulip Table, click "Download Dataset" and select the relevant columns for the image and annotation. Download and unzip the dataset .zip file to a folder on your computer. It should contain one subfolder per detection category, according to the annotation column in the dataset table.
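The unzipped dataset mirrors the annotation column: one subfolder per class, each holding that class's images. As a rough sketch (the `image` and `annotation` field names here are hypothetical; match them to the columns you selected when downloading), grouping the table rows by annotation reproduces that folder layout:

```javascript
// Sketch: group dataset rows by their annotation (class), the way the
// downloaded .zip is organized -- one subfolder per class.
// Field names "image" and "annotation" are assumptions.
function groupByAnnotation(rows) {
  const folders = {};
  for (const row of rows) {
    (folders[row.annotation] ??= []).push(row.image);
  }
  return folders;
}

// Example rows as they might appear in the dataset Tulip Table:
const rows = [
  { image: "img_001.jpg", annotation: "Normal" },
  { image: "img_002.jpg", annotation: "Anomaly" },
  { image: "img_003.jpg", annotation: "Normal" },
];
const folders = groupByAnnotation(rows);
// folders.Normal -> ["img_001.jpg", "img_003.jpg"]
```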

Create a new project on Custom Vision

Name your project and select the "Classification" Project Type and the "Multiclass (Single tag per image)" Classification Type (these options are selected by default):

Click "Add Images"

Select the images on your computer for one of the classes. You can select all the images in the corresponding subfolder of the unzipped dataset from the Tulip Table. Since all of these images belong to the same class, you can apply a tag to all of them at once rather than tagging them one by one.

In the following example we upload all the "Normal" class images and apply the tag (class) to all of them at once:

Repeat the same upload operation for the other classes.

Training and publishing a model for visual inspection

Once the data for training is in place, proceed to train the model. The "Train" button in the top right corner will open the training dialog.

Select the training mode appropriately. For a quick trial run to verify that everything is working properly, use the "Quick" option. Otherwise, for the best classification results, use the "Advanced" option.

Once the model is trained, you will be able to inspect its performance metrics, as well as publish the model so it's accessible via an API call.

Select the proper resource for the publication and continue.

At this point your published model is ready to accept inference requests from Tulip. Take note of the publication URL as we will be using it shortly to connect from Tulip.
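Before wiring the endpoint into Tulip, it can help to sketch what the raw request looks like. The helper below (the endpoint URL and key are placeholders, not real values) builds the options the published prediction endpoint expects: a POST with the `Prediction-Key` header and the raw image bytes as an `application/octet-stream` body:

```javascript
// Sketch of the request shape the published model expects.
// The endpoint URL and prediction key below are placeholders --
// substitute the values from your own published model.
function buildPredictionRequest(endpointUrl, predictionKey, imageBlob) {
  return {
    url: endpointUrl,
    options: {
      method: "POST",
      headers: {
        "Prediction-Key": predictionKey,
        "Content-Type": "application/octet-stream",
      },
      body: imageBlob, // raw image bytes
    },
  };
}

const req = buildPredictionRequest(
  "<publication URL from your model>", // placeholder
  "<prediction-key>",                  // placeholder
  null                                 // image blob goes here
);
// e.g. fetch(req.url, req.options).then(r => r.json())
```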

Widget for making inference requests to the published model

Making inference requests to the Azure Custom Vision service can be done in Tulip using a Custom Widget. Go to the Custom Widgets page under Settings.

Create a new Custom Widget and add the following inputs:

For the code parts use the following:


<button class="button" type="button" onclick="detectAnomalies()">Detect Anomalies</button>


Note: Here you will need to get the URL and prediction-key from the published model.

function b64toblob(image) {
  const byteArray = Uint8Array.from(window.atob(image), c => c.charCodeAt(0));
  return new Blob([byteArray], {type: 'application/octet-stream'});
}

async function detectAnomalies() {
  let image = getValue("imageBase64String");
  const url = '<<< Use the URL from the published model >>>';
  $.ajax({
    url: url,
    type: 'post',
    data: b64toblob(image),
    processData: false,
    headers: {
      'Prediction-Key': '<<< Use the prediction key >>>',
      'Content-Type': 'application/octet-stream'
    },
    success: (response) => {
      setValue("predictions", response["predictions"]);
    },
    error: (err) => {
      console.error(err);
    },
    async: false
  });
}


.button {
  background-color: #616161;
  border: none;
  color: white;
  padding: 15px 32px;
  text-align: center;
  text-decoration: none;
  display: inline-block;
  font-size: 16px;
  width: 100%;
}

Your custom widget should look like the following:

Using the prediction widget in a Tulip app

Now that the widget is set up, you can add it to the app in which you will run the inference requests. You may construct an app step like the following:

Use a regular button to take a snapshot from the visual inspection camera and save it in a variable:

Use the "Detect Anomalies" custom widget:

Configure the widget to accept the snapshot image variable as a base64string:
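One gotcha: `window.atob` in the widget code expects a bare base64 string. If the snapshot variable arrives as a data URL (e.g. `data:image/png;base64,...`), the decode will fail on the prefix. A small hedged helper to normalize the input before passing it to the widget:

```javascript
// If the image arrives as a data URL ("data:image/png;base64,..."),
// window.atob will throw on the prefix -- strip it to get bare base64.
// Whether your snapshot variable carries the prefix depends on how it
// was captured, so check the actual value first.
function toBareBase64(value) {
  const comma = value.indexOf(",");
  return value.startsWith("data:") && comma !== -1
    ? value.slice(comma + 1)
    : value;
}

toBareBase64("data:image/png;base64,aGVsbG8="); // -> "aGVsbG8="
toBareBase64("aGVsbG8=");                       // -> "aGVsbG8=" (unchanged)
```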

Also assign the output to a variable to display on screen or use otherwise:
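The predictions output is a list of tag/probability pairs (the `tagName`/`probability` shape below follows Custom Vision's classification response; verify it against your published model's actual output). A small helper to pick the most likely class, e.g. for display or Trigger logic:

```javascript
// Pick the highest-probability prediction from the model response.
// The response shape (tagName/probability) is assumed from Custom
// Vision's classification API -- confirm against your model's output.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Example response payload:
const predictions = [
  { tagName: "Normal",  probability: 0.12 },
  { tagName: "Anomaly", probability: 0.88 },
];
topPrediction(predictions).tagName; // -> "Anomaly"
```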

Your app is now ready to run inference requests for visual inspection.

Running the visual inspection app

Once your app is ready, run it on a Player machine with the same inspection camera you used for data collection. It is important to replicate the conditions of data collection during inspection inference, to eliminate errors from variance in lighting, distance, or angle.

Here is an example of a running visual inspection app:
