Interested? Reach out to firstname.lastname@example.org or use the chat in the bottom right, and we will help you get this application imported to your site.
It is very common for an operation to want to monitor the performance of its workcenters in real time. A workcenter can be a machine, an assembly cell, or any area that is expected to process work and produce value. Workcenters exist in different statuses over time (Running, Setup, Down, etc.) and process jobs or work orders. Production staff commonly track the output of cells using a tool called an “hourly scorecard”, which displays cell performance against target conditions on an hourly basis. The purpose of this app is to provide a structure that can be used and scaled across your organization to digitize these common practices, while also providing insight into reasons for downtime, defects, and the progression of work through your factory.
General Process Overview
The application runs from this main screen, where all of the necessary functions exist as buttons or pop-ups. From this screen you can view your current status, performance metrics, current job, and job queue. You can also add new jobs, begin existing jobs, log parts against the job, complete the job, change your station status, and log defects. All of this information is stored in Tulip tables and can be viewed or interacted with from other applications or dashboards.
When the application is first imported to your instance you will need to create the analytics; they are not included in the import. You will only need to do this once. You will also need to set up a station using the “shop floor” drop-down for each workcenter that will use the app. You may also want to adjust the default rate (located in the “Load tables” trigger on the “main screen” step).
Getting Started - Application Overview
Once you’ve set up your station (from the shop floor menu) and created your analytics (see the analytics section, below) you are ready to begin using the application. When you open the application at your station it will create a row in the *Stations table. We recommend uploading a picture of your station to the *Stations table at this time.
With your station running the application, you are now ready to begin loading jobs and tracking production. There won’t be any jobs in the system to begin with. You can ask your associates or planners to enter the job information. You may choose to implement a barcode scanner to expedite this process or even connect the application to an external data source. Once entered into the system, the jobs will be available to begin logging production data against.
Once a job is loaded, the associate can indicate the status of their cell. This should be a very easy task: simply click on the status you’d like to declare. Every 60 seconds the application will write to the database and update the performance metrics. At any time an associate is permitted to add production quantities to their output by pressing the “LOG PARTS” button or declare defects by pressing the “LOG DEFECTS” button. Depending on your situation, you may choose to have this happen automatically through a barcode scanner, foot pedal, or other system. You may also choose to put a single button that increments one unit at a time. Feel free to adjust the application to your needs.
To keep things simple, there is a single variable called “Default rate (s)” that sets the output expectation for a cell. This is set in the “Load tables” trigger on the main page. You may decide to have this rate be loaded on a per-station, per-job, or per-station-job basis. Feel free to adjust where this rate comes from or start by using a default rate that is closer to your actual expectations. When the cell is “OFF” the target rate will not be applied to the cell.
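The target logic described above can be sketched in plain Python (function and parameter names here are illustrative, not Tulip internals):

```python
def expected_output(rate_s: float, elapsed_s: float, status: str) -> float:
    """Units the cell should have produced during an interval, given a
    rate of one unit every `rate_s` seconds. No target accrues while OFF."""
    if status == "OFF":
        return 0.0
    return elapsed_s / rate_s
```

With a 30-second default rate, each 60-second timer tick adds 2.0 units to the target while the cell is in any status other than OFF.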
When associates transition their cell into the DOWN state they will be prompted with a form asking them to declare a reason for the downtime. These can be customized to your needs. Once a downtime event is logged it will be captured in the *Andon table. You can use other apps to process these events if you’d like. Similarly, when defects are logged they write to the *MRB table. Each of these tables stores a unique tracking number with every defect or downtime event and can be used for analytics or supporting workflows.
When jobs are completed they can be passed to other stations for processing or completed entirely. All of the history of a job can be found in the “*Job History” table. This table will allow you to see the progression of jobs throughout your facility and allow you to capture things like total lead time, waiting time, and processing time.
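As one example of what the history enables, total lead time can be derived from a job’s first and last timestamps. A sketch against a hypothetical flat export of *Job History rows (the 'time' field name is an assumption for illustration):

```python
from datetime import datetime, timedelta

def job_lead_time(events):
    """Total lead time: span from the job's first recorded event to its
    last. `events` is a list of dicts with a 'time' datetime, as might be
    exported from the *Job History table."""
    times = sorted(e["time"] for e in events)
    return times[-1] - times[0]
```

Waiting and processing time could be computed similarly by pairing consecutive station entries and exits.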
The core of the application is based on timers that operate from the master layout. These timer triggers fire sequentially, top to bottom, and populate the *Status History table. As long as the application is open it will be writing to this table, which is the source of the production tracking data.
The hour check timer checks whether you have entered a new hour of production. This limits the number of rows that need to be stored and keeps the analytics simple. Calculating duration measures the time between the last run of the trigger and the current time. This is used to remove large blocks of time that might accumulate when the application is inadvertently turned off (and isn’t running).
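A minimal sketch of these two checks, assuming a five-minute cutoff for discarded gaps (the actual threshold in the app may differ):

```python
from datetime import datetime, timedelta

MAX_GAP = timedelta(minutes=5)  # assumed cutoff for "app was turned off"

def tick_duration(last_tick: datetime, now: datetime) -> timedelta:
    """Time since the last timer run; large gaps (app closed) are dropped."""
    gap = now - last_tick
    return gap if gap <= MAX_GAP else timedelta(0)

def crossed_hour(last_tick: datetime, now: datetime) -> bool:
    """True when a new hour of production has started since the last tick."""
    return last_tick.replace(minute=0, second=0, microsecond=0) != now.replace(
        minute=0, second=0, microsecond=0
    )
```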
The expected output timer increments your target based on the calculated duration and the default output rate.
The create new entry trigger is the most complicated and important trigger in the application. It checks to see if any number of critical parameters have changed (cell status, hour, or product) and creates a new record if they have. In this way, we are able to provide the production data resolution that we need but also keep the data concise and easy to understand.
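The decision that trigger makes can be expressed as a simple comparison (the field names here are illustrative):

```python
CRITICAL_FIELDS = ("status", "hour", "product")

def needs_new_record(prev: dict, curr: dict) -> bool:
    """Start a new *Status History row only when a critical parameter
    changes; otherwise keep accumulating on the current row."""
    return any(prev[f] != curr[f] for f in CRITICAL_FIELDS)
```

Splitting rows only on these transitions is what keeps the table small while still capturing every status, hour, and product change.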
Creating the Analytics
Click the (somewhat difficult to see) “Select from existing” button at the bottom of each analytic inside the app editor.
Select “Table” from the left side of the screen and choose the “*Status History” table. This is the table that stores all of the performance data of the cell.
Click “Create new analysis”. This is the process you will follow for each of the analytics in the app.
This will show you the output of your station.
This will show you your station’s output target.
Copy and paste the following into the expression editor:
floor(sum(@*Status History Expected Output ))
This will sum up the number of defects logged by the station
This will sum up the number of hours logged in the “RUNNING” status
Copy and paste the following into the expression editor (note the filter Status = RUNNING):
round(sum(@*Status History Duration ), 1) + ' hr'
This will sum up the number of hours logged in the “DOWN” status
Copy and paste the following into the expression editor (note the filter Status = DOWN):
round(sum(link(@*Status History Duration , 0)), 1) + ' hr'
This will show the amount of hours that the cell was in the running status relative to all hours that were available (this excludes all hours where the cell is “OFF”).
Copy and paste the following into the expression editor (note the filter Status is not equal to OFF):
round((sumfiltered(@*Status History Duration , @*Status History Status = 'RUNNING') / sum(@*Status History Duration )) * 100, 1) + '%'
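In Python terms, the expression divides RUNNING hours by all non-OFF hours. A sketch with hypothetical data, not Tulip’s implementation:

```python
def utilization_pct(hours_by_status: dict) -> str:
    """RUNNING hours over all available hours; OFF hours are excluded,
    mirroring the analytic's filter."""
    available = {s: h for s, h in hours_by_status.items() if s != "OFF"}
    running = available.get("RUNNING", 0.0)
    return str(round(running / sum(available.values()) * 100, 1)) + "%"
```

For example, 6 RUNNING hours out of 8 available (4 OFF hours excluded) yields 75.0%.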
This will display your quality performance as a yield. Good parts / (Good + Bad parts).
Copy and paste the following into the expression editor:
round((sum(@*Status History Actual Output ) / (sum(@*Status History Actual Output ) + sum(link(@*Status History Defects , 0)))) * 100, 1) + '%'
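A quick numeric check of the yield formula (plain Python, not a Tulip expression):

```python
def yield_pct(good: int, defects: int) -> str:
    """First-pass yield: good parts over all parts produced."""
    return str(round(good / (good + defects) * 100, 1)) + "%"
```

For example, 95 good parts and 5 defects gives a 95.0% yield.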
This will display your performance while in the running status. It tells you whether or not your cell is hitting its target while it is in production, excluding all other statuses.
Copy and paste the following into the expression editor (note the filter Status = RUNNING):
round((sum(@*Status History Actual Output ) / sum(@*Status History Expected Output )) * 100, 1) + '%'
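Numerically, this metric is just actual output over expected output while RUNNING. A plain-Python check:

```python
def performance_pct(actual: float, expected: float) -> str:
    """Actual output over expected output, as a percentage string."""
    return str(round(actual / expected * 100, 1)) + "%"
```

For example, 45 actual parts against a target of 50 gives 90.0%.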
This will display the cell’s performance over time, bucketed by hour. It is located on the hourly scorecard step of the application but you may choose to build this as a separate dashboard application.
Use the following expression to group the analytic by hourly buckets:
format_date_tz(date_trunc(@*Status History Time Start , 'hour'), 'MM/D HH:MI A', 'UTC')
Create two analytics, one for Actual and one for Target, using the following expressions:
sum(link(@*Status History Actual Output , 0))
round(sum(link(@*Status History Expected Output , 0)))
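The date_trunc/format_date_tz grouping above can be mimicked in Python to see what the hourly buckets look like (formatting details are approximate):

```python
from datetime import datetime

def hour_bucket(ts: datetime) -> str:
    """Truncate a timestamp to the hour and format it roughly like
    'MM/D HH:MI A', e.g. '03/5 02:00 PM'."""
    t = ts.replace(minute=0, second=0, microsecond=0)
    return "{:02d}/{} {}".format(t.month, t.day, t.strftime("%I:%M %p"))
```

All rows within the same clock hour collapse into one bucket, which is what lets the chart plot Actual against Target hour by hour.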
This will show you your downtime reasons, sorted by the duration of time spent down.
A Pareto chart is built into analytics. Click the “display” button and select “Pareto Chart”.
Configure your pareto chart as shown to the left.