In a recent blog post we explored the business value of monitoring due date adherence, highlighting the importance of meeting target delivery dates but also of ensuring that predefined lead times are an accurate reflection of QC test durations. In this post we will drill down further by considering the value of demonstrated performance analysis as a metric for assessing the quality of master data estimates.
When work requests are sent to a high-volume, high-throughput environment (such as a QC lab), the Planner must:
- Confirm which activities should be planned for each work request;
- Estimate the duration of each activity to determine how much time will be required for each work request;
- Identify the optimal distribution of resources to meet the delivery commitments.
Where certain work requests can be handled in parallel – for instance, analyzing multiple samples together in one campaign run – the Planner must also:
- Assess the (e.g. equipment) capacity to perform the simultaneous work;
- Select the optimum fill rate to balance volume against speed; and
- Estimate the revised work duration based on that fill rate.
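The capacity/fill-rate trade-off described above can be sketched with some quick arithmetic. The function names and the simple runs × duration model below are illustrative assumptions for the sake of the example, not Binocs's actual scheduling logic:

```python
# Hypothetical sketch of the campaign fill-rate trade-off.
# Model assumption: each campaign run takes the same duration
# regardless of how many slots are filled.

def runs_needed(samples: int, fill_rate: int) -> int:
    """Number of campaign runs required at a given fill rate (samples per run)."""
    return -(-samples // fill_rate)  # ceiling division

def plan_campaigns(samples: int, capacity: int, run_duration_h: float):
    """Total analyst time for every possible fill rate up to device capacity."""
    options = []
    for fill_rate in range(1, capacity + 1):
        runs = runs_needed(samples, fill_rate)
        options.append((fill_rate, runs, runs * run_duration_h))
    return options

# Example: 30 samples on a device with 12 slots, 8 h per run
for fill_rate, runs, hours in plan_campaigns(30, 12, 8.0):
    if fill_rate in (1, 6, 12):
        print(f"fill rate {fill_rate:2d}: {runs:2d} runs, {hours:5.1f} h")
```

Running through a few fill rates like this makes the trade-off concrete: lower fill rates start analysis sooner but multiply the number of runs, while higher fill rates minimize total work at the cost of waiting for samples to accumulate.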
Why demonstrated performance analysis is business critical
Creating an accurate plan and schedule therefore depends on good-quality master data. Correctly estimating these key parameters is critical to accuracy because errors can quickly accumulate over time, especially when hundreds or thousands of work requests are processed each week.
Errors in master data can have a variety of causes; for instance, parameter values might be outdated or inaccurate because they were originally estimated:
- Based on an old way of working, for example before the workplace implemented an automated scheduling system like Binocs;
- When a new standard work activity was introduced, using a best-guess derived from similar processes;
- Using old equipment that has since been replaced;
- Based on performance in a different location or with a different team.
Master data errors can lead to situations where:
- A QC test method is configured with a lead time of 4 hours but has a demonstrated cycle time of 8 hours:
  - Analysts will only be halfway through the analysis when their next task is scheduled to begin;
  - This can quickly cause cascading bottlenecks that impact future tasks and potentially cause serious delays, requiring the plan to be recalculated.
- A configured lead time has been significantly overestimated:
  - Planning will be suboptimal, as work is completed early and no new tasks are planned for the relevant analyst until later in the day;
  - This can lead to serious productivity issues, as a half-hour overestimate on one method’s lead time can easily aggregate to many hours of lost time per analyst each month.
- A specific device (e.g. an HPLC) has a maximum capacity of 12 test slots for simultaneous analysis, but the master data is inaccurately configured:
  - If the master data expects an average campaign size of 12, planning will assume that all runs can wait until enough samples have been received to fill every slot; if the throughput of the test type is low, however, samples may need to be processed before the device is full. Planning will therefore underestimate the time required and potentially cause backlogs in sample processing.
  - Conversely, an expected campaign size of 1 may permit samples to be analyzed individually as soon as they arrive, but it will also dramatically reduce efficiency and lead to a plan with far more duplication of work than would realistically be accepted on the shop floor; such an underestimate therefore results in below-capacity planning.
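The aggregation effect in the overestimated-lead-time scenario can be checked with back-of-envelope arithmetic; the runs-per-day and working-days figures below are assumed purely for illustration:

```python
# Back-of-envelope check of the half-hour-overestimate claim
# (runs per day and working days per month are illustrative assumptions).
overestimate_h = 0.5   # configured lead time minus actual cycle time
runs_per_day = 2       # how often one analyst runs the method
working_days = 20      # working days in a month

lost_h_per_month = overestimate_h * runs_per_day * working_days
print(lost_h_per_month)  # idle analyst hours per month from one method
```

Even at a modest two runs per day, a single half-hour overestimate idles an analyst for roughly 20 hours a month, which is why small per-method deltas are worth chasing down.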
When a range of such errors is distributed across hundreds of different activities, planning optimization can be severely impacted, with knock-on consequences for productivity and business value.
But it’s not only about inaccurate estimates – comparing actual execution against predefined values can also help to identify specific methods or assets that are associated with outlier performance. This can further support operational excellence, for instance by:
- Flagging bottleneck equipment;
- Highlighting better performance in a specific team or lab, helping to identify improvements that can be rolled out to lower-performing groups.
Improving performance by monitoring master data
The key to maintaining good quality master data and efficient operations is therefore performance analysis: monitoring actual performance and regularly comparing average observed values against what has been configured. Some useful metrics that labs can use to gauge the accuracy of their master data include:
- Duration difference between actual and configured performance:
  - Absolute total values (for instance, the aggregated difference in hours across all runs) can help to identify specific methods, activities, teams or even analysts with a high delta between expected lead time and actual cycle time;
  - Relative average values (indicating the direction of the difference) can help to identify activities within a method that are consistently taking more/less time than expected.
- Campaign size difference between actual and configured performance:
  - Absolute total values (for instance, the aggregated difference in campaign size across all runs) can help to identify specific methods, activities, teams or even analysts with a high delta between expected and actual samples per campaign;
  - Relative average values (indicating the direction of the difference) can help to identify methods that are consistently using more/fewer campaign slots per run than expected.
- Total number of runs per period:
  - This is crucial for assessing the impact of any difference between expected and actual values: if a large per-test delta is identified but the relevant method is only performed twice per month, investigating it might be a lower priority than a method with a small delta that is performed multiple times daily.
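The metrics above can be sketched in a few lines of code. The field names and sample figures are illustrative assumptions rather than any real schema; the point is the distinction between the absolute total (magnitude), the relative average (direction), and the run count (weight):

```python
# Illustrative sketch of duration-delta metrics; method names and
# run data are made-up example values.
from collections import defaultdict

runs = [
    # (method, configured_lead_time_h, actual_cycle_time_h)
    ("HPLC assay", 4.0, 8.0),
    ("HPLC assay", 4.0, 7.5),
    ("pH check", 0.5, 0.4),
    ("pH check", 0.5, 0.45),
    ("pH check", 0.5, 0.5),
]

stats = defaultdict(lambda: {"abs_total": 0.0, "signed_total": 0.0, "runs": 0})
for method, configured, actual in runs:
    delta = actual - configured
    s = stats[method]
    s["abs_total"] += abs(delta)    # magnitude of the discrepancy
    s["signed_total"] += delta      # direction: positive = slower than configured
    s["runs"] += 1

for method, s in stats.items():
    avg = s["signed_total"] / s["runs"]  # relative average, sign shows direction
    print(f"{method}: total |delta| {s['abs_total']:.2f} h, "
          f"avg delta {avg:+.2f} h over {s['runs']} runs")
```

The same aggregation pattern applies to campaign-size deltas; only the measured quantity changes. The run count then weights which deltas deserve investigation first.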
By staying on top of master data and ensuring both that estimates are accurate and that actual performance is efficient, labs can significantly streamline their planning and scheduling processes.
How Binocs can help
Are you looking for ways to reduce costs associated with your QC operations? Binocs comes equipped with a range of standard KPI dashboards that have been designed to help labs stay on top of their performance metrics, identify areas of improvement and take action. You can use our Business Case Calculator to get a ballpark estimate of just how much cost reduction you could achieve, taking into account your unique lab parameters.
Other related content
- KPI dashboards in QC are crucial as they enable lab personnel to see relationships between different metrics and make informed decisions about how to optimize lab performance.
- Exploring the value of lab KPIs: asset utilization metrics
- Exploring the value of lab KPIs: due date adherence – by monitoring due date adherence metrics and making operational adjustments, teams can optimize their service level, making it a crucial lab KPI.
Karen Van Den Bossche
Following her degree in Bioengineering, Karen spent 13 years working at Siemens as a LIMS specialist and R&D and PLM consultant in laboratories at global companies including Milcobel, Cargill, Merck, BioNTech, P&G, and many others across a range of industries. A great enthusiast for travel, exploration, and learning, Karen joined Bluecrux as a Solution Manager in 2021 and has since spearheaded the journey towards our new suite of Binocs dashboards and KPI reporting tools.