Multi-Process Optimisation within Large Organisations
Tom Bevington
Introduction
There is increased interest in Australia in mechanisms to raise productivity, prompted by a persistent per capita recession: economic output per person has declined despite overall GDP growth. It therefore seems timely to look at how productivity is tackled, and to take this opportunity to introduce a very significant development to the proven XeP3 approach, for which an additional patent is now pending. The development enables organisations to get their staff to tackle efficiently the synchronisation of all processes within even the largest of organisations, in bite-sized chunks, and to optimise multi-organisation supply chains.
Productivity is measured as the rate of output per unit of input. Producing more from the same effort is the goal, and any staff and resources released can be deployed to more productive work. Increasing productivity reduces cost, shortens response times and improves the customer experience. In industry, productivity provides a protective moat against competition and, for many players, it is the foundation of business strategy. In the public sector, productivity means the tax dollar goes further, enabling front line staff, such as nurses and clinicians, to take care of more patients. Productivity gains enable real wage increases, while wage increases without productivity just stoke inflation.
Government plays an important enabling role because it can create an ecosystem which helps private and public sector organisations become more productive. Government policy can change the ecosystem so that everyone benefits, for example through investments in infrastructure to speed traffic flow or improve broadband speed and availability. Government policy settings can also make it easier for organisations to raise their productivity by reducing burdensome regulations, deregulating labour markets and so on. This article focusses on the organisational productivity opportunity, which management can sponsor, and which the staff who actually do the work can be guaranteed to deliver.
Addressing Process Waste and Inefficiencies
A web search on productivity will yield dozens of suggestions of what to do: take breaks, avoid multitasking, clean your workspace, prioritise workflow organisation, delegate, develop employees, encourage team collaboration, identify your most productive hours, limit distractions, optimise the workplace environment, apply the five-minute rule, and so on. The real questions are how to do it and where to look. These are not trivial questions: organisations and supply chains can involve thousands of people and hundreds of thousands of activities, all requiring close co-ordination to achieve synchronisation. It is therefore easy to make things worse, or to sub-optimise.
The most often used improvement approach is to select an area, the whole or part of a process, or even a whole organisation, and establish by interview where the problems are perceived to lie. This usually takes one to two months to determine and agree the target area(s). Then a concerning pivot occurs: the selection of one or more points for attention changes what began as a holistic, end-to-end assessment into targeting improvements at a single point, or at best a few isolated points. Melbourne University research[1] finds this early, myopic narrowing of focus the first of three key reasons why the changes ultimately implemented are transitory and of limited value.
One consequence of this narrowing is that data is mostly gathered only around the targeted points. Once the problems are understood, it becomes costly and time consuming to trace the identified problems back to their causal areas. Once a causal area is located, it gets even harder: it is necessary to get people, usually located in different parts of the business or the supply chain, to understand the need for change, to think through how they could restructure their tasks to help, to agree to make the change, and then to deploy and maintain it. Even when the needed changes can be agreed and deployed, the scene is set for gradual decay unless the changes are fully understood by management. Gaining this management commitment requires even more effort and elapsed time. The Melbourne University research identified the failure to establish a mechanism to routinise co-operative change, i.e. a supportive socio-technical system, as the second important reason that implemented changes are usually transitory and of limited value.
There are two commonly adopted ‘fixes’ to bypass the need to convince those causing the problems to change. The most likely to succeed is to make a technology investment to enforce and lock in the change. This generally means delays of months or even years, plus capital expenditure. The second, a more immediate and generally lower cost solution, is to accept that change elsewhere in large organisations and multi-organisation supply chains is just too difficult to achieve, and so to add new work steps to intercept and deal with errors or omissions as and when they are found – in other words, to formalise the work around activities by checking every incoming transaction and then correcting it. The organisation thereby replaces the informal work arounds with formal checking and rectification steps, accepting the inevitable added costs and process delays in order to gain predictability and lower risk. In risky areas, such as medical procedures, this formalisation of work arounds no doubt goes a long way to explaining why, between 2019 and 2023, staff levels in the UK National Health Service increased by 17% while productivity fell by 11.4%, and why surgeon work activity declined by 12% over the same period while enjoyment was crushed by frustration[2].
The work around activities, so often dismissed as insignificant, are essential to address routinely occurring glitches and keep the work flowing. They also hold the key to fast and effective productivity gains. Readers familiar with XeP3 will be aware that this apparent conundrum is easily addressed. Research[3] reveals that as many as two thirds of the detailed activities routinely undertaken in an organisation, while absorbing as much as 49% of everyone’s time, are omitted from conventional process analysis. This research found that the missing activities are mainly the informal work arounds, invented of necessity by the staff themselves, essential just to keep the process going. They deal with process glitches – chasing up missing data, correcting errors – plus the idle time teams often incur waiting for a glitch to be resolved.
These informal work arounds exist below the radar – a point which will be referred to again later, because it is the key to achieving multi-process optimisation. The inability of conventional approaches to capture this ‘atomistic’ data is the third factor identified by the Melbourne University research behind implemented changes being transitory and of limited value.
What these informal activities always do is: introduce delays; absorb staff and management time; increase costs; reduce customer service; and distract and frustrate staff, which in turn can introduce risk. Of real concern is that these largely unrecorded work around activities, termed process noise, on average absorb 36.1%[4] of everyone’s time – almost 2 days in a 5 day working week. In an organisation employing 1,000 people, that means the equivalent of 361 full time staff are being paid to do work arounds and deal with the consequences of the glitches. Put harshly, the wasted 36.1% of total staff time is the price management accepts: a higher cost of doing business, delays which damage customer service, and staff frustration. Nor is the situation getting better: the figure a decade ago was 33.6%.
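The arithmetic behind these figures can be checked in a few lines (the function name and structure are illustrative only, not part of XeP3):

```python
def noise_cost(headcount: int, noise_fraction: float) -> tuple[float, float]:
    """Translate a process-noise share of staff time into full-time
    equivalents lost and days lost per person per five-day week."""
    fte_lost = headcount * noise_fraction   # staff effectively paid to do work arounds
    days_per_week = 5 * noise_fraction      # days lost per person per 5-day week
    return fte_lost, days_per_week

fte, days = noise_cost(1000, 0.361)
print(f"{fte:.0f} FTEs, {days:.1f} days/week")  # 361 FTEs, 1.8 days/week
```

The same two-line calculation scales to any headcount: at 5,000 staff and the same 36.1% noise figure, over 1,800 full time equivalents are absorbed by work arounds.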
The XeP3 data can be captured quickly and efficiently by engaging those doing the work. It is efficient because each person needs to input only what they uniquely know – the activities which they themselves do, including the 2 in every 3 work around activities they may well have invented to deal with the glitches they encounter. Once the data has been captured, the XeP3 tool is used to assemble the database into strategic processes, which are then used to locate precisely the many causes of errors and omissions and their consequences, and to allocate the time spent on work around activities (the process noise) to each and every cause.
Applying Pareto then provides intimate detail of the 10 or 15 causal factors which must be addressed to remove most of the waste, cost, delays and frustration. In most organisations about half of the process noise, 15-20% of total time, can be quickly released back to the organisation by implementing low cost behavioural changes at each of the pinpointed sources. At the same time, measures are established to ensure each cause is eliminated so that the new state endures. These are team-located measures which ensure delivery of particular organisational KPIs.
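The Pareto step can be sketched as follows. This is a minimal illustration with hypothetical cause names and hours; XeP3's actual allocation and analysis are its own, but the principle – rank causes by the work-around time attributed to them, then take the smallest set covering most of the noise – is as shown:

```python
def top_noise_drivers(noise_by_cause: dict[str, float],
                      coverage: float = 0.8) -> list[tuple[str, float]]:
    """Rank causes by attributed work-around time and return the smallest
    set of causes accounting for `coverage` of the total noise."""
    total = sum(noise_by_cause.values())
    ranked = sorted(noise_by_cause.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for cause, hours in ranked:
        selected.append((cause, hours))
        cumulative += hours
        if cumulative >= coverage * total:  # stop once coverage is reached
            break
    return selected

# Hypothetical weekly hours of work-around time, allocated per cause
noise = {"missing referral data": 120, "wrong patient details": 80,
         "equipment not located": 40, "late arrivals": 25, "other": 15}
for cause, hours in top_noise_drivers(noise):
    print(f"{cause}: {hours} h/week")
```

With these illustrative figures, three of the five causes account for over 80% of the noise – the usual Pareto picture, where a handful of pinpointed sources carry most of the wasted time.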
Approaching Multi-Process Optimisation
The first need is to be able to synchronise the total system and avoid the local optimisation that arises from a single point focus. To illustrate, consider the dilemma of a clinician, an experienced surgeon. He or she will have no time to investigate, and little influence on, what happens elsewhere in the hospital. He or she decides that the extra cost and delays from intercepting some common glitches are worth the theatre team's investment of time in order to reduce the risk of wrong site surgery and, of course, the potential lifetime care consequences for adversely affected patients. The extra costs and delays, while mitigating risk, obviously reduce total system capacity and sub-optimise the hospital system. The need, surely, is to be able to link the glitch, and therefore the time wasted in work arounds, to the location where the cause lies, and purge it from the system.
The second need is for the synchronisation to be applicable to large, complex organisations. Public hospitals are perhaps an extreme example: they are large, and their processes are complex and difficult to map and measure because the patient journey is non-linear. The next step in a patient's journey will often be determined by the results of the present step and could be anywhere within or outside the hospital. Now add in the complexities of thousands of employees, multiple buildings and split sites, 24/7 operation with critical shift handovers, hundreds of disciplines, multiple external service providers, high risk environments, and split accountabilities in which management is responsible for the processes and clinicians for the safety of patients. Conventional process mapping, developed for linear processes, is quickly overwhelmed by the detail needed and the huge number of linkages arising from mapping each glitch.
Staying with the example of a hospital with, say, 5,000 staff, it is technically feasible using XeP3 for data to be collected from everyone in the hospital in 2 or 3 elapsed weeks, then linked into the processes, and solutions developed, agreed and deployed using XeP3's socio-technical deployment capabilities. Practically, however, this would be a logistical nightmare. The obvious solution is to approach the task process by process – an approach already tried, tested and written up as a case study using XeP3 in Melbourne's Peter MacCallum Public Hospital. The first application there was in Imaging, to resolve its problems and familiarise the in-house team with the tools. Delays of 29 days were reduced to same day, capacity was increased, and significant cost and staff time savings were realised.
The team then continued on their own to successfully apply XeP3 in the other hospital processes.
The application in Imaging illustrates how XeP3 multi-process optimisation works. The imaging process in the Peter Mac, in common with many hospital processes, serves a number of different ‘customers’. These include patients scheduled ahead through regular appointments, to which are added random demands for urgent imaging which could come from anywhere in the labyrinthine hospital system. The objective of the engagement was to document and measure the current reality, and then to agree and deploy the changes needed to purge the main work arounds at source, in both the scheduled patients stream and the urgent, apparently random, demands stream.
A recent and relevant development of XeP3 – the way people collect the data – was employed, enabling the location of the glitch-causing activities to be pinpointed with absolute precision. This in turn allowed just the relevant staff in these areas to be drawn in immediately. The input required from them was minimal – an hour or two. They were required to input only a tiny fraction of their work: the relevant activities associated with commissioning scans, the atomistic data needed to complete the total process. Although their participation was tiny, they became full partners in the Imaging socio-technical system to purge the major noise drivers from the Imaging system. Contributing their data gave them the ability to read the assembled end to end process display and pinpoint where they could have real impact. The benefits to them, beyond faster service, also became very obvious: removing the irritating enquiries and delays (rework for them) which bounced back from the Imaging team as Imaging sought the input needed to fix the glitches.
The participants then had the option either to use the behavioural change indicators (BCIs) themselves to monitor continued compliance with the deployed solutions, or to leave it to the recipients in Imaging to use a BCIs checklist as originally envisaged by Mr Tony Giddings[5] in the UK – i.e. to confirm continued compliance, not to initiate time wasting work arounds.
The team then continued on to tackle other priority processes. They helped the staff eliminate the causes of the noise which mattered, both in the new target process itself and in the parts of the hospital which required service from it. As in Imaging, both parties benefited from systematically purging the important glitch-caused noise. Logically, therefore, if all the processes in a 5,000 plus person hospital system were tackled in this way, then all the significant noise causing practices, the main noise drivers, would be purged – delivering multi-process optimisation progressively, in bite-sized chunks.
The essential points to make are:
The Imaging engagement optimised the process for both the ‘Regular’ stream of imaging patients and the ‘Urgents’ by delivering adjustments in work practices in the Imaging team and in all other parts of the hospital which were causing significant noise in Imaging.
Both the ‘Regular’ stream of patients and the ‘Urgents’ benefited from the reduction in delays from 29 days to consistently same day.
The clinicians, nurses and staff in the Imaging team and the urgent requestors benefited from the removal of the time wasted and the frustration the noise caused both parties.
The changes targeted the elimination of the noise – the many glitch work arounds – and, by applying Pareto, only those which mattered. They did not require any changes to the productive or professional work activities in either location – they just removed the clutter, waste and frustration.
The process-by-process optimisation of the total system progressively removes waste and delays from the entire organisation.
Glossary
Glitches
a generic term to describe non-conformities in business processes. They include the need to: search for missing information, equipment and staff; re-schedule to deal with latecomers; and correct errors and omissions.
Work arounds
the steps or activities staff need to perform to deal with each glitch. These include phoning around to locate something or someone, searching for missing information, correcting errors and omissions, and dealing with the consequences – for instance phoning apologies, replacing damaged items and compensating customers.
Noise
or business process Noise: the amount of time spent doing rework. To be useful, Noise needs to be associated with, and accumulated by, each and every cause.
Noise drivers
the activities or tasks which, by error, omission or orientation, cause a glitch to occur elsewhere in the organisation.
[1], [4] “Solving the paradox of Lean Management’s low success rate”, Samson et al., Australian Journal of Management, February 2025
[2] Independent investigation of the NHS in England, Professor Lord Darzi, Department of Health and Social Care, 12 September 2024
[3] Implementing Strategic Change, Bevington and Samson, Kogan Page, London
[5] Tony Giddings: retired Consultant Surgeon, former member of the UK National Clinical Advisory Team, and former specialist advisor to the UK Parliamentary Enquiry into Patient Safety.
The Author…
Tom Bevington
FOUNDER
Tom is the founder of XeP3 and one of Australia’s best thinkers and leaders in delivering fast planning and deployment of strategically focused business process change. He has international partner-level experience in management consulting with The Boston Consulting Group and AT Kearney. He is co-author, with Professor Danny Samson, head of Operations Management at Melbourne University, of Implementing Strategic Change.