In one of my posts on BPM cycles, I wrote that I see BPM as activities happening on different levels:
1. Execution of a process for a case
2. Live monitoring and managing of cases
3. Improving the process
4. Adapting the process landscape
To act on these levels, you first need to understand your processes. When I help organizations understand their processes, I always take a look at the different enablers of a process. Think about workflow, people, software, data, governance etc. In short: all the aspects needed to make a process perform.
The enabler I’d like to focus on in this post is data. You’ve probably read millions of articles that tell you data is "the new gold". You also might remember the roaring days of Big Data, or attended a conference where a hip data scientist told you that nowadays we produce more data in a minute than our grandparents did in a whole millennium.
Cool, but what does it have to do with your processes? A lot. In many processes, data (or should I call it information?) plays a big role. But data in general doesn’t mean anything. You need data to fulfill a need. That’s why I always make a distinction, just like with the cycles, between data on different levels to manage your cases and processes:
Data needed to execute the process for one case
Data needed to manage all the cases currently in the process
Data needed to improve the process(design)
Data as a link between different processes
I will take my process “Deliver pizza” again to explain what I mean by the levels mentioned above.
Data needed to execute the process
In the process ‘Deliver pizza’ the final result is a delivered pizza, but you need information to execute the different steps to reach that result.
To make the pizza, you need information like:
What extra toppings does the customer want?
To deliver the pizza, you need information like:
Requested delivery time
If execution of the process is supported by software, these will probably all be fields on nice digital forms. Or, when the customer does most of the work, they can be fields in a pizza-ordering app.
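If you like, you can think of that case data as a simple record. A minimal sketch in Python, with purely illustrative field names (none of these names come from the process itself):

```python
from dataclasses import dataclass

@dataclass
class PizzaOrder:
    """Data needed to execute the process for one case (fields are illustrative)."""
    customer_name: str
    delivery_address: str
    pizza_type: str
    extra_toppings: list            # the information "Make pizza" needs
    requested_delivery_time: str    # the information "Deliver pizza" needs

order = PizzaOrder(
    customer_name="Anna",
    delivery_address="Main Street 1",
    pizza_type="Margherita",
    extra_toppings=["mushrooms"],
    requested_delivery_time="19:30",
)
print(order.requested_delivery_time)  # → 19:30
```

Whether these fields live on a paper form, a digital form, or in an app doesn’t change the data itself; only the way it arrives at the process.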
The examples above also make clear that the step “Register order” is only needed to get that information. That step doesn’t add value to making or delivering the pizza. It only provides the process with the needed information. When you are developing a process, you have to think about the possible ways that information can arrive at the process:
So in the end it’s about the data, but these days customer service also means making sure the customer doesn’t have a hard time providing you with that data.
The above is all about data for one individual case. But hopefully this pizzeria receives more than one pizza order, which brings me to the next level of data in processes.
Data needed to manage all cases in the process
Assume that at a certain time in the evening there are 8 cases in the process. Executors are working on the individual cases, but someone has to take the role of process manager to ‘keep track of all the cases’. So you need information about “how cases are doing”.
To me that means knowing the status of all the cases in the process and how well “you meet the promise”.
And that promise is different from product to product. In my earlier post I assumed it was about delivery time (within 50 minutes). So, to manage all cases, information about this process goal is needed:
This is just process monitoring, but that’s the key to “doing what you promise”. It’s what I call “on the playing field”, because while cases are still in the process there is time to act.
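As a small sketch of what such monitoring boils down to: given the start time of each case in the process and the 50-minute promise, you can compute how much time is left to act. The case IDs and timestamps below are made up:

```python
from datetime import datetime, timedelta

PROMISE = timedelta(minutes=50)  # the delivery-time promise from the post

# Hypothetical cases currently in the process, with their start times
cases = {
    "order-101": datetime(2021, 4, 2, 18, 5),
    "order-102": datetime(2021, 4, 2, 18, 20),
    "order-103": datetime(2021, 4, 2, 18, 42),
}

now = datetime(2021, 4, 2, 18, 50)  # a fixed "current time" for the example

# "On the playing field": for running cases there is still time to act
for case_id, started in sorted(cases.items()):
    time_left = PROMISE - (now - started)
    flag = "act now!" if time_left < timedelta(minutes=10) else "on track"
    print(case_id, time_left, flag)
```

A real workflow tool would of course show this on a dashboard rather than print it, but the information need is the same: status per case, measured against the promise.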
One other thing to take into consideration is whether or not you want to share this information with the customer. It seems to be “the age of the customer”, so do you want to let him/her know how the case is progressing? Take that into consideration during process design.
This monitoring level is all about trying to deliver all pizzas on time. After you have done this for a while, it might also give you an idea of how well the process in general is doing. Does the process perform as we designed it to?
This brings me to level 3 of information in processes.
Data needed to improve processes
This level of data is about traditional process improvement. Does the process perform as we designed it, or does it need to be improved? It’s the good old Village People classic “PDCA”.
But to know what “improved” means, you first have to define “good”. In the end, no company was started to improve processes; companies were started to perform them well.
In my simple example, good means "delivered within 50 minutes". But in real life more goals might apply, like:
Or internal goals like
Profit margin on each pizza > 34%
(and not to forget all the goals that could be derived from law)
To check how well the process is doing (or better: “has done”), data is needed about process performance. That could come from measurements, Excel spreadsheets, or management information within a workflow tool.
In fact it doesn’t matter, as long as the data tells you something about the performance of the process. In recent years, process mining has also become a great help. Process mining is technology that extracts data from the systems you use to execute your processes and turns it into a process-oriented view.
Most process mining tools can show you that data in different ways. For example, a workflow picture that shows the average processing time of activities and the average waiting time between the steps on connections:
After a little calculating you’ll see that the average throughput time is 50 minutes and 33 seconds: 33 seconds more than the goal of 50 minutes.
But averages don’t tell you much. Most process mining tools also offer the option to show minimum and maximum values. This can give you an indication of the variance in the process.
The next picture shows minimum and maximum processing time for activities and minimum and maximum waiting time for connections.
Doing the math (process mining tools can do it for you, if you like) will tell you that the fastest case took 38 minutes and the slowest case took 1 hour and 25 minutes to finish. Is this good?
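If you’d rather do the math yourself, the same kind of numbers can be computed directly from an event log. A minimal sketch with a hypothetical three-case log (only the fastest and slowest times, 38 minutes and 1 hour 25 minutes, are taken from the example above; everything else is made up):

```python
from datetime import datetime, timedelta

# Hypothetical event log: (case id, first event timestamp, last event timestamp)
log = [
    ("case-01", datetime(2021, 4, 2, 18, 0), datetime(2021, 4, 2, 18, 38)),
    ("case-02", datetime(2021, 4, 2, 18, 5), datetime(2021, 4, 2, 19, 30)),
    ("case-03", datetime(2021, 4, 2, 18, 10), datetime(2021, 4, 2, 19, 2)),
]

# Throughput time per case = last event minus first event
throughput = [end - start for _, start, end in log]

print("fastest:", min(throughput))   # → fastest: 0:38:00
print("slowest:", max(throughput))   # → slowest: 1:25:00
print("average:", sum(throughput, timedelta()) / len(throughput))
```

This is essentially what a process mining tool does for you, just at the scale of thousands of events instead of three cases.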
We could take a look by using charting functionality of process mining and show the throughput time of all cases in a graph:
Now you see that 7 of the 23 cases had a throughput time of more than the 50-minute goal. Of course it’s your own choice whether you consider this bad or not. At least for the 7 customers whose pizza arrived too late, there is no time to fix it anymore, because we are looking at facts from the past.
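Counting the cases that missed the goal is the same kind of calculation. A sketch with 23 hypothetical throughput times, chosen so that 7 of them exceed the 50-minute goal:

```python
GOAL_MINUTES = 50  # the process goal: delivered within 50 minutes

# Hypothetical throughput times (in minutes) for 23 finished cases
throughput_minutes = [38, 42, 44, 45, 46, 47, 47, 48, 48, 49,
                      49, 49, 50, 50, 50, 50, 52, 55, 57, 60,
                      63, 70, 85]

too_late = [t for t in throughput_minutes if t > GOAL_MINUTES]
print(f"{len(too_late)} of the {len(throughput_minutes)} cases missed the goal")
# → 7 of the 23 cases missed the goal
```

A histogram of these values is exactly the kind of chart a process mining tool would draw for you.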
Besides that, all of the above process performance information only shows symptoms. To improve the process, you need to find the cause.
Some of the above information might already give some indications. You saw that cases spend quite some time waiting, for example between “Pack Pizza” and “Deliver Pizza”.
Still a symptom, but now you could do some research on why cases spend (on average) 8 minutes waiting before they get delivered.
Most process mining tools also offer functionality to show the “flow of cases” in animations. This is nothing more than a kind of replay of the log files, but experience shows that it makes bottlenecks in the workflow more visible:
This level 3 of process information is all about good old process improvement. Level 2 information is what I would call daily process (or better: case) management and on level 1 we saw the information needed for individual cases.
In process projects I also consider another level of data: the data that flows between processes.
Data that flows between processes
Most organizations have more than one process. For example, the pizzeria might have a process for pizza delivery and one for pizza takeaway. They probably also have a process to keep enough inventory (of tomatoes, dough, boxes etc.).
Data flows between these processes, because when you make 50 pizzas in an evening, that changes the inventory levels. So some data comes out of processes (50 pizzas made) and affects other data (the inventory of dough).
So “Deliver pizza” and “Keep inventory on desired level” are two processes that have a relationship. That relationship is not the flow of a case; it is based on data generated in those processes:
So processes put data into “databases” and other processes use information from those databases. You can probably think of many examples, such as customer information that is updated in one process and used in other processes.
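A tiny sketch of that idea: two processes that never exchange a case, but are linked through a shared “database” (here just a dictionary; all names and numbers are made up):

```python
# A shared "database" linking two processes
inventory = {"dough": 60, "tomatoes": 100, "boxes": 80}

def deliver_pizza(inventory):
    """The 'Deliver pizza' process: each pizza made consumes inventory."""
    inventory["dough"] -= 1
    inventory["boxes"] -= 1

def keep_inventory_on_level(inventory, minimum=20):
    """The 'Keep inventory on desired level' process reads the same data
    and decides what needs reordering."""
    return [item for item, amount in inventory.items() if amount < minimum]

# An evening in which 50 pizzas are made...
for _ in range(50):
    deliver_pizza(inventory)

print(keep_inventory_on_level(inventory))  # → ['dough']
```

Neither process “calls” the other; the performance of “Deliver pizza” simply shows up as data that “Keep inventory on desired level” acts upon.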
This level 4 of data sits more at an organization (or process architecture) level, but it makes clear that the performance of one process can influence another.
Data. An important enabler of process performance, I think.
But not the only one. That’s why I always like to take processes as a base for looking at organizations.
Because processes have many different aspects in them. Not only blocks and arrows (and if you like BPMN, also a few circles). That’s what makes it interesting to me.