Monday, May 30, 2016

Process models: really helpful for process improvement?

"You have to model your AS-IS processes". How many guru's didn't tell us that? 
Oh yes, it is always nice to know what is going on in your processes at this moment.  I am not sure AS-IS process modelling will help you with that. 

Dynamics of Execution 
On Twitter I regularly talk about 'the dynamics of execution', and about how hard it is to express those dynamics in a process model.
Process models can act as a working instruction or as the start of a design for an execution system, but if you use them in a process improvement project (and they are quite often used for that, I was told), I think they are often too far from reality. Too far from reality to really understand why a process doesn't perform.
In this story I'll try to explain what I mean by that. My thoughts were triggered by some conversations with people who are involved in hospital processes. They acknowledge that process improvements should really be possible, but at a certain point my brain started to boil when I tried to cover all the aspects that make up hospital processes.
In those environments you have to think about:
  • The (sometimes upfront unknown) process through which the customer (patient) ‘flows’
  • The availability of equipment, rooms and other facilities (the logistics)
  • Availability of care specialists
  • Planning and time that connects everything together
These are the aspects I mean when I talk about 'the dynamics of execution'. I have no clue how to capture that in process models, but I will give it a try.

It's not my ambition to come up with a new modeling method or some kind of new 'Manifesto for process modeling', but my goal is to make clear:
  • What I mean by 'dynamics of execution'
  • Why I think it is important to be aware of those dynamics
  • Why it is hard to express them in process models as I often see them
But, to keep it a little light, I will not use hospital examples, but the good old Pizzeria.
What process models often look like
Many process models I’ve seen look something like:

  (I know, no formal BPMN. Complaints can be sent to
This is a very understandable process model (although, actually, it's only the workflow of a process) and it can serve very well to explain to an employee or customer what happens when a pizza is ordered.
This could even be the start of the design for a future process if you think this is the best way to process pizza orders.
In my opinion, models like above are only suitable for
  • Things like working instruction (this needs to be done when a new order comes in)
  • Designing systems to support processes ‘as they should run’ (Of course, to make it technically work a lot of objects need to be added)
  • Making auditors happy
But if this were a model of the current process, it would be worthless for determining why the process is performing well or not. It doesn't tell me why orders are delivered too late or why pizzas burn in the oven.
To find out, you have to add information to the process model. Maybe then you can tell something useful about the causes of bad process performance. And based on that, implement some possible improvements.
Extension with 'who is doing that job?'
To make clear who is executing a step in a process, you often see swim lanes added. In that case the workflow is ‘matrixed’ with the ones who execute those steps. For the above model, it could look like this:

But, what's the added value of this if you want to understand why the process doesn't perform? Not so much, I think.
You can see who (in theory) is responsible for each step, but you don’t see, for example:
  • How many chefs are available at what moment?
  • How much time do people spend on executing the steps?
If you want to say something about the performance of the process, these aspects should be visible in the model. I'll try that a little later, but another thing that is important for understanding the performance of a process is the number of cases it has to deal with.
1 pizza order a week is not the same as 100 a day. This dynamic behavior should also be made clear.
Process mining/Simulation techniques
With process mining or simulation technology, you can visualize cases flowing through the process.
The ‘balls’ (as a representation of cases) try to make this clear:

These kinds of techniques might give you an idea of bottlenecks in a process. But be aware that bottlenecks are only symptoms. To understand what causes these symptoms, you need to dive deeper into the process.
In the above picture, you see that 3 orders are being registered, but only one is being prepared. This causes a waiting line. How is that possible?
When you look at hospitals, most of the time it is caused by limited resources.
A surgeon can only do one surgery at a time. And while she is doing surgery, she can't do consultations. In that case the throughput of patients is limited by the number of employees. You hardly see this aspect of reality in process models. So, in one way or another, it would be cool if you could enrich process models with executor availability.
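That resource limit is easy to see in a tiny simulation. Below is a sketch of my own (arrival and service times are invented, this is not from any real hospital data): a single-step queue served by a given number of executors, showing how waiting times explode when there are too few of them.

```python
# Toy simulation: how the number of executors limits case throughput.
# All numbers are made up for illustration.
import heapq

def simulate(arrivals, service_time, executors):
    """Return per-case waiting times when `executors` people serve
    cases arriving at the given times (single step, first come first served)."""
    free_at = [0.0] * executors   # each entry: when that executor becomes free
    heapq.heapify(free_at)
    waits = []
    for t in arrivals:
        start = max(t, heapq.heappop(free_at))  # wait if nobody is free yet
        waits.append(start - t)
        heapq.heappush(free_at, start + service_time)
    return waits

# Ten patients arrive every 10 minutes; each surgery takes 25 minutes.
arrivals = [i * 10 for i in range(10)]
one_surgeon = simulate(arrivals, 25, executors=1)
three_surgeons = simulate(arrivals, 25, executors=3)
print(max(one_surgeon), max(three_surgeons))  # worst waiting time per scenario
```

With one surgeon the queue keeps growing; with three, nobody waits at all. That dependency on executor count is exactly what a static workflow picture doesn't show.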
Executor availability in a process model
When I was thinking about how to add the availability of executors to a process model, I thought it should be some kind of matrix of swim lane vs. number of executors vs. step.
Assume that there are 3 waitresses, 1 chef and 2 deliverers in the 'Deliver pizza' process. The model could look something like this (I just made it up; it's not about the model, remember?):
(there is a little typo in the picture I still need to fix; one of the 'Waitress 2' labels should be 'Waitress 3')
Combined with cases
When you combine the above model with the balls (pizza orders), maybe it becomes a little clearer why there are bottlenecks in the process.

As you can see, at this moment no new orders can be registered, because the 3 available waitresses are already working on an order.
Besides that, you see something that is really hard to express in a model: there is only 1 chef, who has to execute 3 sequential steps for an order. When he is packing, he can't prepare another pizza. How could you visualize this dependency?
Maybe by using some colors to express that they share one executor?

I think that is not very intuitive, so another way could be to say that the work of the chef is some kind of block, which can only contain one case at a time:

As you can see, it’s a dynamic aspect of a process that is hard to express in a model. But, it’s an important one to see why a process isn’t performing.
In fact it is some kind of on/off switch between activities; when one is executed, the others are idle.
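That on/off switch can also be sketched with a bit of arithmetic. This is my own toy comparison (step durations invented): one chef doing both steps versus a chef who hands packing off to a separate packer.

```python
def single_chef_total(orders, prepare, pack):
    # One chef executes both steps, so the steps never overlap across
    # orders: while he packs, preparation is "switched off".
    return orders * (prepare + pack)

def chef_plus_packer_total(orders, prepare, pack):
    # The chef only prepares; a separate packer packs. The chef can start
    # the next order while the packer handles the previous one.
    # Valid as long as prepare >= pack (the packer never builds a queue).
    return orders * prepare + pack

# 3 orders, 10 minutes preparing, 2 minutes packing (made-up numbers).
print(single_chef_total(3, 10, 2))       # shared chef: steps block each other
print(chef_plus_packer_total(3, 10, 2))  # pipelined: last pack overlaps nothing
```

The difference grows with every extra order, which is why a shared executor across sequential steps is such an important dynamic to see.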
People are not the only scarce resource
In hospitals, assigning scarce specialists is not the only challenge. The limited availability of facilities like medical equipment or operating rooms also has a big influence on the performance of processes.
You can have 10 surgeons, but if you only have 1 operating room, that will be the constraining factor.
In administrative processes you can think of systems (or better: licenses) of which only a few are available. These constraints should also be visible in the model. At the pizzeria it might be the case that there is only one delivery car (and still 2 deliverers). I tried to visualize this in the following model:

To understand this, you need the 'dynamic balls' that represent the pizza orders. They are cut in half now, because for one order, a car and a delivery person are both needed.
This also shows an imbalance in the process. There is one deliverer too many, or one car too few. That means an opportunity for more revenue or lower costs (depending on how many pizzas are sold every day).
The above picture looks a little like a Petri net, as some of you might have seen before.
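In that Petri-net spirit, here is a toy sketch of my own (resource names invented): a delivery "transition" can only fire when both a deliverer token and a car token are available.

```python
def fire_delivery(marking):
    """Fire the 'deliver' transition if it is enabled: consume one
    deliverer token and one car token, produce one delivery in progress."""
    if marking["deliverer"] >= 1 and marking["car"] >= 1:
        new = dict(marking)
        new["deliverer"] -= 1
        new["car"] -= 1
        new["delivering"] += 1
        return new
    return None  # not enabled: one of the resources is missing

marking = {"deliverer": 2, "car": 1, "delivering": 0}
marking = fire_delivery(marking)  # first order leaves with car + deliverer
second = fire_delivery(marking)   # second deliverer is idle: no car left
print(marking, second)
```

The second call returns `None`: the scarcer resource (the car) is the constraint, exactly the imbalance the half-balls try to show.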
And when I assume that the restaurant has an oven where 3 pizzas can be baked at the same time, that part of the process could look like this:

And again, this shows the problem of only having 1 chef. He cannot put a pizza in and remove one from the oven at the same time (although, I’ve seen some chefs…).
This creates a risk of burning pizzas because the chef has no time to remove them from the oven in time.
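That burning risk is basically a timing calculation. A toy sketch of my own (all times invented): a finished pizza stays in the oven until the chef has a free moment to take it out.

```python
def removal_time(done, busy_intervals):
    """Earliest moment at or after `done` when the chef is free to take
    the pizza out. Busy intervals are (start, end) tuples in minutes."""
    t = done
    for start, end in sorted(busy_intervals):
        if start <= t < end:
            t = end  # chef is occupied, so the pizza waits in the oven
    return t

# Pizza done at minute 12; the chef is preparing another order from 10 to 18.
print(removal_time(12, [(10, 18)]) - 12)  # minutes of overbaking
```

With a second pair of hands (or an empty schedule), the overbake time drops to zero; with a busy single chef, it is only a matter of time before a pizza burns.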
And that brings me to another aspect that has a big influence on the performance of processes (as experienced by customers); planning and time.
Planning and time
The above picture showed the important aspect of time; when a pizza stays in the oven too long, it burns. This is very much related to planning.
Of course it only shows up in the real execution of processes and that’s why it is important, I think. What makes it a little more complex is that you also have to take the availability of employees into consideration.
During the weekend, the pizzeria might have 3 deliverers, while there is only one during weekdays. Let alone the fact that they might have a bad day every now and then.
So the process (and probably its performance too) can change from day to day. This dynamic aspect will never show up in a traditional process model.
And that is also the case in a hospital. Assume that there are 2 surgeons, but only one operating room. While one surgeon is doing surgery, the other one can do consultations with patients. Making these kinds of planning aspects clear in a process model: quite a challenge. Let me try…

Conclusion so far
Process models as I often see them have a certain purpose, but don't always help to understand the causes of bad process performance.
To get a little bit more understanding of process performance, you also have to know:
  • Number of cases flowing through the process (balls)
  • Resources
  • Dependency between resources
  • Aspects of planning (for example availability of employees)
Many aspects, but necessary in my opinion. If you want to show all this in process models, you need to add dimensions like time, cases and resources.
And even that is too simple, because in real life there are more aspects, like the availability of information, external parties that want you to be compliant, etc.
In this little story I tried to explain some aspects of real-life dynamics. I also tried to express them in models. But as I said, that was not my goal. Besides, it's not so easy either, as you've seen.
What I tried to question is whether models, as they are usually made, are really that valuable for improving processes.
Or would it be better to improve the process little by little while it is executed? Small adjustments after evaluating each (finished) case.
Because in the end, the dynamics of execution will appear, most of the time, during… execution.
Do you recognize these dynamics of execution and how do you cope with them during your process improvement projects?
Happy Processing!

Tuesday, May 24, 2016

I don't need better processes

I still see a lot of process improvement initiatives in organizations. 

Leading to process centers of excellence, improvement teams, value stream maps and fancy tools like process mining, 30-60-90 plans; all kinds of stuff that should help to improve processes.

Sadly, I also still see that, after an enthusiastic start, all these initiatives might die a slow, theoretical, to-be death.

I didn't do any scientific research, but I think that is quite often caused by a misunderstanding about what "managing by process" is about.

As noted in other posts; I am pretty sure, your customers don't care that you are improving your processes. 

They prefer that you execute your processes well; to deliver products and services that solve their problems. 

So, processes are just a means, not a goal. A means to deliver products and services, or to solve problems for the (process) customers.

I've sometimes seen, when that's forgotten, that processes end up in the hands of improvement enthusiasts, without a thorough awareness that only execution is what counts.

In theory, every process can be the best. But, happy customers don't exist in theory.

Oh yes, go ahead if you want to spend some time staring at process maps. But wouldn't it be an idea to start by making clear what stakeholders expect from a process? What promise does the process have to deliver?

A delivered pizza within half an hour? An insurance policy with 0 mistakes? A machine that can run 2 years without maintenance?  Those are all process results!

And maybe that raises awareness that a process is just a means. It should be "used" to deliver what you promise. So I would take that promise as a basis for process discussions. Make clear how well the process is performing. Try to make that process performance visible to everyone in the process.

And maybe then you discover process performance is not as desired. 

And yes, maybe then we could draw some blocks and arrows on the wall. 

Happy processing!

Monday, May 23, 2016

Does "the new gold" work for your processes?

In one of my posts on BPM cycles, I wrote that I see BPM as activities happening on different levels:
1. Execution of a process for a case
2. Live monitoring and managing of cases
3. Improving the process
4. Adapting the process landscape
To implement this, it might be a good idea to understand your processes. When I help organizations to understand their processes, I always take a look at the different enablers of a process.  Think about workflow, people, software, data, governance etc. In short; all the aspects needed to make a process perform. 
The enabler I’d like to focus on in this post is data. You’ve probably read millions of articles that tell you data is "the new gold". You also might remember the roaring days of Big Data or attended a conference where a hip data scientist told you that nowadays we produce more data in a minute, than our grandparents did in a whole millennium.
Cool, but what does it have to do with your processes? A lot. In many processes, data (or should I call it information?) plays a big role.  But data in general doesn’t mean anything. You need data to fulfill a need. That’s why I always make a distinction, like the cycles, between data on different levels to manage your cases and processes: 
  1. Data needed to execute the process for one case
  2. Data needed to manage all the cases currently in the process
  3. Data needed to improve the process(design)
  4. Data as a link between different processes
I will take my process "Deliver pizza" again to explain what I mean by the levels mentioned above.

Data needed to execute the process
In the process ‘Deliver pizza’ the final result is a delivered pizza, but you need information to execute the different steps to reach that result. 
To make the pizza, you need information like:
  • What pizza?
  • What size?
  • What extra toppings does the customer want?
To deliver the pizza you need
  • Delivery address
  • Requested delivery time
  • Telephone number
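As a sketch, all those fields could live in one simple order record. The field names below are my own invention, not from any real ordering system.

```python
from dataclasses import dataclass, field

@dataclass
class PizzaOrder:
    # Data needed to make the pizza
    pizza: str                              # what pizza?
    size: str                               # what size?
    extra_toppings: list = field(default_factory=list)
    # Data needed to deliver the pizza
    delivery_address: str = ""
    requested_delivery_time: str = ""
    telephone_number: str = ""

order = PizzaOrder("Margherita", "large", ["olives"],
                   "1 Main Street", "19:30", "555-0100")
print(order.pizza, order.requested_delivery_time)
```

Whether these end up as form fields, app inputs or phone notes, the point is the same: the process cannot start without this case-level data.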
If execution of the process is supported by some software, these will probably all be fields on nice digital forms. Or, when the customer has to do most of the work, they can be fields in a pizza-ordering app.
The examples above also make clear that the step “Register order” is only needed to get that information. That step doesn’t add value to making or delivering the pizza. It only provides the process with the needed information.  When you are developing a process, you have to think about the possible ways that information can arrive at the process:
  • Phone call
  • E-mail
  • Ordering website
  • App
So in the end it's about the data, but these days customer service also means that the customer doesn't have a hard time providing you with that data.
The above is all about data for one individual case. But hopefully this pizzeria receives more pizza orders, which brings me to the next level of data in processes.
Data needed to manage all cases in the process
Assume that at a certain time in the evening there are 8 cases in the process. Executors are working on the individual cases, but someone has to take the role of process manager to ‘keep track of all the cases’. So you need information about “how cases are doing”.
To me that means knowing the status of all the cases in the process and how “you meet the promise”.
And that promise differs from product to product. In my earlier post I assumed it was about delivery time (within 50 minutes). So, to manage all cases, information about this process goal is needed:

This is just process monitoring, but that's the key to "doing what you promise". It's what I call "on the playing field", because when cases are still in the process there is time to act.
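A minimal sketch of what "on the playing field" monitoring could look like (case ids, start times and the 10-minute margin are invented; the 50-minute promise is the one from earlier): flag the open cases that are close to breaking the promise while something can still be done.

```python
PROMISE = 50  # minutes, the delivery promise from the earlier post

def at_risk(cases, now, margin=10):
    """Open cases whose remaining time until the promise is within
    `margin` minutes (and that haven't broken it yet)."""
    return [cid for cid, start in cases.items()
            if PROMISE - (now - start) <= margin and (now - start) < PROMISE]

# Three open orders with their start times (minutes, made up).
open_cases = {"order-1": 0, "order-2": 20, "order-3": 38}
# At minute 45, order-1 has only 5 minutes of promise left.
print(at_risk(open_cases, now=45))
```

The output is the to-do list for whoever plays the process manager role that evening: these are the cases where acting now still helps.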
One other thing to take into consideration is whether or not you want to share this information with the customer. It seems to be the “The age of the customer”, so do you want to let him/her know how the case is progressing? Take that into consideration during process design. 
This monitoring level is all about trying to deliver all pizzas on time. After you have done this for a while, it might also give you an idea of how well the process in general is doing. Does the process perform as we designed it to?
This brings me to level 3 of information in processes.
Data needed to improve processes
This level of data is about traditional process improvement. Does the process perform as we designed it, or does it need to be improved? It’s the good old Village People classic “PDCA”.
But to know what "improved" means, you first have to define "good". In the end, no company was started to improve processes; they were started to execute them well.
In my simple example, good means "delivered within 50 minutes". But in real life more goals might apply, like:
  • Good taste
  • Warm
Or internal goals like
  • Profit margin on each pizza > 34%
(and not to forget all the goals that could be derived from law)
To check how well the process is doing (or better “has done”), data is needed about process performance. That could come from measurements, excel spreadsheets or management information within a workflow tool.
It doesn't really matter, as long as the data tells you something about the performance of the process. For a few years now, process mining might also be of some help. Process mining is technology that extracts data from the systems you use to execute your processes and turns it into a process-oriented view.
Most process mining tools can show you that data in different ways. For example, a workflow picture that shows the average processing time of activities and the average waiting time between the steps on connections:

After a little calculating you'll see that the average throughput time is 50 minutes and 33 seconds: 33 seconds more than the goal of 50 minutes.
But averages don't mean that much. Most process mining tools also offer the option to show minimum and maximum values. This can give you an indication of the variance in the process.
The next picture shows minimum and maximum processing time for activities and minimum and maximum waiting time for connections.

Doing the math (process mining tools can do it for you, if you like) will tell you that the fastest case took 38 minutes and the slowest case took 1 hour and 25 minutes to finish. Is this good?
We could take a look by using charting functionality of process mining and show the throughput time of all cases in a graph:

Now you see that 7 of the 23 cases had a throughput time longer than the goal of 50 minutes. Of course it's your own choice whether to consider this bad or not. At least for the 7 customers whose pizza arrived too late, there is no time to fix it anymore, because we are looking at facts that happened in the past.
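The arithmetic behind such a chart is simple enough to sketch. Below is a toy event log of my own (the timestamps are invented, not the real mining data): throughput time per case is just the last event timestamp minus the first.

```python
from datetime import datetime

# Minimal event log: (case id, activity, timestamp). Invented data.
log = [
    ("c1", "Register order", "2016-05-23 18:00"),
    ("c1", "Deliver pizza",  "2016-05-23 18:38"),
    ("c2", "Register order", "2016-05-23 18:05"),
    ("c2", "Deliver pizza",  "2016-05-23 19:30"),
]

def throughput_minutes(log):
    """Per case: minutes between its first and last logged event."""
    spans = {}
    for case, _, ts in log:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        lo, hi = spans.get(case, (t, t))
        spans[case] = (min(lo, t), max(hi, t))
    return {case: (hi - lo).total_seconds() / 60
            for case, (lo, hi) in spans.items()}

times = throughput_minutes(log)
late = [case for case, minutes in times.items() if minutes > 50]
print(times, late)  # c1 made it; c2 took 1 hour and 25 minutes
```

This is essentially what a mining tool computes before drawing the chart; the tool just does it for thousands of cases and events at once.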
Besides that, all of the above process performance information only shows symptoms. To improve the process, you need to find the cause.
Some of the above information might already give some indications. You saw that cases spend quite some time waiting, for example between "Pack Pizza" and "Deliver Pizza".

Still a symptom, but now you could do some research into why cases spend (on average) 8 minutes waiting before they get delivered.
Most process mining tools also offer functionality to show the flow of cases in animations. This is nothing more than a kind of replay of the log files, but experience shows that it makes bottlenecks in the workflow more visible:

This level 3 of process information is all about good old process improvement. Level 2 information is what I would call daily process (or better: case) management, and at level 1 we saw the information needed for individual cases.
In process projects I also consider another level of data; the data that flows between processes.
Data that flows between processes     
Most organizations have more than one process. For example, the pizzeria might have a process for pizza delivery and one for pizza takeaway. They probably also have a process to keep enough inventory (of tomatoes, dough, boxes, etc.).
Data flows between these processes. Because when you make 50 pizzas in an evening, this changes the inventory levels. So some data comes out of processes (50 pizzas made) and affects other data (the inventory of dough).
So "Deliver pizza" and "Keep inventory on desired level" are 2 processes that have a relationship. That relationship is not the flow of a case; it is based on data that is generated in those processes:

So processes put data into "databases" and other processes use information from those databases. You can probably think of many examples, like customer information that is updated in one process and used in other processes.
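A toy sketch of that level-4 data flow (the names, the consumption rate and the threshold are all invented): one process writes to the shared data as a side effect, another process only reads it to make its own decision.

```python
# Shared "database" between the two processes (made-up numbers).
inventory = {"dough_kg": 20.0}
DOUGH_PER_PIZZA = 0.25  # invented consumption rate

def deliver_pizzas(count):
    # The "Deliver pizza" process updates shared data as a side effect.
    inventory["dough_kg"] -= count * DOUGH_PER_PIZZA

def reorder_needed(threshold=10.0):
    # The "Keep inventory on desired level" process only reads that data.
    return inventory["dough_kg"] < threshold

deliver_pizzas(50)  # 50 pizzas made this evening
print(inventory["dough_kg"], reorder_needed())
```

Neither process passes a case to the other; the only link between them is the shared data, which is exactly the level-4 relationship.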
This level 4 of data sits more on an organization (or process architecture) level, but it makes clear that the performance of one process can influence another.
Data. An important enabler of process performance, I think.
But not the only one. That's why I always like to take processes as a basis for looking at organizations.
Because processes have many different aspects to them. Not only blocks and arrows (and if you like BPMN, also a few circles). That's what makes it interesting to me.
Enjoy your data and happy processing!