Simon Bach was the first employee at Processand.
First employee of Processand, Senior Data Scientist Simon Bach tells us how a typical process mining implementation goes from a technical perspective. How to extract the data, set up the process and build the dashboards - all that and more on the Mining Your Business podcast!
Welcome, welcome, welcome to the Mining Your Business podcast. The show all about process mining, data science, and advanced business analytics. I am Patrick and with me as always, my colleague Jakub. How are you doing today?
Hey, Patrick. I'm doing quite well, thanks.
Today's episode is all about the implementation of process mining from start to finish, data model woes, and of course how it all functions. Today's guest is Simon Bach.
Patrick, we made it to episode seven, which is really, really exciting as this is even pre-launch. So we are already working on episode seven and the podcast is not officially out yet. So it's time for celebration, I would say.
And in any case, we have yet another guest today with us. It's our colleague, a fellow data scientist, Simon Bach. Simon, hi.
Thanks for having me.
Awesome. So, Simon, you are actually the first employee of Processand, if I recall correctly. So you're the most senior data scientist of us all. In today's episode, we want to ask you a lot about how the implementation of the process really goes, what the ups and downs are, what we really have to worry about, and what we actually do. But before we get there, could you briefly introduce yourself to the audience? What is your background, where are you coming from, and so on?
All right. Sure. I wasn't actually born a data scientist, as most in our company weren't. I started as a lab assistant, so very far away from any data. At some point I got bored by this job and I decided to get my, well, I guess you can translate this as a high school degree, the Abitur, our German listeners will know it, to become eligible for university. And then I started studying physics, yet again far away from data science. And then I thought, well, I'll do something useful for humanity. Not claiming that physics is not useful for humanity, but, yeah, okay, not to get too deep into it. Long story short, I ended up in an interview with Thomas, and he liked me, and the listeners know who Thomas is, I guess you already heard from him. And then he hired me. One of his greatest achievements. And, well, now I'm here doing data science. That's how far my background goes.
Let me ask. Did you know about process mining or what did you know about process mining before you actually went to the interview with Thomas?
Well, I knew that it's a word, I Googled it, and I had very limited information about data science. I knew what Wikipedia knows.
Okay. Okay, that's a start.
Simon, we already talked about it in our first episode with Patrick. We know that the job of a data scientist is not only working with data and actually coding; at least 50% of our job lies in meetings, writing emails and talking to the business people, which can be challenging, especially for people who really like to get their hands dirty with the coding. But the point of today's episode is to talk about the other 50%, the part where we do get our hands dirty. So my question really is whether you could walk us through an implementation from start to finish. Since we already talked about order to cash and purchase to pay processes in previous episodes, I would suggest we pick the purchase to pay process, and let's say that you are facing a customer who wants to do this implementation. How would you lead such an initiative from start to finish from the technical perspective? Let's say we come to the customer and we don't have to worry about access to anything, everything is set up and smooth, ready to go. What do we do? What do we start with?
Okay. In this fairytale world that you're describing, I go right away to the customer and tell them the technical requirements for us to be able to access their source system. Usually you have SAP in place. I mean, it doesn't have to be SAP, right? We have worked with the ServiceNow ticketing system and so on and so forth. The requirement is that you're able to actually pull the data. If we're talking about Celonis and the IBC, it's usually called the extractor. They have to set up the extraction server, and they may have to install some module in their SAP system, don't quote me on that, but they have to fulfil some technical requirements so that we are able to pull the data into the IBC in the first place. That's the best scenario. The adverse scenario is that it doesn't work because there is no connector for the system; in that case I have, once upon a time, written an extractor myself. But first of all, make sure that the data extraction is set up. That's the very first step of a technical implementation.
So there are multiple requirements on the actual client side that need to be fulfilled before your actual work can start. Is that right?
Long story short, yes.
Okay. Okay. So let's assume that all these steps are taken care of by the client. And now you are ready to go. What are your first moves? What do you make sure? What do you check? Where do you start?
Okay. What I make sure of is, well, first of all, that we have the connection and we can pull the data, and then I check the data quality. Is the encoding correct? Does it seem to be complete for our purposes? I go through what we want to implement, and, let's pretend we are doing purchase to pay, we had better have the purchase to pay tables in place. We don't necessarily need sales tables, we could have them in addition, but we'd better have the purchase orders ready. So: check for completeness, and think about whether we can build the KPIs and the data model for the process we're looking at.
So essentially what you do is access the customer database and pull the data into a Celonis instance. In this case, I assume you're talking about the IBC, so we have a cloud instance. There are also on-site instances, which work in a slightly different fashion; we will definitely cover that in a future episode. What I'm trying to understand now is this: I know that in some cases we are working with limitations on the memory capacity that we have, so we can't just pull everything or all the tables. Is there some way you have to prepare the data and eventually trim it down? How do you go about it?
Yeah, sure. So there are the very, very big bad boys of tables, such as the change log. As you can imagine, in an ERP system stuff is changed all the time, and you don't want to pull the log for everything. So you trim it down. You say, okay, I have certain parameters or certain flags that tell me what changed, and I pull only the changes required for the process I'm looking at. Second, you have time limitations, right? You could say, I'll pull only one or two years of data. Those are the two easiest and most obvious things you can do. There are less obvious things, which I don't want to talk about at length because they highly depend on the customer. Data size is a limitation for big customers, sure; for smaller customers, you could go ahead and pull almost everything, because the data size is not that big of an issue.
What I'm also wondering is this: I know customers sometimes come with very big expectations and they want to have, let's say, a live stream of data. Is this even technically possible? And how do you go about updating the data, given that you still have to access the database and there are definitely some limitations to what you can and cannot do?
So livestream is a big word. I would say you could just try to make the time Delta as small as possible.
What is a time delta?
Okay. I'm speaking loosely here. You pull the data, you transform the data, you put it into the data model and make it accessible for the customer, right? Let's call this moment T1. I'm very much a physicist now. And then I pull the data again, transform it again and push it again to the cloud. Let's call that T2, and the time between those two moments is what I call the time delta. So basically what I'm saying is that you can try to make the time in between as small as possible, but it will never really be a true live feed. Or at least not right now, maybe sometime in the future when we have hovering cars and so on. Customers always love to talk about a live feed or a small time delta and so on and so forth. The reality is that you have to define what you mean by that. How often do you need the data? Once a week? Once a month? Once a decade? That's pretty rare. Or daily? Business people always have ideas about what they want to do, and then you challenge them and say: what do you mean by that? Define it. And then they start thinking: okay, I want to have this KPI on a daily basis because I have a steering objective with it, for example.
So once you have customer data, like you said, there is some time delta between pulling the data and showing it. How do we go about reducing the amount of time it takes to pull the data? I kind of want to get onto delta loads. Can you describe what the difference between a full load and a delta load is and how that works?
Yeah. Okay. So the full load is basically: oh look, there's a bunch of data, let's pull everything and put it into the cloud, right? The delta load is more nuanced, let's say. Okay, there's a bunch of data, but there's a big pile that I already have, so I only take what's left on the other side. In the delta load, you're basically thinking about how to characterize the data that you need, how to tell the data you need apart from the whole bunch that is there, thus reducing time by pulling only that data. It seems like a no-brainer, but you actually have to think about it. It's not always easy to know whether you need a specific data set that is lying there, whether you need this row in this table or not. Sometimes it's clear, because there's a timestamp telling you, I am data XY and I was changed today, and then you see, okay, I pulled the data yesterday, so I need this row XY. But sometimes you only know when the row was created; what you don't know is whether it was changed in between.
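Simon's full load versus delta load distinction can be sketched in miniature. The sketch below uses Python with an in-memory SQLite database; the table and column names (`purchase_orders`, `changed_at`) are invented for illustration, not real SAP or Celonis names.

```python
import sqlite3

# Toy source system: a purchase-order table with a "changed_at" timestamp.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE purchase_orders (po_item TEXT, status TEXT, changed_at TEXT)")
src.executemany(
    "INSERT INTO purchase_orders VALUES (?, ?, ?)",
    [("PO1-10", "created", "2023-01-01"),
     ("PO2-10", "created", "2023-01-02"),
     ("PO1-10", "blocked", "2023-01-05")],
)

def full_load(conn):
    """Full load: pull everything, every time."""
    return conn.execute("SELECT * FROM purchase_orders").fetchall()

def delta_load(conn, last_extracted):
    """Delta load: pull only rows changed since the last extraction."""
    return conn.execute(
        "SELECT * FROM purchase_orders WHERE changed_at > ?",
        (last_extracted,),
    ).fetchall()

print(len(full_load(src)))                 # all 3 rows on every run
print(len(delta_load(src, "2023-01-02")))  # only the 1 row changed since then
```

The catch Simon mentions is exactly the `changed_at` column: the delta only works when every relevant change reliably updates such a timestamp.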
So what do you do in those cases when you don't know if the row was changed?
That's a tough one. It depends, of course. Let's say the customer wants a daily delta, and the customer says, okay, I'm interested in KPIs one, two, three, and this one row that might have been changed has nothing to do with those three KPIs. It might be good not to pull it, just to make the process faster, because the customer is not really interested. I mean, okay, I am implying now that I know this one row is of no importance whatsoever. So just for completeness, maybe once a week we get all the data synchronized, but otherwise we stick to the delta. For example, let me tell you a story. We have a customer that wants to have data every three hours, because they have a steering perspective for their production with that. So instead we ask: okay, what do you want to steer, what are you really interested in? They are interested in material data and so on and so forth. So we pull just that information every three hours, and we pull the whole data set once a day, so that we are even able to provide them new data every three hours.
I think you're touching on a very interesting topic here, and it essentially comes back to communication with the client: you are trying to find common ground with them. There is expectation and there is reality, and our job, your job, is essentially to meet them in between, so that we make their expectations somehow achievable while at the same time respecting the technical limitations we have. With that said, I would also like to ask: what are the most common problems we usually face with pulling the data, if you can recall any?
So when we leave the fairy tale world, the most common problem is basically access: we should have it, but we don't. Other than that, with customers that have huge amounts of data, we have to say: you wanted two or three years of data, but that's not possible, let's start with one year. The customer objects, but we say, well, it's either one year of data or no data at all, and they understand in most cases. Other than that: if you have too much data, computations may take too long and lead nowhere. And if you don't really know the process, you don't know what data to pull in the first place. So I would go with those two issues. Also, if it's new data, you may know what you need but not how to restrict it. I mean, you know the table in principle, but you don't know which rows in that table might be interesting for you or not. With experience you know more and more, I guess; a P2P, purchase to pay, process I could do in my sleep, but I guess I'd rather sleep. So it depends. If it's a process that we know well enough, then we go with a standard; we even have a standard for it.
Before we move on to the next topic, this is also very interesting: you're saying that sometimes we implement processes that are not quite standard and require, I would say, more time and more knowledge, and are generally more challenging. How do you approach it when a customer wants to implement a process that you've never seen before? For me this is usually one of the biggest challenges, as I feel comfortable with whatever I've already done before, but the new stuff is usually a bit difficult. And when there is little support from the customer side on the process itself, it can get quite tricky.
Okay. So yeah, nine days out of ten I like being bottom up, but there I try to be top down, right? Because you don't know the process at all. So you talk to the business people and ask them: okay, what even happens there? What do you do and what do you want to measure? Maybe sometimes even challenging them: why is this important? Because if they think about what they want to measure, they might say, okay, well, maybe let's not measure A, let's measure B, or something like that. So challenge them on what they want; don't just take it as given, right? Once you have an understanding of what's happening, it depends. In all cases you talk to their IT, right? But if it's a very, very custom-crafted system, there is no forum on the Internet telling you, yeah, that's how we implemented it. So in all cases we talk to their IT, but if it's a system with a broader user base, like one specific SAP system, you try to figure out as much as possible before you get to the IT, so that you can ask smart questions. Not to sound smart, which we always do anyway, but to actually get useful information out of it. Because if you hear about it for the first time, you don't know where the problems could be. So get as much information as possible, but do it top down: first talk to the business, then talk to the IT, and then try to merge this information. I actually also like to have two separate meetings, one with business, one with IT, because if you have both in the room at the same time, you drift into details while you haven't understood the process yet. I'm guilty myself, I like details, but sometimes it's not the best way to dive right into the details. Just try to understand what you want to describe first. Do you want to describe an elephant or a cat? And then you take it from there.
So we have the data now. So we've gotten that and now it's time to actually take care of the transformations. Right? So how does building the transformations work and why SQL?
I think, Patrick, before we even go there, I would first like to understand what is a transformation. You have the data and then like what do you do?
Well, you have the data. The funny thing is that the customer has had the data all along, right? They just don't use it in that way. What you try to do is find the data points that are interesting for the customer and then massage them a little, make the data feel comfortable... okay, no, bad metaphor, haha. No, you have the data points, and you want to put them into the right places so that you can grab them easily. The transformation is all about that. You have the data, but as it is structured right now, it is very complicated or even impossible to create KPIs out of it.
And if we are talking process specific, then what do you actually have to do to, so that the customer eventually sees a process in Celonis or other process mining tools? How do you go about transforming the data that you pull into visualizing the process at the end?
Okay, a very specific example: the customer talks about setting some flags. There is, I don't know, a payment block, there is shipping, there is whatever. Then you think: okay, what actually constitutes a payment block? A payment block is a flag; if you look into the payment block field, congratulations, it's either there or not. But when it was set could be anywhere in time, right? So for that you go into the change log. For shipment or receiving goods, you go to the goods receipts table. And all the information you have, you have to connect to your, abstractly speaking, central entity; more specifically, in purchase to pay it is the purchase order item.
So you want to have one process stream per case. That is very abstract, but you want to have the whole purchase process from end to end, and you want to see it for each purchase order item. You want a nice flow, so you take all the information that the customer wants and you think: okay, this constitutes a change, this constitutes a shipment, this constitutes some stuff going on in accounting, and you are connecting the dots. Connecting the dots is probably the easiest way to describe what happens in the transformations. You have tables A, B and C, and you want to connect A, B and C in a meaningful way; you can't just connect every row with every row, that's dumb and even wrong. So that's what you do in the transformation.
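The "connecting the dots" step can be sketched as a small SQL transformation, run here through Python's sqlite3. The table names (`changelog`, `goods_receipts`) and activity names are invented stand-ins for real source tables; every event source is mapped onto the same central entity, the purchase order item, with an activity name and a timestamp.

```python
import sqlite3

# Toy versions of two event sources; names are illustrative, not real SAP tables.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE changelog (po_item TEXT, field TEXT, changed_at TEXT);
CREATE TABLE goods_receipts (po_item TEXT, received_at TEXT);
INSERT INTO changelog VALUES ('PO1-10', 'payment_block', '2023-01-05');
INSERT INTO goods_receipts VALUES ('PO1-10', '2023-01-03');
INSERT INTO goods_receipts VALUES ('PO2-10', '2023-01-04');
""")

# Each source contributes rows of the same shape: (case key, activity, timestamp).
activities = db.execute("""
    SELECT po_item, 'Set Payment Block' AS activity, changed_at AS event_time
    FROM changelog WHERE field = 'payment_block'
    UNION ALL
    SELECT po_item, 'Receive Goods', received_at FROM goods_receipts
    ORDER BY po_item, event_time
""").fetchall()

for row in activities:
    print(row)
```

Each UNION ALL branch is one "dot": a source table translated into named, timestamped events on the central entity.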
And how do you technically do that, I assume that you're using a coding language? I actually know you're using SQL, what do you actually do there?
Since you're also a data scientist in our company, that's why you probably know, haha.
Shhh, don't tell anyone! I'm just outsourcing everything, haha, joking.
Well, we're using SQL, Structured Query Language. Okay, now I sound very smart, right? Let's just say it's Excel for very, very large amounts of data. You have a structured way of pulling data from a table and of connecting the dots, as I said; those are called joins. Databases are very efficient at querying and connecting data, I mean, that's their whole job, right? And SQL is just the language you use to talk to a database. It's not "hey man, what's up with the data?", it's "SELECT X, Y". It sounds way more boring, but at least it does the job, right?
How do you figure out what to join and how?
What and how. There are so many answers to that. Some things are no-brainers: you have header tables and item tables, and you just connect them. I connect them by the header ID, via so-called foreign key relations. Or you have the change log: the change log knows which entry was changed, so you can also connect by that key. Most of the time it is foreign key relations.
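A minimal sketch of the header/item foreign-key join Simon describes. The table and column names here are invented for illustration (in an SAP purchase to pay setting, the real header and item tables would be ones like EKKO and EKPO):

```python
import sqlite3

# Hypothetical header and item tables joined via the header ID (a foreign key).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE po_header (header_id TEXT, vendor TEXT);
CREATE TABLE po_item   (header_id TEXT, item_no TEXT, material TEXT);
INSERT INTO po_header VALUES ('PO1', 'ACME');
INSERT INTO po_item VALUES ('PO1', '10', 'bolts');
INSERT INTO po_item VALUES ('PO1', '20', 'nuts');
""")

# Each item inherits its header attributes through the foreign-key relation.
rows = db.execute("""
    SELECT i.header_id, i.item_no, i.material, h.vendor
    FROM po_item i
    JOIN po_header h ON h.header_id = i.header_id
    ORDER BY i.item_no
""").fetchall()

print(rows)  # two items, both carrying vendor 'ACME' from the header
```

Note the duplication Simon warns about later: the header's vendor value appears once per item row, which is exactly why aggregations matter when you sum header-level values.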
What I'm trying to get at is this: let's say in the purchase to pay process you have your purchase order, and then your customer says they also want to see the invoices. How do you go about connecting the dots here? Again, this is one of the things that I found quite challenging when I started as a data scientist. There is obviously a bunch of documentation, especially for SAP, but it's not always clear what goes where, especially for someone who was never introduced to any kind of accounting or business processes at university. So how do you go about this?
It's a little bit of detective work. Sometimes there is a standard way and you know that the invoices are always connected via these keys, but sometimes it isn't really clear and you have to talk to the customer. You ask them: okay, how does it work, how is it connected? Although maybe you don't ask how it's connected, because the customers don't know, or rather, they don't know that they know, right? So you ask them: can you give us some examples? And when they send the examples, it is sometimes clear from the examples how to connect things, at least most of it. That sounds a little pessimistic; most of the time, from their examples, you see, okay, this ID appears in table A and in table B, and I kind of get how they are connected. You try to find similarities and work your way through. And before you talk to the customer, you try to figure it out on your own. There are also documentation tables, it's an ERP system, it documents itself, and there are forums and beautiful websites like LeanX, which I had never heard of before I started here. Now it's one of my most recommended websites, maybe even more than, I don't know, Facebook or something.
You're basically saying that you're getting your recommendations on your Google page. You might also like reading about this very new SAP table.
Oh yeah. People who visited this site also visited table XY.
So I'm glad you spoke about table relations and things like that. But really for the process mining aspect, we need specifically an activity table. So I kind of want to ask you, what is the activity table for and what sorts of information do you put inside of it to achieve KPIs and things like that?
Okay. A big part of the activity table I already described when I said I like to connect the dots, right? So we are at the point where we have connected the purchase order item with the respective dots, whatever they are: a shipment, setting a flag, and so on. But now you also need a timestamp. The whole idea of the activity table is that you know that certain things happened at a certain point in time. Connecting the dots is the first step, and then you need a meaningful timestamp: a change date, a booking date, a creation date, or whatever date the entity or accounting document carries. So you pull everything together for one specific purchase order item.
These things I think would be quite applicable for any process mining tool. And I would like now to get a bit specific on Celonis. And my question would be so let's say that you already extracted your data, you did some transformation, you have your activity table. Is there something else that you need to do? And like what is the next step in your job when you already have prepared the data and you have your activities? I still think that we are not quite done and that there are still needs to be one step before the user actually sees the data in the front end. So what do you do next?
Okay. What I do next is basically this: I have the activity table, and it has to have some structure. I don't just give it a bunch of timestamps; I give each timestamp a name saying what happened there, and I also record which purchase order item it belongs to. And then I build up a data model. In the Celonis data model, two tables have very special roles: the activity table and the case table. Here the case table is the purchase order items. Patrick already asked what a case is, and I guess you'll remember. If Celonis knows how the activity table connects to the case table, it can calculate the process flow from that. And then you have meta information, which could be a whole lot of things, and you build a data model around that. But the core is the case, with the activities attached to it, and your data model can be as beautiful or as ugly as you want it to be. Thorough, let's call it thorough.
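What a process mining tool derives from the case and activity tables, one ordered process stream per case, can be imitated in a few lines. This is a toy reconstruction of the idea with invented data, not how Celonis actually computes process flows internally:

```python
from collections import defaultdict

# Toy activity table: (case key, activity name, timestamp). Data is invented.
activity_table = [
    ("PO1-10", "Create PO Item", "2023-01-01"),
    ("PO1-10", "Receive Goods", "2023-01-03"),
    ("PO1-10", "Pay Invoice", "2023-01-09"),
    ("PO2-10", "Create PO Item", "2023-01-02"),
    ("PO2-10", "Receive Goods", "2023-01-04"),
]

def variants(activities):
    """Sort each case's activities by timestamp: one process stream per case."""
    per_case = defaultdict(list)
    for case, activity, ts in activities:
        per_case[case].append((ts, activity))
    return {case: [a for _, a in sorted(evts)] for case, evts in per_case.items()}

flows = variants(activity_table)
print(flows["PO1-10"])  # ['Create PO Item', 'Receive Goods', 'Pay Invoice']
```

Grouping cases by identical activity sequences is then what yields the process variants shown in the front end.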
So you have your data model, and you probably have to execute some loads so that it's actually loaded into memory and everything. What's next? I think we are still missing the most important part. Well, for us, I know that we think a bit less of building front-end stuff, because we pride ourselves on being data scientists and we like the data preparation. But for the end users, for the customers, the front end is eventually what holds the most value. In one of our previous interviews we had a customer success manager from Celonis here, who basically appealed to us as data scientists to build the best-looking dashboards possible, as this is usually the only output that the client sees, regardless of how well optimized the query in the background is. So what do you do? How do you build a report once you have all your data ready and your data model ready? What do you do next?
Okay, I then start to create the analysis. If you haven't seen it, Celonis actually has a very nice way of building up an analysis. They also have a language, PQL, the Process Query Language. You have components and you just drag and drop them; that's actually very nice. I don't want to downplay Celonis by calling it a drag and drop tool, but at some point in creating an analysis for a customer, that's what it is. So you build up the analysis. Either you already have a standard, or you think about what you want to filter and how to realize that. For a customer who wants to filter on the purchasing organization: okay, let's have a dropdown for the purchasing organization. It's not that I am inventing a dropdown button; Celonis has already invented it and gives me the component. I just drag and drop it, tell the component what the purchasing organization is, and the component does its job. The analysis is composed out of all those components; that's the usability. As far as the KPIs go, I translate what the customer wants into the Process Query Language. And then we have components that can show you KPIs, that can filter, that can show you a time series of some KPI. All of that.
Sorry to jump in here, but is it sometimes difficult to translate the KPIs the way the business wants them into this PQL language?
Well, often it's clear. Sometimes it's difficult, and sometimes it's impossible; it can be very tricky. What you tend not to think about is this: you have a data model and you always want to connect the data, right? Sometimes it is possible to show certain information encapsulated in one KPI, but what you can't do is show all the underlying data with it, because then you would have duplications: you would sum up some values multiple times. You have to make sure that, well, maybe it would be nice to tenfold the revenue of a company, but if the real number is ten times less, then it's probably a bad KPI that you defined. So you have to think twice: can I really do that in one table or not?
Speaking of these KPIs, there is one function that everyone who starts working as a data scientist usually struggles with. It's called the pull-up function, and we all know in our company that you are a pull-up master. So what I want to ask is whether you could briefly explain it for our audience, and for any aspiring or starting data scientists who are still trying to grasp the meaning and functionality of the pull-up function in Celonis.
Yeah, sure, let me think about what I want to say about that. Pull-up is actually a very nice functionality that Celonis thought of, for as long as I have been doing process mining, which is now like two and a half years, close to three, I guess. The idea is this: think about the duplications I was talking about. Instead of duplicating data, because, for example, I joined an item table to a header table and thus duplicated each header row, you could say: well, I only want the last row, or the number of items. Most of the time, if you have activities, you want the first or last activity, something like that. You think about the aggregations that you want, and that is the idea: if you want to prevent duplicates, you think about aggregated information rather than having each item. The other thing is that you are also able to filter on aggregated information. For example, you could filter on cases that have more than five activities. That by itself would be a weird thing to do, but you can think of more elaborate uses, like filtering on cases with a complex process, where the number of activities might be an indicator of the process being complex or easy.
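The pull-up idea, aggregating child rows up to the case level instead of joining and duplicating, corresponds to a GROUP BY in plain SQL; in Celonis PQL it is done with the PU_* functions (e.g. PU_COUNT). The sketch below shows only the SQL analogue, with invented data, including Simon's "more than five activities" filter:

```python
import sqlite3

# The pull-up idea mimicked in plain SQL: instead of joining activities onto
# cases (which duplicates case rows), aggregate them up to the case level.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE activities (case_key TEXT, activity TEXT);
INSERT INTO activities VALUES
 ('PO1-10','Create PO Item'), ('PO1-10','Receive Goods'),
 ('PO1-10','Change Price'),   ('PO1-10','Change Price'),
 ('PO1-10','Pay Invoice'),    ('PO1-10','Clear Invoice'),
 ('PO2-10','Create PO Item'), ('PO2-10','Receive Goods');
""")

# One aggregated value per case (like PU_COUNT), and a filter on it:
# keep only cases with more than five activities.
complex_cases = db.execute("""
    SELECT case_key, COUNT(*) AS n_activities
    FROM activities
    GROUP BY case_key
    HAVING COUNT(*) > 5
""").fetchall()

print(complex_cases)  # [('PO1-10', 6)]
```

The HAVING clause is the SQL counterpart of filtering on an aggregated, case-level value rather than on individual rows.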
Thanks a lot for explaining this. Hopefully it will help new or aspiring data scientists when they start working with Celonis. So, Simon, essentially now we have the dashboards and everything ready. Is there anything else that you need to do afterwards, or is it just: you do the transformation, you have the data, you have the data model, the dashboards are ready, you give it to the users, and that's it?
Well, that would mean that I talk to the business, I do my transformations, I do the KPIs, everything works perfectly, and thus I give it to the customer and walk away. But I want to give the customers time to validate and to tell me how perfect my analyses are. Most of the time, though, they tell me: okay, look, here's a KPI, it should show the number one, but it shows the number two. And then I try to figure out why, or whether two is even correct. So there's a data validation phase, and it's a very tricky part, because how do you validate data in the first place? So basically there's data validation, user acceptance testing and continuous development after that point. It's a never-ending journey, right? You don't just deliver an analysis; you want to have it validated, and even if it is validated and all the numbers are correct, there are always things that you could improve on.
Simon, speaking from your personal experience, could you recall some very challenging technical implementation that you are proud of today, one where you actually made it?
Yes, I guess. I can't just throw out customer names, so I'll stay as vague as possible while still telling my achievements. Not that long ago there was a customer that had a very custom system, which I guess they even invented themselves, and they wanted to have very challenging entities in there that didn't exist as such; they actually had to be calculated. They had a whole bunch of documentation, and the first time I saw the documentation they gave us, I said, oh yeah, sure. I read through it and thought, oh my God. But I made it in the end, and they are happy. We actually had value creation. Value creation, I knew the word. They seemed quite content, and it was nice to hear them say: okay, not only did we define all those things, which we thought would be very challenging, but we also really see what we want to see. So it was quite nice that it worked out in the end.
There is nothing as good as a project well done, and if you actually get praise from the customer, for me at least personally, that's very rewarding. And since you mentioned a good story, do you also have a bad story in mind that you could share?
Yeah. Unfortunately, it was also related to a very customer-specific tool. There were certain expectations, and somehow it was always a back and forth of "we don't see what we want" or "there should be more information". And since it was very customer-specific, I couldn't just play the guessing game. I mean, I tried, but I failed again and again, and I needed more information. It didn't work out in the end, and it was not very rewarding. I also blamed myself a little for that, but in the end, I guess the communication just didn't work out. I guess it wasn't even technically impossible to show what they wanted, but at some point the communication simply failed. So it was human error rather than technical impossibility. It was no unsolvable quantum physics formula; it was just human error that it didn't work out.
So as per usual with process mining: it's a technical job, but if you don't have the human communication in place and all sorted out, you can still end up with a failure. And Simon, since we are running out of time, we are shooting this before working hours, and some of us have a meeting soon (so that our projects don't end up in a bad-case scenario just as yours did), we will probably have to end it right here. So Simon, once again, thank you very much for finding the time for us and for walking us through a technical implementation. I really hope that it can serve as guiding principles, especially for aspiring data scientists or for customers who will one day try to do an implementation on their own. I will just say thank you very much for tuning in today. As usual, you can write us an email at email@example.com . We also have a website, miningyourbusinesspodcast.com, so feel free to reach out. We will, as usual, be looking forward to the next episode, which will again be in two weeks. So stay tuned and thank you for listening. Simon, Patrick, thank you very much. Talk to you later, bye bye.
Thank you, everybody.
Thanks for having me again. Bye.