
AI-Powered Efficiency

Season 2 Episode 3

Espresso 4.0 by Wizata

AI-Powered Efficiency with Alexandra Merkel

In this captivating episode of the Espresso 4.0 Podcast, host Filip Popov welcomes Alexandra Merkel, a leader in AI applications from Innomatik, to delve into the transformative potential of artificial intelligence in industrial settings. As they explore the pressing challenges faced by the manufacturing and facility management sectors, Alexandra sheds light on how AI-driven solutions, like large language models, are revolutionizing operations. She breaks down how AI can upskill workers and streamline processes, from predictive maintenance to leveraging past fault reports for real-time issue resolution, to keep people productive across a range of industries.

With a focus on the practical application of AI, Alexandra also discusses the strategic use of on-premise models to keep data secure and keep biases in check, and explains how AI can personalize workflows even in complex environments with hundreds of unique assets. Whether you're an industry leader looking to innovate or simply curious about how AI can be scaled and customized for specific business needs, this episode offers valuable insights into digital transformation and the future of AI in manufacturing. Tune in to discover how AI is reshaping the landscape of industrial efficiency and innovation.

Dive into the conversation

Filip Popov (00:00)
Hello and welcome to another episode of Espresso 4.0. Today, I am joined by Alexandra Merkel. Hello Alexandra. Thank you for coming.

Alexandra Merkel (00:10)
Hello, thank you for having me.

Filip Popov (00:12)
Excellent. An Innomatik mug.

Alexandra Merkel (00:14)
Innomatik. Yeah, we organized one extra for this day.

Filip Popov (00:19)
That was very fast, considering we just met last week. So, well done to your marketing team. And yeah, thank you for joining me for coffee today.

Alexandra Merkel (00:31)
Yeah, thanks for having me.

Filip Popov (00:34)
Pleasure. Why don't you start us off by telling us what problem it is that Innomatik solves, and what it is that you guys do?

Alexandra Merkel (00:46)
Well, Innomatik is part of a larger group of companies, and that group is well established in facilities management, maintenance management, and energy management for large industrial complexes. That can be buildings, but also industrial sites. And in that context, we saw that it's very difficult to add new technologies. That could be things like augmented reality, virtual reality, and now, obviously, particularly artificial intelligence. You can't just put artificial intelligence into a company environment and expect it to work. And that's why we established Innomatik: to give these users, but also other customers, test cases to see how they can make use of artificial intelligence in an industrial or professional setting. So, that's basically what we do.

Filip Popov (01:44)
Absolutely. Excellent. Now, in terms of AI applications in such organizations, can you enlighten us on some specifics? From what I know, you guys also work with large language models. So, I'm curious how manufacturing companies can leverage that technology in their operations. Can you give us some examples of how that would work?

Alexandra Merkel (02:10)
Yeah. Sure. I mean, the one thing everybody talks about, obviously, is predictive maintenance, but that often goes down to individual assets. We see it from a bit of a larger perspective, looking at, for example, fault reports. You've got many people reporting faults, and then on the night shift, nobody really knows what the cause of a fault is. And then they have to go and find someone to help them solve the problem. With large language models hosted on-prem, we normally go into the fault reporting software, for example, or the ledger or the book where faults are written down. We look at these and see what caused the incidents in the past. For example, whether it's due to different things being produced, different conditions on site, or maybe different scheduling, these types of things. And from the fault reports, we can also see what the solutions were in the past. So, people actually have a chance to try out different things that have worked before to fix the fault, maybe in the short term or even in the longer term. So they don't actually have to go and find someone; they can basically help themselves on site whilst they are on shift. That's one of the things we do.
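
As a rough illustration of this fault-report lookup, here is a minimal Python sketch. The report fields, the bag-of-words similarity, and the `ask_local_llm()` helper are hypothetical stand-ins, not Innomatik's actual implementation; the helper is where a call to the customer's on-prem or private-cloud model would go.

```python
# Minimal sketch: find past fault reports similar to a new one and ask an
# on-premise language model to summarize what fixed them. The data layout
# and ask_local_llm() are hypothetical stand-ins.
from collections import Counter
import math

past_reports = [
    {"id": 1, "fault": "Conveyor belt stops intermittently on night shift",
     "cause": "Loose sensor cable", "fix": "Re-seated and strain-relieved the cable"},
    {"id": 2, "fault": "Pump pressure drops when line 3 runs product B",
     "cause": "Clogged inlet filter", "fix": "Cleaned the filter, shortened the cleaning interval"},
]

def similarity(a: str, b: str) -> float:
    """Very rough bag-of-words cosine similarity between two texts."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in set(wa) & set(wb))
    norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def ask_local_llm(prompt: str) -> str:
    """Placeholder for a call to the locally hosted model."""
    raise NotImplementedError("wire this up to the on-prem model")

def suggest_fixes(new_fault: str, top_k: int = 3) -> str:
    ranked = sorted(past_reports, key=lambda r: similarity(new_fault, r["fault"]), reverse=True)
    context = "\n".join(
        f"- Fault: {r['fault']} | Cause: {r['cause']} | Fix: {r['fix']}" for r in ranked[:top_k]
    )
    prompt = (
        "A night-shift operator reports the following fault:\n"
        f"{new_fault}\n\n"
        "Similar past fault reports and how they were resolved:\n"
        f"{context}\n\n"
        "Suggest likely causes and fixes the operator can try while on shift."
    )
    return ask_local_llm(prompt)
```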

Filip Popov (03:37)
Excellent. Absolutely. That's very interesting. Something that comes to me immediately after you've said that is that they can look into the past, maybe at people who worked at their stations before them, right? Maybe they're not at the same level of experience and skill as their predecessor. So, that leads me naturally to the next question, about applying these types of solutions to upskilling people, not only for handling maintenance but also potentially for operating machines. Can you expand on that?

Alexandra Merkel (04:14)
Yeah, that's very true. I mean, there are a lot of things a new person on-site needs to learn to be able to be fully productive, especially where there's a higher turnover rate of people. Maybe someone's leaving or, as everybody experiences, older people retire, and then we have lots of younger staff who are not as experienced, sometimes maybe not as well educated either, and they need to become productive quite quickly. So, there are different things we do. One is that we use language models to analyze all the data there is, for example, manuals, reports, and the process documentation. So, you have the possibility with a language model to actually ask interactively: what do I do in this process? How do I go about a certain task? These types of things. And then we're also working on bringing in the more personal knowledge, because someone who has been working on-site for a very long time knows things that are not formally written down on paper; he knows, for example, that there are certain things you have to do first before you carry out your maintenance. So, we are working on having the AI actively ask for this information and introduce it into the process. Basically, what we do is try to use the large language model to upskill workers and make them productive more quickly.
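
A minimal sketch of that last idea, the assistant actively asking for knowledge that isn't written down, might look like the following. The storage, field names, and the way the note is collected are illustrative assumptions, not a description of Innomatik's product.

```python
# Minimal sketch: after a task is finished, prompt the worker for steps that
# are not in the written procedure and keep them alongside the official text.
# A dict stands in for whatever maintenance system would store this in practice.
from datetime import date

tacit_notes: dict[str, list[dict]] = {}   # procedure id -> collected informal notes

def collect_note(procedure_id: str, worker: str) -> None:
    answer = input(
        f"You just finished '{procedure_id}'. Did you have to do anything that is "
        "not in the written procedure? (leave empty if not) "
    ).strip()
    if answer:
        tacit_notes.setdefault(procedure_id, []).append(
            {"worker": worker, "date": date.today().isoformat(), "note": answer}
        )

def procedure_context(procedure_id: str, official_text: str) -> str:
    """Combine the written procedure with the informal knowledge collected so far."""
    notes = tacit_notes.get(procedure_id, [])
    extra = "\n".join(f"- {n['note']} ({n['worker']}, {n['date']})" for n in notes)
    return official_text + (f"\n\nInformal notes from colleagues:\n{extra}" if extra else "")
```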

Filip Popov (06:10)
Understood. That's interesting, because I was wondering what you would use to train it and where that database of knowledge would even come from. But you're basically prompting users, once they have solved the issue, to describe it. And I suppose, in practice, that will work through some sort of a chatbot.

Alexandra Merkel (06:36)
Exactly. I mean, the first data that is already there is documents. There is lots and lots of documentation on how to do the tasks at hand, how to do the routine tasks, how to carry out certain maintenance tasks, these types of things. One of the problems is that these documents tend to be very, very long. For example, one of our customers has a standard maintenance procedure for certain equipment that is something like 200 pages you actually have to follow. So, the first thing we do is have the language model look at the document and basically summarize all the tasks that are necessary, in the right order. Because if you have something like 200 pages, there are things you have to do once every so often, other things you have to do twice a year, and other things you have to do three or four times a year. So, you have got things you don't need to do at a particular point in time, for example. The first step is obviously to get the language model to analyze the documentation. And then, once you know the work is being carried out, you get the chatbot to talk to the person who is doing the work, or who has done the work, and get them to describe whether there are things missing, for example. You add this, so you get a full workflow. You could think of the large language model as a student coming in who knows a lot of things but doesn't know how to handle the task at hand in this company, or how things are done in this place. So, basically, the large language model is a new apprentice who asks a lot of dumb questions.
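
To make the first step concrete, here is a minimal sketch of turning a very long maintenance document into an ordered task list with frequencies. The chunk size, the prompt wording, and the `ask_local_llm()` helper are assumptions for illustration; the sketch also assumes the model returns well-formed JSON, which a real pipeline would validate.

```python
# Minimal sketch: split a long maintenance document into chunks and ask an
# on-premise model to extract each task, how often it is due, and its order.
# ask_local_llm() is a hypothetical stand-in for the hosted model.
import json

def chunk_text(text: str, max_chars: int = 4000) -> list[str]:
    """Naive fixed-size chunking; a real pipeline would split on section headings."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def ask_local_llm(prompt: str) -> str:
    raise NotImplementedError("call the locally hosted model here")

EXTRACTION_PROMPT = (
    "From the maintenance document excerpt below, list every maintenance task as a JSON array "
    'of objects with the keys "task", "frequency" (e.g. "2x per year") and "order". '
    "Only include tasks explicitly stated in the excerpt.\n\nExcerpt:\n{chunk}"
)

def extract_task_schedule(document_text: str) -> list[dict]:
    tasks: list[dict] = []
    for chunk in chunk_text(document_text):
        raw = ask_local_llm(EXTRACTION_PROMPT.format(chunk=chunk))
        tasks.extend(json.loads(raw))      # assumes valid JSON; validate in practice
    return sorted(tasks, key=lambda t: t.get("order", 0))
```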

Filip Popov (08:18)
Asks a lot of dumb questions and never forgets the answers. And never gets bored, either.

Alexandra Merkel (08:29)
Exactly, that's the idea. That's also true, yeah, yeah, it's always attentive.

Filip Popov (08:37)
Excellent. So, in fact, you go a level deeper, and you kind of assign weight to different tasks in terms of criticality and priority, depending on the matter at hand, what the actual action required is, the maintenance, and so on and so forth. Is my understanding correct?

Alexandra Merkel (08:55)
Yeah, that's exactly right. Yeah, that describes it very well.

Filip Popov (08:59)
Excellent. And the obvious application from that point on is helping plan your resources: when you're going to do what, and how, and so on and so forth. Not to mention the streamlining of information: instead of doing your own research trying to understand what you're supposed to do, you immediately have that information at hand and can actually start executing.

Alexandra Merkel (09:26)
Yeah, the most important benefit is obviously that you don't have to go and search your database or your file system for the right documents and the right version and these types of things, because there are all those studies that say, you know, normal workers spend 20% or even 30% of their time just looking for information. And, you know, that is a lot of time and a lot of money if you think about it.

Filip Popov (09:52)
Absolutely, that's a very good point. I was going to ask you about the numbers, but this is actually very, very revealing. Speaking of these projects, what could be potential challenges for a company looking to start? What are some of the potential pitfalls that should be avoided and accounted for before starting?

Alexandra Merkel (10:16)
I think there are two things that are quite important. We see in our customer base that it's not a great idea to use the really big large language models that are hosted somewhere on the planet, because you're talking about business information and using artificial intelligence to make sense of that data. It's proprietary data, and particularly in a production or industrial environment, that data tells you a lot about what's going on within the company. And that's obviously something the competition, or anyone else, would like to know as well. So, you have to be quite careful what you do with these types of things. The way we go about it is we host the models either on-prem, on the customer's side, or in a private cloud, to make sure that information doesn't leave the environment where it belongs. So, that's one of the things that is obviously quite important.

Filip Popov (11:24)
And apart from cybersecurity, I believe in our talks you've also mentioned biases. I'm a little bit of a layman here, maybe you can explain that.

Alexandra Merkel (11:34)
Yeah, the biases are also one of those things, because especially the really huge language models are trained to do everything, and they're trained to do everything quite well, but they're also in the public domain. So, you have to be very sure that what comes out of them is actually true. We all know about biases: humans have biases, so the data the model is trained on is biased in a way, and that bias is inherent in the model. And then what large corporations have done in the past is try to restrict the information coming out of the system to get rid of the biases. One of the very prominent cases was the problem they had with the Gemini model, where all of a sudden it generated Black German soldiers from the Second World War, which is a pretty obvious problem. They had tried to get rid of a white bias in there, with a result that was really dreadful. That case is very obvious, but these biases also happen in more subtle ways, where you don't really see that the problem is that big, where you don't see certain things going a bit haywire. So, the idea is that with the small models, you have a better handle on them. You are more sure of the models; they are not as complicated, so they tend to be a little bit less biased. And what you can do, particularly with things like retrieval-augmented generation, is pinpoint where the data is coming from. So, when you ask about certain aspects of the maintenance, for example, the model will show you where in the papers or in the documents it actually says what you need to do. You've got a direct link to the original document, and you can actually look at it and see, okay, this is where my information is coming from. So, you are actually sure of what the model answers; the answers aren't as biased because you have the links, and you can actually check them.
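
As a concrete illustration of retrieval-augmented generation with that kind of source link, here is a minimal Python sketch. The document store, the keyword-overlap retrieval, and the `ask_local_llm()` helper are illustrative assumptions; a production setup would use a proper vector index and the customer's own documents.

```python
# Minimal RAG sketch with source references: retrieve the most relevant
# snippets, hand them to the model with numbered sources, and return the
# answer together with the documents it was drawn from, so the user can
# open the original file and check. ask_local_llm() is a hypothetical stand-in.
def ask_local_llm(prompt: str) -> str:
    raise NotImplementedError("call the locally hosted model here")

documents = [
    {"source": "pump_maintenance_manual.pdf", "page": 12,
     "text": "Before opening the pump housing, isolate the power supply and relieve line pressure."},
    {"source": "site_safety_rules.pdf", "page": 3,
     "text": "Hot work permits are required for any welding near fuel lines."},
]

def retrieve(question: str, top_k: int = 2) -> list[dict]:
    """Crude keyword-overlap retrieval; a real system would use embeddings."""
    q = set(question.lower().split())
    ranked = sorted(documents, key=lambda d: len(q & set(d["text"].lower().split())), reverse=True)
    return ranked[:top_k]

def answer_with_sources(question: str) -> dict:
    hits = retrieve(question)
    context = "\n".join(f"[{i}] ({d['source']}, p. {d['page']}) {d['text']}" for i, d in enumerate(hits))
    prompt = (
        "Answer the question using only the numbered snippets below, and cite the "
        "snippet numbers you used.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return {
        "answer": ask_local_llm(prompt),
        "sources": [(d["source"], d["page"]) for d in hits],   # direct links back to the originals
    }
```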

Filip Popov (13:53)
Did I understand it well? So, the workaround is actually to limit the size of the model?

Alexandra Merkel (14:02)
Yes, and this is one of the things I think is going to happen in the future: not having these huge models. Our model, for example, is trained on maintenance data. It's never going to be very good at writing poetry, for example, but in an industrial setting, you don't need it to write poetry. So, these models are more limited in their general capabilities but are very well trained in what they are meant to be doing. They're great at one thing but not great at everything. That's the big difference, and it also helps with the biases.

Filip Popov (14:37)
I understand. Okay, well, maybe it's not a terrible idea to have maintenance people that are also poets.

Alexandra Merkel (14:42)
Yeah, of course. Sure. But, you know, these types of things, I mean, it just comes down to the training, basically.

Filip Popov (14:51)
Yes, I'm just joking. Okay, this all sounds very interesting. So, say I am a company or factory that wants to deploy a project like this. What do I need to start?

Alexandra Merkel (15:05)
Well, there are two things, really. One thing is you need some sort of data. What we do with our customers is often a pilot project. We go in and look at what sort of data they have, because you need good data, and you need current data that covers the whole use case. If you only look at past data and things are changing over time, the data might not be good enough. Documents are quite a good starting point here because documents, for example, have a timestamp. So, you know when a document is from, and you can assume that the most recent documents are the correct ones. And then we look at the data with our customers, because the other thing is they need a decent use case. You need a certain use case to sell a project internally, obviously, and then you also need to be able to calculate some return on investment on that use case. So, these are the two things we look at individually, because it is an individual thing. It depends on what sort of data you've got, how the data is stored and how well it's organized, and it really depends on what you want to do with it. And then you can go into a use case. For example, you want to make sure that your maintenance managers are up to speed with all the documents; that's one example. Or maybe the product managers want to know more about all the different products there are, and they may not have a product database for this. That happens quite a lot, not in huge companies, but maybe in smaller companies: they don't have a product management tool, but they have lots and lots of documents on all the different products somewhere in the file system. This is one of the easy use cases, where you can say, okay, I want the language model, as a chatbot, to talk to my documents and get questions answered. These are the sort of simple use cases.
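
A small sketch of the "trust the most recent documents" point might look like the following; the folder layout and the idea of grouping versions by file name are illustrative assumptions, since every file system is organized differently.

```python
# Minimal sketch: before indexing documents for a chatbot pilot, keep only the
# newest version of each document, judged by its timestamp. Grouping by file
# stem is a crude, illustrative heuristic.
from pathlib import Path

def latest_versions(folder: str, pattern: str = "*.pdf") -> list[Path]:
    newest: dict[str, Path] = {}
    for path in Path(folder).rglob(pattern):
        key = path.stem.lower()            # crude: same stem means same document
        if key not in newest or path.stat().st_mtime > newest[key].stat().st_mtime:
            newest[key] = path
    return sorted(newest.values())

# Usage (path is illustrative); only these files would then be fed into the
# "talk to my documents" index:
# recent_docs = latest_versions("/data/product_documentation")
```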

Filip Popov (17:08)
Okay. And I do have a bonus question for you before we wrap it up. Where would you deploy such a solution, and how would you want to consume that information? That's part A, and then a follow-up immediately: how would we scale it? Imagine a factory that has multiple departments or, more importantly, multiple different facilities that want to use it.

Alexandra Merkel (17:35)
We would go in with a single organization or organizational unit where you find it easiest to have people who are excited about this, because you always have to bring in the people who are going to work hand in hand with the system and who find it interesting and worth their while. And then it's fairly simple to scale it up from there. If you look at the chatbot, maybe it's even going to, you know, introduce itself to other parts of the company. We've seen this as well, because some users actually pass their account on to other people to show them: look, I can now find my information a lot more easily than by going through the different documents in the file system. And then it goes on from there. It depends on how you structure a use case and how you want to start with this.

Filip Popov (18:27)
Good, understood, understood. That's the human element of things. Now imagine that I have, let's say, a machine, a pump or a motor or whatever, of which I have documentation, but I happen to have 500 of those same pumps. Oftentimes, they suffer from the same failures, but sometimes the 437th one has a different failure that only happens in that one, right? So, how would we scale the model to account for these things? First of all, to account for the quantity, and second of all, to account for the discrepancies, or the uniqueness, I suppose.

Alexandra Merkel (19:09)
Well, I suppose they are all individuals. I mean, we know that they are individual pumps. We can even do this via our sister company, which does digital twins, where we have them as spatial computing objects. So, you know which pump is where. And you know that this pump is the one that always fails in a different way, for whatever reason, or that this pump has a different problem. So you can individualize the common data source. You've got a common data pool for all the pumps, maybe, and then something that's individual for certain pumps or certain environments. Maybe it doesn't depend on the pump itself but on the manufacturing they're doing there, the product being produced, or something like that. So, it can be in the environment as well. You have a whole lot of information for each individual, and you can scale this to whatever you want to do.
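
One way to picture that "common pool plus per-asset individuality" idea is sketched below; the field names and the example pump are purely illustrative, and the real structure would come from the digital twin and maintenance systems involved.

```python
# Minimal sketch: every pump shares the fleet-wide documentation, and each
# asset carries its own location, environment, and known quirks. The combined
# context is what an assistant would see when asked about one specific pump.
from dataclasses import dataclass, field

fleet_documents = ["pump_datasheet.pdf", "standard_maintenance_procedure.pdf"]  # common data pool

@dataclass
class PumpAsset:
    asset_id: str
    location: str
    environment: str = ""                    # e.g. product line, ambient conditions
    known_quirks: list[str] = field(default_factory=list)

    def retrieval_context(self) -> dict:
        """Everything an assistant should see when asked about this specific pump."""
        return {
            "shared_documents": fleet_documents,
            "asset": self.asset_id,
            "location": self.location,
            "environment": self.environment,
            "known_quirks": self.known_quirks,
        }

# Illustrative example: the pump that fails differently from the rest of the fleet.
pump_437 = PumpAsset(
    asset_id="PUMP-437",
    location="Hall 2, line 3",
    environment="runs product B only",
    known_quirks=["Seal failure recurs far more often than on the rest of the fleet"],
)
```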

Filip Popov (19:49)
Understood. Okay. Thank you very much, that's very enlightening. And with that, we've come to, well, I've come to the end of my coffee cup. Admittedly, it's very small. So, I wanted to thank you for taking the time to speak to me and to our audience. And, yeah, I hope we stay in touch and can maybe grab a coffee sometime in the near future.

Alexandra Merkel (20:31)
Yeah. Yeah. Thank you very much for having me. It's been a pleasure. And yeah, sure. We can catch up sometime in the future. See how it goes. Thank you. Thank you. Bye.

Filip Popov (20:39)
Absolutely. Ciao.