Everyone is talking about generative AI: more and more people are using it, and more and more experts are sounding notes of caution. Decision-makers in industrial companies are also asking themselves where and how they can put ChatGPT-like technologies to sensible use.
Augmentir, provider of an AI-native software platform for connected workers, explains what matters when it comes to AI assistants and co-pilots.
ChatGPT is a success story. By November 2023, 78% of Germans had heard or read about the AI chatbot; a third had already tried the tool, and 13% even used it frequently, according to a Statista survey.
Companies, too, are increasingly considering how they could benefit from the technology. At the same time, awareness of the problematic aspects of its use is growing. Reports of false or fabricated answers are raising doubts: should companies steer clear of AI chatbots altogether? Absolutely not, says Carsten Hunfeld, Director EMEA at the industrial software platform Augmentir: “However, there are a few things to consider when selecting and using digital assistants.” According to Hunfeld, five criteria are crucial:
1. A relevant, reliable data basis
Wherever human safety is at stake, generative AI assistants must be one hundred percent reliable. That is only possible if the universe of data such an AI-based chatbot or co-pilot works with is limited to the company’s own information, which should be operationally relevant and, above all, up to date. This sets it apart fundamentally from conventional bots, which draw on all the data available on the internet up to a fixed cut-off date: much of that information is irrelevant to the user company and its processes, or simply outdated.
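The idea of restricting retrieval to a curated, current company corpus can be illustrated with a small sketch. All document ids, texts, dates and the freshness rule below are invented for illustration; this is not Augmentir’s implementation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    doc_id: str
    text: str
    last_reviewed: date

# Hypothetical in-house corpus (names and dates are assumptions).
CORPUS = [
    Document("sop-101", "Lockout-tagout procedure for press line 3", date(2024, 5, 1)),
    Document("wi-019", "Legacy cleaning instruction", date(2019, 2, 1)),
]

def retrievable(corpus, max_age_days, today):
    """Keep only company documents recent enough to be trusted as sources."""
    return [d for d in corpus if (today - d.last_reviewed).days <= max_age_days]
```

With a one-year freshness window, `retrievable(CORPUS, 365, date(2024, 6, 1))` would return only the current SOP and silently drop the outdated work instruction; a general-purpose bot has no such gate.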
2. Clearly defined access rights
The second key difference is that an AI for industry knows its users and their context. It knows who is asking and what authorizations, skills and competencies that person has. This matters because operators, skilled workers, supervisors and subject-matter experts have different rights in a professional environment. Their access to content is therefore linked to their roles, to fixed rules and often also to processes. This serves two main purposes. First, confidentiality is maintained: a temporary employee who may only be with the company for a few weeks never comes into contact with trade secrets or content that is crucial for competitiveness. Second, it guarantees compliance, i.e. the legally and procedurally correct display of content. The ability to set up and manage roles and access rights is therefore, alongside a flawless data basis, the second important building block for ensuring that AI-based assistants do no harm.
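A role-based filter of this kind can be sketched in a few lines. The document ids and role names are invented; a real platform would tie this to its identity management and processes.

```python
# Which roles may see which document (illustrative mapping, not a real schema).
DOC_ACCESS = {
    "sop-weld-01": {"welder", "supervisor"},
    "process-recipe": {"supervisor"},
}

def visible_docs(user_roles, access=DOC_ACCESS):
    """A document is visible if the user holds at least one permitted role."""
    return sorted(doc for doc, roles in access.items() if user_roles & roles)
```

A temporary worker with no listed role sees nothing, a welder sees the welding SOP, and only a supervisor also sees the competitively sensitive process recipe.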
3. Targeted content and recommendations
Unlike widely used consumer bots, industrial assistants do not rephrase the answer to a question anew each time; nor would that make sense for tasks such as changing a tool or putting on personal protective equipment. Instead, they access content that has already been created and tagged accordingly. This can be a work instruction in the form of a standard operating procedure (SOP), an entire training module, or just a single, specific video lesson. Circuit diagrams, drawings and operating instructions are also part of the possible content pool. The AI answers questions in natural language, based on the context and tailored to the specific employee; which content is used depends on the employee’s skills, experience and certifications. The AI can also suggest recorded experience, for example from a documented exchange between two colleagues who discussed and solved the same problem at an earlier point in time. In any case, it is important to prevent the AI assistant’s advice from overwhelming a person or putting them in dangerous situations, for example when troubleshooting problems with a system.
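Selecting pre-authored, tagged content instead of generating a fresh answer could look roughly like this sketch. The tags, skill levels and the “most advanced item the employee is qualified for” rule are assumptions made for illustration.

```python
# Pre-approved content items, each tagged and rated by required skill level.
CONTENT = [
    {"id": "video-lesson-17", "tags": {"tool-change"}, "min_skill": 1},
    {"id": "sop-tool-change", "tags": {"tool-change"}, "min_skill": 2},
]

def pick_content(query_tags, skill_level):
    """Return the most advanced matching item the employee is qualified for."""
    matches = [c for c in CONTENT
               if query_tags & c["tags"] and skill_level >= c["min_skill"]]
    if not matches:
        return None
    return max(matches, key=lambda c: c["min_skill"])["id"]
```

A novice asking about a tool change gets the introductory video lesson, an experienced worker gets the full SOP, and an unmatched topic returns nothing rather than an improvised answer.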
4. Verification of sources
The company’s own knowledge base is constantly expanding, not least thanks to the employees themselves. For example, a plant operator entrusted with maintenance tasks who has a question can contact a specialist via a mobile app. Such a conversation, including the solution to the problem, can easily be recorded and summarized for the company’s knowledge base using AI. However, this newly acquired content should not simply be made available to every user. Security-relevant content in particular must be checked and approved before publication. An important quality feature of an industrial AI assistant is that it can distinguish between verified and unverified content and apply appropriate rules, for example governing when newly added content may be shown to a specific person making a request. Whether that is considered unproblematic and safe very often depends on the person’s level, role and authorizations. Of course, it is also important to define a workflow for checking and approving new content; ideally, this takes place on the same platform that hosts all content and the AI algorithms.
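The rule distinguishing verified from unverified content reduces to a small gate, sketched here with invented field names rather than a real platform schema.

```python
# Verification gate: unverified knowledge is only surfaced to users who are
# explicitly allowed to see draft content (field names are assumptions).
def can_show(item, user):
    if item["verified"]:
        return True  # approved content is visible to anyone with access
    return user.get("may_see_drafts", False)
```

A freshly AI-summarized chat between two colleagues (`verified: False`) would stay hidden from an ordinary operator until it has passed the approval workflow, while a designated reviewer could already see and check it.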
5. Preventing hallucinations
ChatGPT & Co. have been criticized above all for their hallucinations: although generative AI produces convincing results in the form of text or images, some answers are not based on facts but are simply made up, a phenomenon also referred to as confabulation. There are areas in which this is annoying at best and causes no immediate damage. The situation is different in the industrial environment, where people’s health and safety are at risk and where systems could be seriously damaged or brought to a standstill. Here, information must be 100% accurate. This is where the major differences between generative AI for the mass market and systems specially developed and trained for industry come into play: in the latter, only users whose expertise enables them to reliably expose false statements ever come into contact with potentially hallucinated content.
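One common way to suppress hallucinations is to refuse rather than improvise: an answer is only returned when it can be grounded in an approved source above a confidence threshold. The sources, scores and threshold below are invented for illustration and named hypothetically.

```python
# Hypothetical approved answer store and refusal fallback.
APPROVED_SOURCES = {"sop-3": "Tighten the spindle to 45 Nm before restart."}
FALLBACK = "No verified answer found - please contact a specialist."

def grounded_answer(hits, threshold=0.5):
    """'hits' maps candidate source ids to retrieval scores (illustrative)."""
    best = max(hits, key=hits.get, default=None)
    if best is None or hits[best] < threshold or best not in APPROVED_SOURCES:
        return FALLBACK  # escalate to a human instead of confabulating
    return APPROVED_SOURCES[best]
```

The design choice is deliberate: a low-confidence or unapproved match triggers escalation to a specialist rather than a fluent but fabricated reply.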
Conclusion
Generative AI is a great help for employees who want relevant information for their work delivered simply and quickly. It is up to companies to ensure that the AI bots they use earn the workforce’s trust at all times and meet all duties of care and precaution: through an appropriate data basis, through access rights, and through processes for preparing, labeling and approving content. Assistants used in industry should therefore also be developed specifically for industry.
Carsten Hunfeld, Director EMEA, Augmentir