Rules are necessary

6 questions that companies should ask themselves about the use of GenAI

Before companies introduce generative AI, they should ask themselves a few questions to ensure that the new services do not jeopardize data protection and data security. Forcepoint reveals what these questions are.

Most companies have now recognized the added value of generative AI and want to introduce corresponding services to ease their employees' workload and make processes more efficient. In many cases, employees are already using some of these tools on their own – which, like a rushed introduction, carries risks, particularly for data protection and data security. To avoid these risks, companies should first find answers to the following six questions:


Which AI tools may be used?

Companies must carefully consider which tools may be used in the first place. This can only be done in close coordination between the specialist departments, the IT department, security teams, data protection officers and legal experts. The specialist departments suggest tools or formulate functional requirements, as they know their day-to-day challenges best. Security teams, data protection officers and legal experts then assess the risks, for example whether services hosted outside the EU pose a risk of data protection violations, while the IT department assesses technical feasibility.

Are there guidelines for the use of AI?

Guidelines give employees clear instructions on which AI tools they may use and how. Among other things, they specify how personal or confidential data is to be handled when using AI. It is also essential that the guidelines define exactly which employees and departments they apply to, and which tools and use cases they cover. Last but not least, they clarify responsibilities and liability issues, such as who makes decisions about AI and is responsible for compliance with the guidelines – and what happens if data protection or security breaches occur.

Have the employees been trained?

Employees are often not even aware of the risks associated with AI tools, or that the tools are not infallible. In training courses, they can gain hands-on experience and learn to use the tools in accordance with the guidelines. They also learn to question and verify the AI's output in order to spot biases or errors.

Can AI use be restricted to approved tools and authorized employees?

Ideally, companies should not only set guidelines for AI use, but also be able to technically monitor and enforce compliance. Blocking unwanted AI services via URL and DNS filters is not enough, as these can be circumvented and there are simply too many alternative AI offerings. It is better to use a uniform security solution that includes services such as Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA) and Secure Web Gateway (SWG) to ensure that only verified and approved tools are used, and only by authorized employees – regardless of their device or location.
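How such an enforcement point can combine the two checks – approved tool and authorized employee – can be illustrated with a minimal sketch. The tool list, group names and decision function below are purely hypothetical and only show the principle a CASB or secure web gateway would apply, not any specific product's behavior.

```python
# Minimal sketch of an allowlist check as it might run in a secure web gateway
# or CASB policy engine. Tool hosts, groups and the decision logic are
# hypothetical and only illustrate the "approved tool + authorized user" idea.

APPROVED_AI_TOOLS = {
    "chat.openai.com": {"marketing", "engineering"},   # tool -> authorized groups
    "copilot.microsoft.com": {"engineering"},
}

def decide(request_host: str, user_groups: set[str]) -> str:
    """Allow a request only if the host is an approved AI tool and the user
    belongs to at least one group that is authorized to use it."""
    authorized_groups = APPROVED_AI_TOOLS.get(request_host)
    if authorized_groups is None:
        return "block"            # unknown or unapproved AI service
    if user_groups & authorized_groups:
        return "allow"
    return "block"                # approved tool, but user not authorized

# Example: an engineer reaching an approved tool is allowed,
# an unknown AI service is blocked regardless of the user.
print(decide("chat.openai.com", {"engineering"}))      # allow
print(decide("some-new-ai.example", {"engineering"}))  # block
```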

Can the outflow of sensitive data be prevented?

Despite training, employees can be careless in day-to-day work and enter personal or confidential data into AI tools – especially during busy periods. Data security solutions prevent this by monitoring data entries and uploads and intervening in the event of data protection or security breaches. For minor breaches, a warning is usually sufficient to draw the employee's attention to the problem. For serious breaches, however, the transfer of data to the Internet is blocked so that important financial data or valuable intellectual property such as source code or design drawings does not leave the company. The prerequisite, however, is that companies have an overview of their entire data inventory across all storage locations. Data Security Posture Management (DSPM) helps create this visibility by identifying and classifying sensitive data across the company and proactively eliminating potential data security risks.
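The two-tier response – warn on minor violations, block on serious ones – can be sketched in a few lines. The patterns and severity levels below are illustrative assumptions only; real data security solutions use far richer classification, fingerprinting and context.

```python
import re

# Minimal DLP-style sketch: scan text a user is about to paste into an AI tool
# and either warn or block. Rules and severities are illustrative assumptions.
RULES = [
    ("credit card number", re.compile(r"\b(?:\d[ -]?){13,16}\b"), "block"),
    ("internal keyword", re.compile(r"\b(confidential|internal only)\b", re.I), "warn"),
]

def check_prompt(text: str) -> str:
    """Return 'block', 'warn' or 'allow' depending on the most severe match."""
    verdict = "allow"
    for name, pattern, action in RULES:
        if pattern.search(text):
            print(f"DLP hit: {name} -> {action}")
            if action == "block":
                return "block"          # serious breach: stop the upload
            verdict = "warn"            # minor breach: warn but let it through
    return verdict

print(check_prompt("Please summarize this internal only memo"))    # warn
print(check_prompt("Card 4111 1111 1111 1111 was charged twice"))  # block
```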

How can uncontrolled policy sprawl be prevented?

Companies should ensure that their various security solutions work together seamlessly and use a single, centralized policy set. If policies have to be maintained separately in each solution, this not only creates an enormous amount of work for the security team, but also carries the risk of divergent policies that leave gaps in protection. In addition, a central set of rules for all security solutions helps prevent data protection and security breaches not only in AI tools, but across all channels through which data can leave the company, such as cloud services, SaaS applications, web, email, external storage media and endpoints.
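As a rough illustration of the idea, the sketch below defines one central rule set that several hypothetical channels consult. The labels, channel names and actions are assumptions and merely stand in for whatever a real policy engine would manage.

```python
# Sketch of a single, central policy set shared by several enforcement points,
# instead of maintaining separate rules per product. Structure and channel
# names are assumptions for illustration only.

CENTRAL_POLICY = {
    "sensitive_labels": ["financial-report", "source-code", "design-drawing"],
    "actions": {"genai": "block", "email": "block", "usb": "warn", "cloud": "block"},
}

def enforce(channel: str, data_label: str) -> str:
    """Every channel calls the same function with the same policy,
    so the rules cannot drift apart between solutions."""
    if data_label in CENTRAL_POLICY["sensitive_labels"]:
        return CENTRAL_POLICY["actions"].get(channel, "block")
    return "allow"

print(enforce("genai", "source-code"))    # block
print(enforce("usb", "source-code"))      # warn
print(enforce("email", "press-release"))  # allow
```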

“The use of AI needs rules, otherwise there is a risk of shadow IT that jeopardizes data protection and data security,” emphasizes Fabian Glöser, Team Leader Sales Engineering at Forcepoint in Munich. “And of course, companies must also be able to enforce these rules in order to prevent intentional or accidental breaches.”

(pd/Forcepoint)

