Recently, applications built on large language models (LLMs), such as OpenAI’s ChatGPT, Microsoft’s Bing AI, and Google’s Bard, have been growing in popularity. These tools are fast, easy to use, and available to anyone, so it is not surprising that employees of companies large and small have started to use them. There is nothing wrong with that, but one important consideration should not be overlooked.
ChatGPT and similar machine learning-based applications are gaining traction in corporate environments. They are used for a variety of purposes:
“LLM-based systems are already contributing to improving corporate content, helping employees with various tasks, and even participating in decision-making processes,” says Martin Lohnert, a cybersecurity specialist at Soitron. However, the adoption of these disruptive technologies brings risks that users and organizations often overlook amid the initial excitement.
The immediate benefits of LLM-based tools are so great that curiosity and excitement often outweigh caution. Nonetheless, there are several risks associated with the use of ChatGPT in companies:
When using ChatGPT in a corporate environment, personal or confidential data may be inadvertently shared. Users often enter this data into the tool without realizing that it is being sent to a third party. A case has already been reported where a bug in ChatGPT allowed users to see other users’ data, such as chat history.
LLM training is based on the processing of large amounts of diverse data of unknown origin, which may include copyrighted and proprietary material. Using any outputs based on this data may lead to ownership and licensing disputes between the company and the owners of the content that was used to train ChatGPT.
Computer code generated by artificial intelligence (AI) may contain vulnerabilities or malicious components which, if adopted unchecked, can propagate into corporate systems.
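As a minimal, hypothetical illustration (not taken from any real tool’s output), the sketch below shows the kind of flaw an expert review should catch before AI-generated code reaches production: a query built by inserting user input directly into an SQL string, followed by the safer parameterized equivalent.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Risky pattern sometimes seen in generated code: user input is
    # concatenated into the SQL statement, which enables SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer equivalent: a parameterized query keeps the input as data,
    # not as executable SQL.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()

Reviewing generated code for patterns like the first function is exactly the kind of expert assessment a corporate policy can require before such output is used.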
AI tools of the current generation sometimes provide inaccurate or completely incorrect information. There have been cases where the outputs contained distorted, discriminatory, or illegal content.
Using and sharing incorrect ChatGPT outputs in corporate communication can lead to ethical and reputational risks for the company.
Given these risks, it is essential to define the rules on how employees can (and should) use ChatGPT when doing their job. “A corporate policy should serve as a compass to guide the company and its employees through the maze of AI systems ethically, responsibly, and in compliance with laws and regulations,” says Lohnert.
When defining a corporate policy, it is first necessary to determine what technologies it should cover. Should the policy apply specifically to ChatGPT or to generative AI tools in general? Does it also cover third-party tools that may incorporate AI elements or even the development of similar solutions?
A ChatGPT policy should begin with a commitment to privacy and security when working with similar tools, and it should set boundaries by clearly defining acceptable and unacceptable uses of the technology.
It should define uses that are permitted in the organization without restriction. “This can include various types of marketing activities, such as reviewing materials for public use and generating ideas or initial material for further development,” says Lohnert. In doing so, it is important to carefully consider the legal aspects of possible intellectual property infringement and be cautious about the known pitfalls of inaccuracy and misinformation.
The second group of rules the policy should include covers scenarios where use is allowed only with additional authorization. Typically, these are cases where the output from ChatGPT needs to be assessed by an expert before it can be used (e.g. computer code).
The third category covers scenarios where use of the tool is forbidden. This should include all other uses, especially any in which users enter sensitive data (e.g. trade secrets, personal data, technical information, or custom code) into ChatGPT.
An LLM use policy should be “tailored” to each company after thoroughly identifying any associated potential risks, threats, and impacts. “This will allow your company to quickly harness the potential of the new AI-based tools, while formulating a strategy to integrate them into the existing corporate environment,” concludes Lohnert.