In an era propelled by technological advancements, employees are increasingly using artificial intelligence (AI) tools for various work-related purposes, such as drafting sales and marketing e-mails and copywriting, to name a couple. One of the most popular AI tools is ChatGPT, a machine learning language model developed by OpenAI. ChatGPT has proven to be a powerful and resourceful tool that has taken the world by storm. Just five days after it first launched, ChatGPT gained one million users. Since then, monthly traffic to the ChatGPT website has been estimated at approximately one billion visits, with an estimated 100 million of those visitors being active users. Suffice it to say that some of those users are employees using ChatGPT for work-related matters.
Even businesses that do not allow employees to use ChatGPT for work should implement a ChatGPT use policy that states the company's position and provides appropriate guidelines so that employees understand what is expected of them.

For businesses that do permit employees to use ChatGPT for work, a ChatGPT use policy is essential to ensure responsible use and mitigate risk. The policy should be drafted in plain, digestible language that establishes transparent standards, provides clear guidelines, and educates employees on what compliance requires. Areas a business should consider addressing include the type of user information ChatGPT collects, the tool's flaws, the categories of company information that must not be used with ChatGPT, and the consequences of policy violations, each of which is discussed further below.
User Information Collected
AI Generated Content Is Not Flawless
ChatGPT has proven to have many benefits, but it also has some critical flaws. In particular, it may generate inaccurate, incomplete, misleading, or outright wrong information. OpenAI disclosed this limitation early on, and warnings to that effect are prominently displayed on the user dashboard when first logging in to ChatGPT. A company's ChatGPT use policy should call out these flaws so that employees understand that any AI-generated content they receive may be inaccurate, incomplete, or wrong. In addition, the policy should include a directive requiring employees to fact-check AI-generated content for accuracy, completeness, and truthfulness.
Confidential Information Should Be Off Limits
To protect confidential company information, a business's ChatGPT use policy should clearly identify the types of confidential information employees are prohibited from inputting into ChatGPT or uploading to it in files. Arguably, there is no legitimate business reason for any employee to input or upload files containing the personal information of other employees, the personal information of company customers, company financial data, confidential contracts, or company trade secrets. In addition, businesses and their employees need to remain mindful that ChatGPT, like any other system or software, can have devastating vulnerabilities. Vulnerabilities can be exploited, and an exploit can lead to a leak of information. This has already occurred with ChatGPT.
Back in March 2023, OpenAI confirmed that ChatGPT suffered a data breach arising from a bug that caused a leak of user information stored in its internal database. The leaked information included some user chat history, which encompasses every chat a user had with the AI: the user's prompts, data the user inputted, files the user uploaded, and the AI content generated in response. During the incident, other ChatGPT users could view portions of that chat history. In addition, the bug revealed the payment-related information of 1.2% of users with a ChatGPT paid subscription plan, including their first and last name, email address, payment address, credit card expiration date, and the last four digits of their credit card number.
Considering the above, and to advance company-wide efforts to safeguard confidential information against unauthorized disclosure and use, a company’s ChatGPT use policy should cover, in clear and concise language, the company’s restrictions on the use of confidential information, a description of such confidential information, and the risks associated with inappropriate disclosure.
Consequences for Policy Violations
As with all company policies, a ChatGPT use policy should notify employees of the consequences they can face for violating it. By outlining policy violations, businesses are in a better position to manage their compliance obligations under applicable state and federal laws and to address risks that could impact their operations, reputation, and bottom line. A policy violation section also promotes consistency and fairness in handling violations, which helps to mitigate any perception of bias or favoritism. Moreover, a well-defined policy violation section gives employees clear guidance on what is expected of them.
Whether a business bans or permits employee use of ChatGPT for work, it should implement a ChatGPT use policy to inform employees of the company's position, establish clear standards, and provide concise guidelines so that employees understand what is expected of them. Moreover, implementing a ChatGPT use policy places businesses in a better position to manage employee behavior, comply with applicable state and federal data privacy, security, and protection laws, and mitigate potential risks.
The information provided in this article is for general informational purposes only. Nothing stated in this article should be taken as legal advice or legal opinion for any individual matter. As legal developments occur, the information contained in this article may not be the most up-to-date legal or other information.