
AI in the Workplace: Balancing Potential, Risks, and Regulations for Employers

Legal Advisory

Since the release of ChatGPT in late 2022, followed by Claude, Bing AI, Llama, and Bard, the use of generative artificial intelligence (“AI”) has increased dramatically. Employees now use AI to complete work assignments ranging from the routine (organizing slide decks) to the complex (preparing memoranda and client deliverables). This is no surprise given that AI can increase productivity and cut costs.

Despite its enormous potential, the use of AI at work is not without risks for employers. First, all AI tools can generate false answers, commonly referred to as “hallucinations.” Hallucinations occur because AI models are trained on data available on the internet; if that data is incomplete, biased, or false, the model may learn an incorrect pattern and “predict” an incorrect answer. Accordingly, there is no way to guarantee definitively the quality and accuracy of work generated by AI. Second, the legal landscape for the regulation of AI is incredibly fragmented. More than a third of U.S. states have enacted comprehensive data privacy laws that include provisions regulating the use of AI in profiling, data privacy, and the accountability of AI models. These state regulations are not harmonized with one another and are often at odds with the regulations set forth in the European Union, Israel, Brazil, China, and elsewhere.

Because employees are understandably attracted to the efficiency AI offers, they will almost certainly use it at work whether or not their employers direct them to. Employers should therefore implement a comprehensive AI policy; failing to do so could lead to (i) incorrect work product being distributed to clients and/or (ii) violations of applicable laws and regulations.

At a minimum, an AI policy should address whether the company seeks to encourage or limit the use of AI at work. Companies in highly regulated industries (for example, financial services or healthcare) should consider restricting employees’ use of AI to certain limited functions. Similarly, companies that maintain and store client-sensitive data (such as law firms and accounting firms) should be mindful of what data could be uploaded to open-system AI tools. If client-sensitive data is uploaded to an open system, it is likely that the company and/or its employees have inadvertently breached confidentiality obligations and, further, violated applicable laws and regulations.

Companies with lower-risk profiles (for example, marketing, branding, or fashion) may wish to encourage the use of AI to improve efficiency, with the strong caveat that they should educate all employees on how to use AI effectively and on the applicable legal, ethical, and moral considerations.

Given the above, all companies, regardless of industry, should address the following key elements in a company AI policy:

  1. Use of AI should be conditioned on each employee completing AI training, which should cover how AI models are trained and the risk of hallucinations.
  2. All AI-generated work should be treated as a first draft and reviewed by a human before it is circulated outside the company.
  3. Employees should be aware of the regulations applicable to their industry and understand that their use of AI must be limited to permitted work.
  4. Employees should be trained on company confidentiality obligations, including obligations relating to client data and company trade secrets. Because the most popular AI platforms are open-system platforms, employees must understand that uploading company data to such a platform may expose that data to public consumption. Accordingly, companies should expressly prohibit employees from inputting confidential or sensitive information into an AI platform. Furthermore, companies should ensure that all employees use AI with the appropriate settings to mitigate the risk that their interactions are used to train the model (a protection typically available through an enterprise version of the platform or by adjusting baseline settings).
  5. Companies that need to track the source of data used in AI platforms should regulate reliance on open-system AI, because current AI platforms rely on large data sets and complex algorithms that may not be able to provide concrete and accurate answers as to how certain data was sourced. Using untraceable data may therefore create data privacy risks and violate applicable rules.

If a company chooses to allow AI in the workplace, it would be wise to develop a comprehensive AI policy with the advice of legal counsel familiar with the company’s particular regulatory concerns and jurisdictional reach. Nutter attorneys are able to assist employers in navigating these new developments.

This advisory was prepared by Portia Keady and Elizabeth Myers in Nutter’s Corporate Department. For more information, please contact the authors or your Nutter attorney at 617.439.2000.

This advisory is for information purposes only and should not be construed as legal advice on any specific facts or circumstances. Under the rules of the Supreme Judicial Court of Massachusetts, this material may be considered as advertising.
