Where better to start than with the subject itself…

I headed to the ChatGPT website and gave it the prompt: 'ChatGPT benefits and negatives for employers'. Within seconds, I had seven pros and seven cons. The benefits included cost-effectiveness, 24/7 availability and increased efficiency. The pitfalls included initial development costs, limited understanding, a lack of emotional intelligence and security concerns.

ChatGPT is an artificial intelligence tool which can generate text in response to almost any prompt. Since its launch, it has been used by students to write essays, by creatives to inspire songs or poetry and by job seekers to assist with their applications. It is also increasingly used in the workplace to draft documents, edit text and generate ideas. Whilst there are obvious benefits to this, for both employee and employer, there are also some significant challenges.

In an effort to avoid these risks, several high-profile companies, including Goldman Sachs, Samsung and Amazon, have banned the use of ChatGPT in the workplace. This is a drastic (and potentially impractical) step, however, particularly as a recent survey conducted by Fishbowl found that around 70% of respondents who used ChatGPT at work did so without their boss's knowledge. It is therefore important that employers are aware of the potential risks associated with employees using ChatGPT and how these can be managed.

So, what should employers be thinking about if ChatGPT is used in the workplace?

Reliability: ChatGPT cannot judge the reliability of the information it draws on, so it treats a post on a personal WordPress blog in the same way as an article in a well-known publication. This can lead to misinformation being spread.
‘Hallucination’: ChatGPT can ‘hallucinate’ information, presenting invented facts as though they were real, as a lawyer in the United States recently found out after submitting a brief containing entirely fake case citations generated by the tool.
Discrimination: Employers should be mindful that ChatGPT uses real-world data. This means that it can potentially reflect the inequalities and biases of the real world.
Data protection: Anything entered into ChatGPT can be used, retained and accessed by OpenAI (the company that created ChatGPT). If an employee inputs sensitive personal information in order to draft a client letter, for example, this could amount to a data breach.
Copyright: OpenAI does not claim intellectual property rights over text produced by ChatGPT; however, there may be copyright issues if the work is disseminated beyond the workplace and is sufficiently similar to existing published material.

What can employers do to reduce risk?

Maintain a human element: Where employees are using ChatGPT in the workplace, it is important that a ‘human element’ is maintained. Any work created with assistance from ChatGPT should be checked. Human input can turn what may be a good starting point into an accurate, detailed, engaging document that has a personal touch and organisational context.
Policy: Employers should consider introducing an ‘Artificial Intelligence’ policy or incorporating AI into an existing IT policy. This is the clearest way of setting out the company’s position on ChatGPT, whether that is banning it completely or setting parameters around its use at work. A clear policy will help ensure everyone understands the employer’s rules and expectations.
Training: Employers should consider providing training on ChatGPT, including how to use it, when it may be appropriate, and the potential pitfalls.
Transparency: It is important that employers and employees are transparent about the use of ChatGPT and document when it has been used.

There are clear benefits to using ChatGPT in the workplace; however, employers need to be aware of the downsides and have a clear policy on how and when it can be used.