ChatGPT – what does it mean for your Workplace?

A year ago, virtually nobody outside the world of AI had heard of ChatGPT. Now, it and similar applications seem to be everywhere, and they are becoming more sophisticated at an increasing pace.

Proponents have described these applications as tools which humans can use to make their jobs easier. However, with typical human ingenuity, the uses to which they can be put encompass both good and bad. In May 2023, Geoffrey Hinton, a pioneer of AI, left his job at Google and issued a warning about the potential for harm caused by “bad actors” using AI systems, including both ChatGPT and image-generation programs like Midjourney.

Given the rapid rise of these AI applications, it is highly likely that employers are already encountering issues surrounding them and it is almost certain that some of those issues will be legal ones.

What is ChatGPT?

ChatGPT is a generative AI application developed by OpenAI. It is built on a large language model trained on text drawn from millions of online sources, and it generates responses by predicting which words are most likely to follow one another – text which, at first sight, appears very human-like.

Users find this useful, as it can take on some of the routine tasks that form part of every employee’s role. However, there are of course areas about which employers need to be careful.

What are the limitations of ChatGPT?

We have already mentioned that the application draws on text from millions of online sources – its training data – to draft the text it supplies in response to queries. However, as OpenAI states, that training data does not include anything more recent than 2021, so the content the application produces could well be outdated. Unless the user is fully aware of developments in the sector, they could take ChatGPT’s word for it and end up using inaccurate information.

Additionally, the application is not discerning about the sources it draws on: it cannot reliably spot bias in them and may reproduce that bias in its output.

It has also been programmed not to cross set ethical boundaries, so asking it to supply information that could be used to break the law or transgress specific social mores will likely fail. Where those boundaries lie will, however, depend on the social and cultural assumptions of the humans who set the parameters in the first place.

Where might ChatGPT be used in the workplace?

Applications may include:

  • marketing – generating copy quickly and in volume;
  • IT and development – generating new code and checking existing code;
  • legal – drafting and summarising cases;
  • graphic design – making use of other AI applications such as Midjourney to generate professional imagery;
  • and, generally across an organisation, stimulating creative processes and refining or redrafting existing low-quality copy.

We can expect many more examples to be added to this list as time moves on.

ICO guidance

The Information Commissioner’s Office (ICO) has produced comprehensive guidance on AI and data protection, and it is worth businesses familiarising themselves with it: in the event of any breach of the law regarding the use of AI, the ICO is the regulator that will take any enforcement action.

What are the risks when using ChatGPT?

Many of OpenAI’s terms, conditions and policies state that its developers can use content submitted to ChatGPT to develop and improve the way it performs. This means that confidential client and/or employee information entered into the application could end up appearing in content produced for other users. It is important, therefore, to make sure that submitting such data will not violate contractual provisions with clients about how their data can be used. If there is any doubt, ChatGPT should not be used with that data.

Employers will need to consider either ruling out the use of personal/client data during ChatGPT content creation, or updating privacy policies and obtaining client/employee consent or giving them the opportunity to opt out (and ensuring that this is adhered to, as the consequences of not respecting consent/opt-outs could be serious).

Copyright

The general principle of copyright law in the UK is that only original ‘works’ that are the intellectual creation of the author are protected. Where works are created by an AI tool, the question is: who is the author that created the work – the AI tool, or the person behind the AI-generated content? At the moment, UK law does not allow the AI tool itself to own the copyright. Whether this will change in the near future is unclear. If it did, it would represent a big shift in how the UK treats copyright works and their ownership.

The benefits of – and reasons for – a policy

Given the pace of change, employers need to ensure that they have a policy in place that codifies their position on the use of such applications and future-proofs it with flexible wording that can accommodate developments that cannot yet be predicted in detail. Not everything can be anticipated, of course, but doing nothing is a far worse course of action.

What should be in a ChatGPT usage policy?

When thinking about what should be included, there are several key areas that need to be considered:

  • As we mentioned above, AI systems make use of a wide range of data, and this will include information of a personal nature. To reassure people about the use to which that information is being put, the policy should set out how the data used by AI systems will be collected, stored and protected, including who can access it and how it will be kept secure, and the company should communicate these details to both employees and clients.
  • When considering the outputs and decisions of AI systems used in the workplace, the policy should place strong emphasis on who bears responsibility for them and who will be held accountable if any issues arise.
  • There are a number of ethical considerations surrounding the use of AI in the workplace. Any policy should take these into account, and legal advice from specialist lawyers should be sought if the employer is unsure how to address them.
  • All policies need monitoring and revising if circumstances change, and this is particularly so for an AI policy, given the rapid evolution of the sector. This monitoring should assess whether the AI systems being used are working satisfactorily and whether their use has caused any unforeseen consequences.
  • If the employer does not want its employees to use ChatGPT, for whatever reason, the policy needs to make clear that this is not allowed. Tools exist – including one released by OpenAI – that attempt to assess whether a piece of content is likely to have been produced by ChatGPT, although their accuracy is limited. Employees should therefore be left in no doubt that a breach of the policy may well be detected, and the consequences of such a breach should be made clear in the policy.

A coherent policy that is fit for purpose also needs to address ways of reducing the risk posed by ChatGPT. This means looking at how ChatGPT is used in the organisation and when this needs to be either restricted or controlled. This will likely mean categorising the uses of ChatGPT into several levels of authorisation, perhaps along the following lines:

(1) Prohibited

(2) Permitted with authorisation from a designated person

(3) Permitted without any authorisation

The designated person’s job and responsibility will be to make the judgements about what ChatGPT output can be used and what must remain prohibited. Each level of authorisation needs to be populated with criteria and examples – for instance, internal brainstorming notes might be permitted without authorisation, client-facing copy permitted only with sign-off, and anything involving personal or confidential data prohibited – so that employees can make informed referrals and don’t simply send everything to the designated person, who would soon be swamped.

Both internal and external clients need to know the provenance of any output produced. The policy should therefore set out a system of labelling that clearly and easily identifies what has been created using ChatGPT, and this can be used to advise clients accordingly.

Providing employee clarity

It is not just the employer who may be concerned about the rapid pace of change and what this might mean for the company. Employees may have started to experiment with ChatGPT and, impressed by its utility, want to use it further and more often, but may be unsure of what they can and cannot do with it in a workplace setting. They will doubtless gain a good deal of reassurance from a policy that sets things out in black and white.

Training

Many employees will probably know very little about this rapidly evolving sector and may therefore be looking for direction and education. A programme of training on the legal, ethical and practical implications and responsibilities is essential to accompany any policy, as a policy document is likely to contain concepts and phrases with which employees are unfamiliar. Training also acts as a safety mechanism for employers: the training provision and the employee’s attendance can be used to show that the employer has discharged its responsibility to explain employees’ duties and the consequences of any breach.

About the author

This blog is provided by our specialist litigation and commercial law partner, rradar. This article is for general guidance only and aims to provide general information on a relevant topic in a concise form. This article should not be regarded as legal advice in relation to a particular circumstance. Action should not be taken without obtaining specific legal advice.