ChatGPT has become a global phenomenon since its launch in November 2022, with a reported 100 million users by the end of January 2023. At Air IT, we recognise that ChatGPT can be an incredibly useful tool when used appropriately, but it can also pose some risks. In this article, we explore the potential and risks of this AI language model for businesses.


What is ChatGPT? 

ChatGPT is an AI language model created by OpenAI that interacts in a conversational way and can generate content that reads as though it were written by a real person. It can answer a wide range of specific queries, produce coherent, high-quality text across many styles, topics and languages, and can even help write code. 

ChatGPT is a powerful tool that businesses can use to enhance their systems and processes. It serves as an effective means to respond promptly to queries, extend customer support, and even create content for websites and blogs. It can also be integrated into chatbots, virtual assistants and various other AI-based applications. As this is a constantly developing technology, it's important to monitor ChatGPT carefully and watch for potential risks. 
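As a rough illustration of what a chatbot integration involves, the sketch below assembles a request payload for a single support-chatbot turn. The model name and message format follow OpenAI's chat API as documented at the time of writing, so treat the details as assumptions to verify against the current documentation; no network call is made here.

```python
def build_chat_request(user_question, history=None):
    """Assemble the payload for one chatbot turn (illustrative only)."""
    messages = [
        # A system message constrains the assistant to the support role.
        {"role": "system",
         "content": "You are a helpful customer support assistant."},
    ]
    # Include any prior turns so the model has conversational context.
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_question})
    return {"model": "gpt-3.5-turbo", "messages": messages}


payload = build_chat_request("What are your opening hours?")
```

The payload would then be sent to the chat API with the business's own API key; keeping the system message and conversation history in one place like this is what lets a chatbot stay on topic across turns.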

 

Benefits of using ChatGPT for businesses 

Tools like ChatGPT can open up significant possibilities for companies that use them strategically. ChatGPT can enhance the human work process by automating mundane tasks and delivering more engaging interactions with users. Here are a few of the ways businesses can use tools like ChatGPT effectively: 

  • Helping with research  
  • Brainstorming ideas   
  • Supporting content creation  
  • Writing and understanding computer code   
  • Automating interactions with customers  
  • Translating languages  

Customer service is one area where many businesses stand to gain. ChatGPT has the potential to offer round-the-clock customer support and provide solutions to frequently asked questions and common issues, which can lead to enhanced customer satisfaction and reduced customer service costs. 

 

The risks of using ChatGPT for businesses  

For all the powerful capabilities ChatGPT offers, and the opportunities it can create for businesses, the technology also poses risks around security, the accuracy of its responses and plagiarism, and it can even be exploited by cyber criminals to target critical business data. 

 

Sharing sensitive or confidential information   

There are concerns that any information provided to ChatGPT becomes the intellectual property of OpenAI. The NCSC has recently advised that, currently, these models do not disclose your information to other users. However, it's crucial to note that the organisation offering the service, in this case OpenAI, can see your queries, meaning OpenAI may have access to the contents of whatever you've typed in.

As AI language models become more popular, there is a risk that the information and data they store online might be hacked, leaked or made public. To avoid ChatGPT inadvertently sharing critical business information with others, we strongly recommend not entering any sensitive information or data owned by your company, such as client, supplier or employee information, or any confidential documents or contracts. 
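One practical safeguard, offered purely as an illustration and not as a substitute for a proper data loss prevention policy, is to screen text for obvious identifiers before it is ever pasted into a chat window. The patterns below are deliberately simple and hypothetical; real-world data takes many more forms than a couple of regular expressions can catch.

```python
import re

# Redact obvious personal identifiers before text is shared with an
# external service such as ChatGPT. This is a sketch of the idea, not
# a complete data loss prevention control.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Very rough UK-style landline shape (e.g. 0115 123456); hypothetical.
    "phone": re.compile(r"\b0\d{3}[ ]?\d{6}\b"),
}


def redact(text):
    """Replace each match with a [REDACTED-<label>] marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text
```

For example, `redact("Contact jane.smith@example.com")` returns the text with the address replaced by a marker, so nothing identifying reaches the chat window.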

 

Phishing emails  

ChatGPT has the ability to create human-like, conversational content that seems as though it was written by a real person. Although this is an extremely powerful capability, it also means the tool can be put to criminal use. 

One of the most common signs of a phishing email is poor spelling and grammar. Cyber criminals can eliminate this telltale sign by using ChatGPT to craft more convincing phishing emails. Given simple instructions, ChatGPT can write a genuine-looking email designed to obtain critical contact and financial information. 

Cyber criminals are much more likely to target those who lack security knowledge than IT professionals who will recognise a phishing attempt. Cyber security awareness is critical so that your employees understand the risks, know how to spot threats and take the right actions accordingly.  

 

Creating malware 

If ChatGPT can write code – what is stopping it from writing malicious code?  

ChatGPT can detect and reject requests to write malware, as it does with many other requests that appear harmful or criminal. However, cyber criminals can get around these safeguards: by describing in detail the steps the code should perform, rather than asking directly, they can prevent ChatGPT from recognising the request as malware and get it to write the code anyway. 

 

Accuracy and originality of responses 

ChatGPT's training data only extends to 2021, so it cannot provide reliable information about anything more recent, and its responses can therefore be inaccurate. We recommend using ChatGPT as a source of inspiration and feedback, but not as a source of information. 

When asked “What is the latest iPhone model?” ChatGPT responded “As of my knowledge cut off in September 2021, the latest iPhone models were the iPhone 13, iPhone 13 mini, iPhone 13 Pro, and iPhone 13 Pro Max, which were released in September 2021. However, there may be newer models that have been released since my knowledge cut off.”  

In some cases, ChatGPT is unable to interpret and adjust to unique queries, which can result in inaccurate responses. Even when ChatGPT is unsure of the answer, it will still try to answer the question as best it can. We recommend checking any content ChatGPT provides for inaccuracies before you use it, as publishing or sharing false information could harm your reputation as a brand. 

ChatGPT also cannot confirm that it doesn't plagiarise content, so please be aware that anything it produces could be plagiarised and, in some instances, you could be breaking the law by publishing this content as your own. 

 

What are we doing as a strategic partner?  

As this is a constantly developing technology, we at Air IT will be monitoring ChatGPT carefully to watch for potential risks, and we have created guidance for our employees outlining how it can be used responsibly. 

Our CIO/CISO, Lee Johnson, says:

"Due to the nature and speed of the developments with ChatGPT, we expect all colleagues at Air IT to keep up to date with the guidance we have set out to help mitigate security risks and minimise the possibility of publishing inaccurate information that could potentially harm our reputation as a brand.

"We will not be restricting access to ChatGPT as we know that it has a lot of potential value for us as a technology company looking to be innovative in the ways we work and the solutions we offer our clients.

"We recommend that the security concerns mentioned in our article are considered in your own personal or organisational use of ChatGPT, along with the advice from the National Cyber Security Centre."

Stay ahead of the curve 

Although there is a lot to be mindful of when using ChatGPT, we believe that this new technology can offer real value to business working practices when used responsibly. 

If your business is using ChatGPT and you would like to find out more about what we are doing to monitor potential risks, please get in touch or speak to your Account Manager today and we will be more than happy to discuss it.

Get in touch with us