The media has been full of reports recently about ChatGPT. This is a system that uses Artificial Intelligence and Machine Learning, trained on vast quantities of data drawn from the web, to generate responses in whatever form the user requests – ‘write me a marketing plan for a software company; write me a sonnet in the style of Shakespeare; write me a piece of malware which encrypts hard drives’. The output is impressive, but still a little raw. And while the system resists requests to craft malware, its safeguards can be tricked. So, like all innovations, this power can be both used and abused. Is this an opportunity for our clients to improve their cyber defences? Or is it yet another risk?
Both, neither, either, yet…
Trawling the internet for information can be a deeply frustrating experience. Search terms and parameters fail to retrieve the right information, or the information available doesn’t quite fit the question. Faulty use of Boolean search techniques also tends to produce surprisingly limited results: a query such as “ransomware” AND “shipping” NOT “phishing”, for instance, can silently exclude the very reports the researcher needs.
The abilities of ChatGPT appear impressive. It seems to provide an intelligent interface that not only interprets and understands the question but also returns a reply in the form the user requires, rather than a list of pages that might hold some of the answer. And if the hype is anything to go by, Machine Learning will improve the output considerably over time.
For a company like Astaara, and our clients, ChatGPT can certainly help: it can facilitate rapid retrieval of information on vulnerabilities and mitigation measures; help monitor risk exposure across our client base; and help sift through large amounts of extraneous data for valid information about data breaches – victims, actors and motivations. At its best, it can democratise search and provide users with better answers, leading to better decisions.
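By way of illustration, the sketch below shows one way an analyst might put this to work: asking a ChatGPT-style model for a plain-English summary of a published vulnerability and its mitigations. This is a minimal sketch, assuming the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, and – in keeping with the caveats later in this piece – the answer still needs to be verified against the official advisory.

```python
# Minimal sketch: asking a ChatGPT-style model for a plain-English
# summary of a published vulnerability and its mitigations.
# Assumes the OpenAI Python client (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable. The model name and the
# CVE identifier below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarise_vulnerability(cve_id: str) -> str:
    """Ask the model for a short, actionable summary of a CVE."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a cyber security analyst. Answer briefly and "
                    "flag anything you are unsure about."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Summarise {cve_id}: affected systems, severity and "
                    "recommended mitigations, in plain English."
                ),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Log4Shell, purely as a well-known example; verify any answer
    # against the vendor's official advisory before acting on it.
    print(summarise_vulnerability("CVE-2021-44228"))
```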
On the flip side, however, ChatGPT could become a significant threat. Its democratisation of search will allow bad actors, as well as good, to mine the internet more efficiently for useful data about their targets; to sharpen their tools, for example by developing more convincing ‘deep-fakery’, filling a phishing email with so much target-corporation jargon that spotting the fake becomes far harder; and to build malware payloads more quickly.
What should our clients do about this?
While this technology is still evolving, it is already showing its potential. At this point, you should consider, at the very least, the areas of activity set out below.
Bottom Line
Boon or menace, this technology will not go away. Just as bad actors are investigating how it can help them make others’ lives miserable (the system can still be manipulated into producing weapons-grade content), so must good users understand the opportunity it provides to help build better systems.
This technology is not a substitute for thought and reasoning. Users must continue to learn, and to deploy their own powers of judgement, reason and common sense. Just because an answer comes out in a particular way doesn’t mean it is right, or that the conclusion it points to is wise. Just as the possibility of malicious use increases as the machine learns how users think and respond to different stimuli, so can users develop better defences.
Capabilities such as ChatGPT make computer user training, education and awareness-raising ever more important. Users need to be trained in what to look for so that they can discriminate between good and bad behaviour, content or even code; to do this they need skills. But we also need those tasked with defending assets in cyberspace to know how to use this kind of AI as a protector.
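To make the ‘AI as protector’ point concrete, here is another minimal sketch: a security team asking a ChatGPT-style model to triage a suspicious email before a human analyst reviews it. It again assumes the OpenAI Python client and an API key in the environment; the model name and sample email are illustrative, and the verdict is advisory only – a trained human must still apply judgement.

```python
# Minimal sketch of "AI as protector": asking a ChatGPT-style model to
# triage a suspicious email before a human analyst reviews it.
# Assumes the OpenAI Python client and OPENAI_API_KEY as before; the
# model name and sample email are illustrative, and the verdict is
# advisory only. A trained human must still apply judgement.
from openai import OpenAI

client = OpenAI()


def triage_email(email_text: str) -> str:
    """Return the model's advisory verdict on a suspicious email."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {
                "role": "system",
                "content": (
                    "You help a security team triage email. Reply with "
                    "PHISHING, SUSPICIOUS or LIKELY SAFE, followed by a "
                    "one-line justification."
                ),
            },
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample = (
        "Dear colleague, your mailbox quota is full. Click "
        "http://example.com/reset within 24 hours or your account "
        "will be suspended."
    )
    print(triage_email(sample))
```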
We cannot uninvent this technology any more than we can forget how to split the atom. We need to recognise that there will be both malign and benign users. While bad actors often enjoy first-mover advantage, there is now an opportunity to make ChatGPT a significant force for good. But despite these advances, don’t forget the basics – and please don’t hesitate to get in touch if you would like to discuss how we can help you manage your cyber risk.