16 February 2023

How AI has changed Cyrex’ Penetration Testing

One of our recent articles spoke about ChatGPT and its impact on the world of technology. In particular, we focused on how AI technology is changing game security. Now it’s time to talk about how our security engineers are adapting their penetration testing to AI technology.  

Cyrex’ Attitude to ChatGPT

While ChatGPT can be used for a multitude of purposes, the Cyrex team always approaches new technology with an open mind. Our goal is a secure and stable gaming world, and that cannot be achieved by resting on our laurels! It demands constant growth and innovation. With something like ChatGPT on the scene, our team immediately saw how it might be used. We’re always ready for the next potential tool, and we’ve been using ChatGPT regularly since two days after its launch!  

Let’s dive into both sides: how something like ChatGPT can aid and hinder the gaming world, and how the Cyrex team of expert security engineers is finding new ways to utilise its abilities while hampering hackers who use it for ill.  

What can ChatGPT do in penetration testing?

Currently, when used in penetration testing, ChatGPT shines brightest alongside White Box services.  

With or without access to source code, ChatGPT has huge potential. It can analyse code and tell you its probable function, it can look at existing code and point out possible vulnerabilities, and it can even take an API request and offer suggestions for tests and payloads. When code returns an error, it can analyse that too and offer potential fixes. ChatGPT does, however, struggle with larger amounts of code. The bigger the codebase, the more likely it is to deliver nothing useful.  
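As an illustration, here is a hypothetical snippet of the kind an engineer might paste into ChatGPT for review (the function names and database schema are invented for this example). A good reviewer, human or AI, should flag the string-formatted query as a classic SQL injection vulnerability and suggest the parameterised version:

```python
import sqlite3

# Hypothetical snippet one might submit for review.
# The string-formatted query is a textbook SQL injection flaw:
# user input becomes part of the SQL itself.
def find_user_unsafe(conn, username):
    query = f"SELECT id FROM users WHERE name = '{username}'"  # vulnerable
    return conn.execute(query).fetchall()

# The suggested fix: a parameterised query keeps user input as data,
# never as executable SQL.
def find_user_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Calling the unsafe version with a payload like `x' OR '1'='1` returns every row in the table, while the parameterised version treats the same input as harmless data.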

As for its negative uses, ChatGPT is dangerously effective for malicious actors generating new exploits. Anti-virus systems, for example, work similarly to ChatGPT by recognising patterns. But exploits and viruses created with ChatGPT are not based on any existing pattern; they’re bespoke.  

It’s very important to remember that ChatGPT is just a collection of knowledge that is already available. It will deliver results of varying validity, but always with absolute confidence. Our security engineers have found it to be a fantastic assistant, but they understand it cannot be trusted absolutely. Anyone using its services should verify its answers against a reliable, human-created source.  

ChatGPT and Reverse Engineering Assembly Code

Whenever code is written, whether in a high-level language like C#, Python, or Java, or in a lower-level language like C++, a process follows it: compiling the code and stripping out any data the system doesn’t need, all for the sake of speed and efficiency.  

The code we write and see isn’t really what our system speaks. At the lowest level, it runs the compiled assembly code, which consists of incredibly repetitive mathematical operations and values being moved around. Naturally, given its scale, complexity, and repetitiveness, humans struggle to understand it. But ChatGPT, a machine built on recognising patterns, has no such issue. It is incredibly efficient at this task.  
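To illustrate the gap between what we write and what actually runs, here is a small sketch using Python’s built-in `dis` module. Python bytecode is an analogy rather than true CPU assembly, but it shows the same effect: a one-line calculation expands into a longer list of repetitive, stack-based instructions.

```python
import dis

# A simple, human-readable calculation (invented for this example).
def damage(base, multiplier):
    return base * multiplier + 1

# Disassembling it reveals the low-level instructions the
# interpreter actually executes: loads, a multiply, an add,
# and a return, one tiny step at a time.
dis.dis(damage)
```

Even this trivial function produces more instructions than it has lines of source; scale that up to a full game binary and the human-unfriendliness of the low-level view becomes obvious.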

The main issue, however, is that assembly code is usually very large, and we’ve established that ChatGPT struggles more and more as code gets bigger. Assembly code also jumps around constantly between locations. This, combined with the size, makes it quite difficult for ChatGPT at present.  

ChatGPT and Obfuscating Code

Figuring out and bypassing obfuscated code is all a game of time and patterns. As a security measure, obfuscating code works similarly to compilation: unnecessary data is removed, information is minimised, and variable names no longer relate to one another. It’s an entirely anti-human security measure.  

It is a time-intensive task for humans. For ChatGPT, it is a non-existent issue. It is built on pattern recognition and doesn’t care what variable A or B is called, only that their purposes and functions are matched and correctly linked.  
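Here is a minimal sketch of what obfuscation does to readability (both functions are invented for this example). To a human, the first version explains itself and the second is opaque; to a pattern-matcher, the two are structurally identical.

```python
# A readable function: names tell you exactly what it does.
def apply_discount(price, discount_percent):
    discounted = price * (1 - discount_percent / 100)
    return round(discounted, 2)

# A hypothetical obfuscated equivalent. The names carry no
# meaning, but the pattern of operations is unchanged --
# which is exactly what pattern recognition keys on.
def a(b, c):
    d = b * (1 - c / 100)
    return round(d, 2)
```

Both functions return the same result for the same input; only the human-facing information has been stripped away.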

The catch is that obfuscating code is a standard security measure in gaming. It makes it difficult for malicious actors and security engineers alike to get in (for Grey and Black Box testing, anyway!). The extra time we spend navigating obfuscated code adds to project length and budget requirements. ChatGPT allows that time to be cut down significantly. However, it also allows hackers to bypass obfuscation just as swiftly. Due to ChatGPT’s existence, this security measure has lost a serious degree of effectiveness.  

These are just a few of the uses and impacts of ChatGPT on penetration testing and cybersecurity. Keep an eye on our pages and our website for more on ChatGPT and how it’s changing the digital world.  

If you’d like to benefit from our innovative methods and comprehensive security experience, get in touch. We are the gold standard for game security and are always ready for the next challenge!