Gartner: Accelerating Growth – Generative AI 

By Austin Miller

And here we are! The final entry in our breakdown of what to expect from the world of IT in the coming years, using Gartner's Top Strategic Technology Trends for 2021-25. While some of these breakthroughs may still be on the horizon, others are already becoming standard practice in the largest enterprises. Many of these developments are, for now, focused mainly on operations, but for cybersecurity teams the opportunity to automate, expand, and improve capabilities should be the top concern for anyone looking to experiment today! 

But before we can sign off on this series, we need to look at the final entry in Gartner's predictions – Generative AI. Turning artificial intelligence and machine learning capabilities up to 10 is the basic underlying principle here, so this is more of a philosophical change than a purely technological one. Gartner predicts that generative AI will account for 10% of all data produced within the next few years, up from less than 1% today – a tenfold increase that opens real possibilities for secpros working now. 

What is Generative AI? 

Just like Autonomic Systems start to amend and change themselves to suit their environment(s), Generative AI applies that logic to artificial intelligence and machine learning. As said above, we are turning the capabilities of AI/ML up to 10 to make sure that predictive models are as effective as possible. Right now, this approach is mostly used for operational or sales tasks, but the possibilities are very clear for people working in data science or dealing with large amounts of data in cybersecurity. 

Generative AI is an improvement to current AI models that allows for better generation of new, original, and realistic artifacts based on sample data. “But that already happens, Austin!”, I hear you say – of course, but the step forward isn’t in the creation of models; it is in the ability to create a realistic likeness of the sample data on a large scale that doesn’t repeat itself. The end goal of this process is to place Generative AI at the centre of an “engine of rapid innovation for enterprises”, allowing for better performance in data analysis, operations, and product development. 

How does this work in the real world? 

Theory's great, but we need real-life results! Thankfully, the UK Financial Conduct Authority (FCA) has announced that it has implemented this kind of data collection and processing in its operations. 

Starting from a sample of five million real payment records, the FCA built a synthetic dataset using Generative AI. The organization expects that this dataset will cut down on the review process and support stronger, more responsive fraud models that a) are more accurate and b) do not compromise the data of the individuals in the original dataset. 
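To make the idea concrete, here is a minimal sketch of synthetic tabular data generation. This is not the FCA's actual method (which it has not published); it simply fits the joint distribution of some toy numeric payment features and samples new records from it, so no synthetic row is a copy of a real one.

```python
import numpy as np

def fit_gaussian_model(real: np.ndarray):
    """Estimate the column means and covariance of numeric features."""
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return mean, cov

def sample_synthetic(mean, cov, n, seed=0):
    """Draw synthetic records that mimic the real data's joint
    distribution without reproducing any individual record."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n)

# Toy "real" payment features: amount, hour of day, merchant risk score.
rng = np.random.default_rng(42)
real = rng.normal(loc=[50.0, 13.0, 0.2],
                  scale=[20.0, 4.0, 0.1],
                  size=(5000, 3))

mean, cov = fit_gaussian_model(real)
synthetic = sample_synthetic(mean, cov, n=5000)
print(synthetic.shape)  # (5000, 3)
```

Real generative models for this task (GANs, variational autoencoders) capture far richer structure than a single Gaussian, but the privacy-preserving principle is the same: model the distribution, then sample from the model rather than the data.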

What does this mean for cybersecurity? 

Predictive models are a good basis for cybersecurity professionals to work from, but they have their limitations – sometimes the adversary is unpredictable, or will use unfamiliar tactics that fall outside the techniques they have been associated with in the past. Bad news for existing tools, right? Well, it's not quite as severe as we initially thought. 

Using Generative AI, cybersecurity teams can implement defense and attack techniques that detect and stop these adversarial tactics. From a defensive standpoint, the usefulness of the technology isn't only in its ability to spot threat actors that are currently attacking the system (although that is a strength which can be expanded upon as the system becomes more robust), but in actually assessing the attack surface and spotting vulnerabilities. These advances can be laid out in three parts: 

Accelerated threat detection 

Better AI means better response to emerging threats. Attack surface management techniques can be driven by AI networks and asset mapping/visualization software that allows for the automated creation of computer-intelligible and human-intelligible data. Some organizations (that are still keeping their cards close to their chests) have extended this capability to spot irregularities and problems in their supply chain. With a cooperative security model that extends from the major enterprises to the humble supplier, we could be looking at a more holistic approach to threat management. 
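As a sketch of what "spotting irregularities" can look like in code, the snippet below runs an off-the-shelf anomaly detector over toy per-asset telemetry. IsolationForest here is a stand-in assumption, not a claim about any particular vendor's product; the asset features are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Toy telemetry per asset: [bytes out (KB), connection count].
normal = rng.normal(loc=[500.0, 20.0], scale=[50.0, 5.0], size=(500, 2))
odd = np.array([[5000.0, 300.0]])  # one asset behaving very differently
telemetry = np.vstack([normal, odd])

# Fit an unsupervised model; -1 marks the records it considers anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
labels = model.predict(telemetry)

flagged = np.where(labels == -1)[0]
print("flagged asset indices:", flagged)
```

The appeal for attack surface management is that nothing here needs a signature for the threat: the model only needs a baseline of normal behavior, which is exactly what asset mapping and visualization pipelines already collect.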

Cutting down the boring stuff 

No one likes managing the boring stuff. People say they do, but I don't believe them. Using AI/ML to improve our handling of routine work, and then using Generative AI's capabilities to have that process self-manage – to an extent – means that cybersecurity professionals will be able to turn their hands to development and innovation, not log management. 
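A hypothetical example of "cutting down the boring stuff" in log management: collapse repeated log messages into templates so an analyst reviews a handful of event types instead of thousands of near-identical lines. The regexes and sample logs below are illustrative assumptions, not a production parser.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Replace volatile fields (IP addresses, then numbers) with
    placeholders so repeated events collapse to one template."""
    line = re.sub(r"\d+\.\d+\.\d+\.\d+", "<ip>", line)
    line = re.sub(r"\d+", "<n>", line)
    return line

logs = [
    "login failed for user 1001 from 10.0.0.5",
    "login failed for user 1002 from 10.0.0.6",
    "disk usage at 91 percent on host 3",
]

# Thousands of raw lines become a short, counted list of event templates.
counts = Counter(template(line) for line in logs)
for tmpl, n in counts.most_common():
    print(n, tmpl)
```

Template mining like this is the classical half; the Generative AI half is letting a model propose the templates and triage rules itself, rather than having an engineer hand-write the regexes.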

Evolving the human role 

Don't worry – I'm not about to imply that transhumanism (the study of enhancing human capabilities through technological implants and the like) will start with cybersecurity professionals. Instead, we are looking at the ability to effectively discriminate between and manage false positives, false negatives, and other difficult-to-manage data points. 
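One minimal sketch of that false-positive discrimination, assuming past analyst verdicts are available as training labels: fit a model that down-ranks new alerts resembling historical false positives, so humans spend their time on the alerts most likely to be real. The features and labels are synthetic toys.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy alert features: [severity score, asset criticality].
X = rng.normal(size=(200, 2))
# Toy analyst verdicts: alerts with a high combined score were real.
y = (X.sum(axis=1) > 0.5).astype(int)

# Learn the analysts' historical judgment from their verdicts.
clf = LogisticRegression().fit(X, y)

new_alerts = np.array([[2.0, 1.5],    # high severity, critical asset
                       [-1.0, -1.0]]) # low severity, low-value asset
probs = clf.predict_proba(new_alerts)[:, 1]  # P(true positive)
print(probs.round(2))
```

The human stays in the loop: the model only orders the queue, and each verdict an analyst makes becomes new training data, which is exactly the enabling-not-replacing dynamic described above.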

AI is not capable of replacing humans in cybersecurity – yet, I will add ominously! Rather than creating services that merely sit alongside the educated professionals engaged in cyberwarfare, Generative AI efforts will enable and enhance the role of the cybersecurity professional, letting them do their job with less effort, more accuracy, and greater job satisfaction. The applications for a SOC are clear to anyone, but the possibility for further developments is still there on the precipice of innovation. 

Stay up to date with the latest threats

Our newsletter is packed with analysis of trending threats and attacks, practical tutorials, hands-on labs, and actionable content. No spam. No jibber jabber.