Leveraging the Potential of Generative AI While Maintaining a Secure Enterprise

By Howie Xu, Vice President of Machine Learning and AI at Zscaler.

Generative AI has been thrust into the spotlight this year with the emergence of ChatGPT, both wowing and frightening people with its ability to create quality content in a matter of seconds. While the Large Language Model technology underpinning it has been around for a while, it reached a tipping point this year, moving beyond the cute but not terribly useful AI chat tools that consumers have experienced to date. With ChatGPT, consumers can for the first time genuinely feel how far the technology has evolved, and start to understand how it can make their jobs easier and free them to focus on tasks that require more thought.

I truly believe that generative AI, with ChatGPT as its standard-bearer, could be this decade's iPhone or Netscape moment: fundamentally changing the way we work and the way we live. In the next few years, businesses will have to adapt their practices and processes to accommodate it.

However, as with all new digital solutions, companies need to understand not only how best to use them to create advantages, but also their limitations, in order to protect against possible dangers or issues. Data, and data privacy in particular, are hot-button topics, so security has to be front of mind for any company considering generative AI in the near or not-so-near future.

Creating ‘super staff’

The buzz around ChatGPT and generative AI is currently so pervasive that many employees will assume the technology is mature enough to threaten their jobs if organizations begin building it into everyday workstreams. While it is certainly the most evolved and impressive form of AI the general public has seen to date, it is far from the finished article. ChatGPT has many limitations that prevent it from working without human intervention. We are not yet at a stage where we can implicitly trust anything it produces: it can only summarize information it has accrued from the internet, pulling from multiple sources that may not be fully accurate, so its output can be laden with errors or inaccuracies.

This is why you always need a human in the loop if you strive for accuracy. A human expert must review the content being produced and make any necessary changes to ensure it is correct before it is distributed.
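
To make that concrete, here is a minimal sketch of what such a human-in-the-loop gate could look like. The `generate_draft` function is a hypothetical stand-in for whichever LLM service a business has chosen; nothing below represents a real ChatGPT API.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    text: str
    approved: bool = False

def generate_draft(prompt: str) -> Draft:
    # Hypothetical stand-in for a call to an LLM service; in practice
    # this would invoke whichever API the business has adopted.
    return Draft(prompt=prompt, text=f"[model output for: {prompt}]")

def review(draft: Draft) -> Draft:
    # A human expert inspects the text and must explicitly approve it;
    # nothing goes out on the model's say-so alone.
    print("--- DRAFT FOR REVIEW ---")
    print(draft.text)
    answer = input("Approve for distribution? [y/N] ").strip().lower()
    draft.approved = answer == "y"
    return draft

if __name__ == "__main__":
    draft = review(generate_draft("Summarize Q3 security incidents"))
    print("Published." if draft.approved else "Rejected; nothing distributed.")
```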

That said, although the technology may not be fully formed yet, an employee who understands its limitations can still find ways to leverage it to support their role, becoming a more productive and efficient version of themselves: a ‘super’ version, if you will. And that is something that will be hugely tempting for employees and employers alike.

From a security perspective, this kind of support is crucial in the fight against unknown threats, which introduce new risks or exploit old ones in modern enterprises. Machine learning and AI help mitigate the risk of unknown threats: in the short time it takes for those models to identify an unknown threat as a genuine one to be blocked, security is improved for the organization. AI, in other words, makes a cyber technology better than it was before. However, to achieve this, organizations have to be holistic in their approach to these technologies. For example, conventional AI in a security platform can determine that a piece of software is malicious. With ChatGPT, we may be able to produce a full-blown report on that piece of malware with higher confidence, after cross-checking other data points too. So, for me, the next thing in cyber is using AI as part of a holistic and interconnected approach to security.
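
As a rough illustration of that holistic idea, the sketch below cross-checks a conventional model's verdict against other data points before drafting a report. The scores, signal names, and threshold are illustrative assumptions, not any vendor's actual detection pipeline.

```python
# Illustrative only: the scores, signals, and threshold below are
# hypothetical stand-ins, not any vendor's actual detection pipeline.

def conventional_verdict(sample_hash: str) -> float:
    """A conventional ML model's maliciousness score in [0, 1]."""
    return 0.87  # stubbed score for the sketch

def cross_check_signals(sample_hash: str) -> dict:
    """Other data points a holistic platform could consult."""
    return {
        "sandbox_detonation": True,   # behaved maliciously when run
        "threat_intel_match": True,   # hash appears in intel feeds
        "anomalous_traffic": False,   # no odd network behavior seen
    }

def build_report(sample_hash: str) -> str:
    score = conventional_verdict(sample_hash)
    signals = cross_check_signals(sample_hash)
    corroborating = sum(signals.values())
    confidence = min(1.0, score + 0.05 * corroborating)
    verdict = "BLOCK" if confidence >= 0.9 else "REVIEW"
    # In the article's vision, a generative model would turn these
    # structured findings into a full written report for analysts.
    return (
        f"Sample {sample_hash}: model score {score:.2f}, "
        f"{corroborating}/{len(signals)} corroborating signals, "
        f"confidence {confidence:.2f} -> {verdict}"
    )

if __name__ == "__main__":
    print(build_report("e3b0c44298fc1c14"))
```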

The potential negative impact

If generative AI like ChatGPT can boost the performance of staff, it can also do the same for hackers. Traditionally, if somebody sent employees a phishing email pretending to be a senior member of staff asking for a money transfer, the likelihood of fooling anyone would be low, thanks to security education and a number of tell-tale signs that the message wasn't credible. Generative AI, however, can personalize the email using data on each target pulled from the internet, without requiring any extra effort from the attacker. By dropping in a reference to a recent holiday or a beloved pet, it can make it much harder for each person to see through the lies, and much more likely that they respond to the phishing attack.

There are also potential pitfalls internally. Generative AI is another solution that will need to be bolted onto an organization's digital infrastructure, which expands the attack surface. Data leakage is already a potential concern. Almost every big tech name is currently creating its own OpenAI-like service, and organizations will have to choose which to go with for their business. The chosen solution will then need to be granted access to the intranet to fully support staff, and additional IoT devices may need to be provided, both in the office and for those working at home, to support whatever day-to-day tasks the AI tool picks up. That is another load of devices and solutions for the IT security team to keep watch over, and a whole new avenue for attackers to gain access to a business's data and assets.
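
One hedge against that leakage risk is to scrub sensitive data from prompts before they ever leave the intranet. The sketch below shows the idea with a handful of illustrative regular-expression rules; a real data loss prevention policy would be far broader and tuned to the organization's own data.

```python
import re

# Illustrative patterns only; a real DLP policy would be far broader
# and tuned to the organization's own data.
REDACTION_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b\d{16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]

def redact(prompt: str) -> str:
    """Scrub known sensitive patterns before a prompt leaves the network."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize ticket: user jane.doe@example.com, api_key=abc123"
    print(redact(raw))
    # -> "Summarize ticket: user [REDACTED-EMAIL], [REDACTED-KEY]"
```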

Secure to succeed

Although generative AI may improve efficiencies internally, broadening your attack surface and opening your business to increased sensitive data leakage or external attacks seems like a dangerous trade-off. But if generative AI is as revolutionary to the workplace as the browser and the smartphone were before it, can organizations really afford to ignore it in the name of security?

If you have the right security architecture in place, you don't need to fear adding more devices or solutions from any provider, or increased employee risk. Where a traditional security architecture would let an attacker who breaches the perimeter reach any asset inside its walls, a zero trust architecture isolates the device under attack and prevents it from providing access to anything else. So there is no reason to fear adding a greater number of IoT devices, for example, around the office or at home to provide generative AI assistance to staff. Even if one is compromised, whether by a deliberate attack from external players or by an accidental security lapse on the part of an employee, the zero trust architecture isolates that device from the wider intranet and minimizes the attack surface.
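
In code terms, the zero trust principle boils down to verifying every request on its own merits rather than trusting network location. The following sketch is purely illustrative, with made-up identity and device-posture stores standing in for the real services a deployment would consult.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_id: str
    resource: str

# Illustrative state; a real deployment would consult identity,
# posture, and policy services, not in-memory dictionaries.
DEVICE_HEALTHY = {"laptop-42": True, "iot-cam-7": False}  # iot-cam-7 compromised
ENTITLEMENTS = {("alice", "finance-app"): True}

def authorize(req: Request) -> bool:
    """Zero trust: every request is verified; network location grants nothing."""
    if not DEVICE_HEALTHY.get(req.device_id, False):
        return False  # unhealthy or unknown device is isolated outright
    # Access is scoped per user-resource pair, not "inside the wall = trusted".
    return ENTITLEMENTS.get((req.user, req.resource), False)

if __name__ == "__main__":
    print(authorize(Request("alice", "laptop-42", "finance-app")))  # True
    print(authorize(Request("alice", "iot-cam-7", "finance-app")))  # False: quarantined
    print(authorize(Request("alice", "laptop-42", "hr-app")))       # False: no entitlement
```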

Generative AI can also support security teams, as mentioned above. Using the wide and varied contextual data available to it, AI can make human-like decisions at far greater speed and scale than any human could. It will enable cybersecurity teams to be more agile than before.

Conclusion

There may still be much more to come, but the promise of ChatGPT and generative AI technology is already evident to many businesses. Early adopters who are finding ways to implement it within security and business teams may be able to reap the benefits today. But as with any new technology, there will always be those on the outside looking to exploit any weakness in the solution, so you need a security architecture that can adapt quickly to new products and keep the digital transformation process seamless.
