AI-Powered: Corporate Strategies Versus Hacker Tactics
How can companies use artificial intelligence to respond more efficiently to threats, and what innovative attack methods are hackers developing with the help of AI? Our author provides a concise insight into the future of cybersecurity.
The threat landscape in the field of cybersecurity has always been subject to significant changes, requiring companies to continuously adapt to new threat scenarios. Now, the use of artificial intelligence (AI) offers numerous opportunities for both threat actors and companies. It is therefore not surprising that many surveys and studies show that German companies currently consider AI-supported attacks as the greatest cyber threat.1
This article examines the use of AI both from the perspective of companies, which can significantly increase the efficiency and effectiveness of analyzing large volumes of data through AI and from the perspective of attackers, who can create new attack vectors and improve existing techniques using artificial intelligence.
Companies: Implementing AI Into Cybersecurity Strategy
Proactive threat and anomaly detection are essential for identifying cyber risks. By integrating large language models (LLM) into cybersecurity software, routine tasks and the analysis of extensive log files can be made significantly more efficient and effective. This relieves the already scarce human resources in the IT departments of companies, which must deal daily with analyzing ever-growing data volumes. Using AI in this area allows for the automated classification of false alarms and provides the opportunity to evaluate truly relevant alert messages and artifacts with human intelligence and take appropriate actions. The automated detection and isolation of threats could in the future thwart attack attempts more quickly and enable a more efficient response to IT security incidents.
This is particularly advantageous for small and medium-sized enterprises, which cannot ensure 24/7 availability of in-house IT security staff – as experience shows, most targeted attacks occur outside regular working hours. This way, smaller companies can also develop customized security profiles, evaluate large data volumes in a targeted manner, and derive the right measures to protect critical company data.
However, this integration comes with several challenges. These include the need for specialized expertise to implement and adapt AI algorithms and ensure compliance with applicable legal regulations. Additionally, companies must ensure that their AI-supported security solutions are continuously updated and adapted to new threats.
Hackers: Using AI for Innovative Attack Methods
Cybercriminals are constantly seeking new ways to bypass security systems and exfiltrate sensitive company data. The offensive use of AI lowers entry barriers and has emerged as the most promising technology for hackers. With the help of AI, cybercriminals can create automated attacks, design personalized attack strategies, and even deploy adaptive tactics that adjust to the defense strategies of companies. This allows them to exploit vulnerabilities more quickly and respond more effectively to countermeasures from the cybersecurity industry. The Darknet already hosts the first unchecked chatbots trained for criminal purposes.
Even though AI does not yet have the potential to independently write advanced malware, it is already possible today to modify existing malware using LLMs to make automatic detection more difficult. The automation of password attacks with AI is therefore a staple in the toolkit of hackers.
The increasing accessibility of deceptively realistic deepfakes (images, videos, and voice messages generated by artificial intelligence) will significantly increase the execution of successful social engineering attacks, including phishing and vishing (deception by phone). These digital threats have the potential to trap even well-prepared companies in the attackers' web, as the human factor remains the most important gateway for successfully executed attacks. Initial cases where employees were deceived in manipulated video conferences using deepfakes and prompted to transfer payments worth millions have already garnered media attention2, showing the high damage potential of such attack techniques.
As of today, generative AI models are already actively used by threat actors, enabling efficient, realistic output of a range of results that support their criminal activities. These include tools like WORM GPT, FRAUD GPT, or WOLF GPT.
The use of AI for successful attacks on AI systems is another threat that will grow in the future. Many companies are increasingly implementing AI applications, for example, for informed decision-making, to support the automation of processes, or as an internal knowledge platform, to name just a few application examples. An underestimated danger here is the embedding of AI-based attack strategies in AI models capable of exfiltrating, manipulating data, and influencing AI-based decisions in the interest of cybercriminals. This offers hackers a variety of possibilities, the extent of which is currently difficult to assess.
The Future of Cybersecurity in the Era of AI
The use of artificial intelligence is indispensable, especially as AI will further establish itself as a base technology (see also the impact of AI on the cyber threat landscape v1.0, BSI).3 As with any new technology, there are opportunities and risks that must be closely monitored.
Artificial intelligence will continue to improve threat detection and response compared to conventional, sometimes static, security methods. Time-consuming routine tasks can be automated. The analysis of large data volumes is possible in real-time. Behavioral analyses and unusual actions can be detected more quickly. Of course, there are also challenges that should not go unmentioned. Besides quantity, the quality of the training data is an important success factor. Inaccurate or wrong decisions or conclusions, for example, if a real threat is ignored, can have fatal consequences for the security situation of a company. Additionally, AI is still not able to replicate human intellectual abilities such as intuition, empathy, critical and abstract thinking. The human factor will be decisive for the success of implementing and using AI.
The integration of AI into cybersecurity has already changed the dynamics between companies and hackers today, creating new challenges and opportunities for both sides. Companies must continue to focus on proactive approaches to detect potential attacks as early as possible and initiate adequate countermeasures. At the same time, they should invest in the implementation of proactive cybersecurity and AI-supported security solutions. For the eventuality of a successful attack, companies should also be prepared and, for example, have coordinated processes with external service providers for immediate assistance in case of damage.
Decision-makers in IT security are advised to engage intensively with the use of AI and ensure that its implementation in their own cybersecurity strategy becomes a fixed component. Despite the existing limitations, in the future, it will be nearly impossible to manage and control the multitude of cyber risks without the use of AI.
Security experts should also be involved in the implementation of AI applications in the company to ensure, in addition to legal aspects, a thorough consideration of security and technology aspects.
As the cybersecurity threat landscape continues to evolve, companies need to harness the power of AI, leveraging its capabilities as part of their cybersecurity strategy and proactively defend against threats. As cybersecurity professionals, it is our responsibility to adequately protect organizations, employees, and our society from cyber criminals and threats from cyberspace.
1. Bitkom Research 2024
2. https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
3. https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/How-is-AI-changing-cyber-threat-landscape.html
Sign up to receive all the latest insights from Ankura. Subscribe now
© Copyright 2024. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC., its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.