Skip to content

Professionals in the cyber field advocate for a temporary halt in the implementation of AI technologies, due to chaos surrounding the securing of associated tools.

Over one-third of security professionals acknowledge that advancements in generative AI are outpacing their abilities to effectively control and manage it.

Experts in cybersecurity advocate for a temporary halt in the implementation of AI, as they...
Experts in cybersecurity advocate for a temporary halt in the implementation of AI, as they struggle to maintain the security of currently used tools.

Professionals in the cyber field advocate for a temporary halt in the implementation of AI technologies, due to chaos surrounding the securing of associated tools.

In the rapidly evolving landscape of technology, one area that stands out is the adoption of generative AI. However, this innovation has also exposed a gap between innovation and security readiness, according to Gunter Ollmann, CTO at Cobalt.

More than a third of security leaders and practitioners admit that generative AI is moving faster than their teams can manage. This rapid adoption has led to concerns about agentic AI rewriting the rules of risk and the foundations of security needing to evolve in parallel.

Enterprises are worried about agentic AI security risks. In fact, more than seven-in-ten (72%) cited generative AI-related attacks as their top IT risk. Organizations tend to prioritize quicker, often simpler fixes for Large Language Model (LLM) vulnerabilities. However, the resolution rate for high-severity vulnerabilities found in LLM penetration tests is a concerning 21%, which is lower compared to other categories of penetration tests.

To effectively manage AI-related cybersecurity threats and secure LLMs, a multi-layered approach is necessary. This approach includes a combination of governance, continuous monitoring, threat detection, training, and mitigation of specific AI-enabled risks.

  1. Establish AI Governance and Risk Management Frameworks: Organizations should create comprehensive policies governing AI use, including defining responsible AI deployment, compliance adherence, security protocols, and situational oversight by cross-functional teams (e.g., IT, legal, compliance). This governance should include continuous risk assessment and framework updates to keep pace with evolving AI threats.
  2. Continuous Monitoring and Behavior Analysis: Due to the expanded attack surface of AI systems, continuous monitoring of AI-generated network traffic and model behavior is essential to detect abnormal patterns or covert attacks early. Using generative AI for improved threat detection can enable identification of subtle anomalies and deviations from normal behavior, which traditional tools may miss.
  3. Threat Detection and Incident Response Automation: Generative AI enhances the automation of security operations such as log parsing, vulnerability scanning, alert triage, and even creation of tailored remediation scripts. This facilitates faster response times and reduces human error and burnout.
  4. Regular Security Assessments and Testing: Conducting adversarial testing, including simulated attacks (e.g., adversarial inputs or model manipulation attempts), helps identify vulnerabilities in LLMs. Regular audits should include checking for model poisoning, inversion attacks, and privacy leaks.
  5. Cybersecurity Training with Adaptive Scenarios: Using generative AI to build realistic, dynamic cybersecurity training simulations improves preparedness for responding to AI-enhanced threats. These adaptive scenarios help teams practice incident response under evolving conditions.
  6. Mitigate Specific AI Security Risks: Key AI-related risks include adversarial attacks, data poisoning, model inversion, privacy leakage, and misuse of generative AI capabilities (e.g., creating disinformation). Practices to mitigate these risks include applying robust data validation, encrypting sensitive data, controlling model access, and employing anomaly detection.
  7. Balancing AI Adoption and Risk: While generative AI improves security posture significantly, it also empowers attackers. Hence, strategies must balance harnessing AI’s benefits with strong controls to prevent misuse and rapidly respond to emerging threats.

Security teams need to shift from reactive audits to programmatic, proactive AI testing to address the challenges posed by LLMs. Half said they wanted more transparency from software suppliers about how they detect and prevent vulnerabilities.

In conclusion, effectively managing AI-related cybersecurity threats and securing LLMs involves an ongoing, multi-layered approach combining governance, continuous AI behavior monitoring, automated response, adversarial testing, tailored training, and targeted risk mitigation practices. Organizations should build adaptive, proactive security programs to handle the evolving AI threat landscape in 2025 and beyond.

  1. To ensure the secure implementation of generative AI in the business sector, it's crucial for companies to establish robust AI Governance and Risk Management Frameworks, outlining AI deployment responsibilities, compliance adherence, security protocols, and cross-functional oversight.
  2. In the context of increasingly complex AI systems, continuous monitoring and behavior analysis of AI-generated network traffic and models is essential to detect abnormal patterns and potential covert attacks early, using generative AI to enhance threat detection capabilities.
  3. By integrating generative AI in security operations, threat detection and incident response times can be significantly reduced, with automation facilitating faster responses, minimizing human error, and alleviating burnout.

Read also:

    Latest