
Five questions for Risk Event speaker Jair Santanna on the Risks of AI Language Models in Cybersecurity

At our Risk Event on 6 November 2024, Jair Santanna will delve into the Risks of AI Language Models in Cybersecurity. Risk Event Committee member Fook Hwa Tan decided to ask Jair some questions to get a sneak peek at his presentation.

Fook Hwa: ‘Your upcoming presentation at the ISACA Risk Event is titled “The Double-Edged Sword: Risks of AI Language Models in Cybersecurity.” What inspired you to focus on this topic, and why is it critical for cybersecurity professionals to understand these risks now?’

Jair: ‘I tailored this presentation specifically for the ISACA Risk Event audience, focusing on the critical balance between the risks and opportunities of new technologies like AI generative models. These technologies, while offering immense potential, also introduce significant risks that cybersecurity professionals must understand and address. My goal is to emphasize that we cannot allow these risks to overshadow the transformative opportunities AI brings to the field. Instead, we must learn to manage them effectively to unlock AI’s full potential in enhancing cybersecurity.’

Fook Hwa: ‘AI tools like GPT-4 have the potential to enhance cybersecurity defenses, but you highlight how they can also be exploited by attackers. Can you share a real-world example where AI was weaponized in a cyberattack, and how should organizations prepare for such threats?’

Jair: ‘I will be covering several examples in my presentation, both theoretical and actual cyberattacks. Among the most common are phishing and business email compromise (BEC), which attackers can enhance using AI. A particularly interesting real-world case occurred in 2019, when a CEO fraud attack used deepfake audio (vishing) to impersonate a CEO and trick a U.K.-based energy company into transferring $243,000 to cybercriminals. To prevent such incidents, organizations should focus on employee training, raising awareness about AI-enabled threats like deepfakes, and ensuring strict verification procedures are followed before authorizing financial transactions or sensitive actions.’

Fook Hwa: ‘Ethical concerns, such as bias and privacy breaches, are often mentioned in the context of AI. In your experience, how can organizations balance leveraging AI’s capabilities while ensuring ethical responsibility and data protection?’

Jair: ‘Organizations can balance leveraging AI’s capabilities with ethical responsibility and data protection by implementing a few key strategies. First, they should prioritize transparent AI development, ensuring that systems are explainable and that decision-making processes can be audited to identify and address potential biases. Second, they need to enforce data minimization and anonymization practices to protect privacy, ensuring that only the necessary data is collected and stored securely. Third, incorporating fairness assessments and diverse datasets into AI training can help reduce bias. Finally, establishing ethical oversight committees and adhering to regulations such as the GDPR and the EU AI Act ensures that AI innovations are deployed responsibly while maintaining trust and protecting users’ rights.’
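As a minimal illustration of the data-minimization practice Jair mentions, the Python sketch below redacts recognizable personal data before a text, such as a support ticket, is passed to an external language model. The patterns, placeholder tags, and sample data are hypothetical; a production setup would rely on a dedicated PII-detection library and patterns tuned to local formats.

```python
import re

# Hypothetical patterns for common PII; a production system would use a
# dedicated PII-detection library and patterns tuned to local formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace recognizable PII with placeholder tags before the text
    leaves the organization, e.g. inside a prompt to an external LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

ticket = "Customer jan.jansen@example.com (+31 6 12345678) disputes IBAN NL91ABNA0417164300."
print(minimize(ticket))
# -> Customer [EMAIL] ([PHONE]) disputes IBAN [IBAN].
```

Only the redacted text crosses the organizational boundary, which keeps the prompt useful for the model while the original identifiers stay in the internal system of record.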

Fook Hwa: ‘Your presentation also touches on the operational risks of over-reliance on AI in cybersecurity. What practical steps can businesses take to prevent becoming too dependent on AI, and how can they complement AI solutions with human expertise?’

Jair: ‘To avoid over-reliance on AI in cybersecurity, businesses should adopt a human-in-the-loop approach, ensuring that human expertise remains central to decision-making processes. While AI can efficiently handle repetitive tasks and detect anomalies at scale, humans are essential for interpreting nuanced threats, verifying critical alerts, and making judgment calls in ambiguous situations. Practical steps include providing continuous training for cybersecurity teams to keep up with AI advancements and establishing protocols where humans validate high-risk decisions flagged by AI systems. Additionally, organizations should foster a culture of collaborative cybersecurity, where human analysts regularly review AI-driven insights, adjust algorithms, and maintain oversight to ensure the system remains effective and unbiased. By combining AI’s speed and automation with human intuition and contextual understanding, businesses can build a more resilient and adaptive cybersecurity posture.’
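To make the protocol of humans validating high-risk AI decisions concrete, here is a minimal, hypothetical sketch of an alert-routing gate. The threshold, field names, and sample alert are illustrative assumptions, not taken from Jair’s presentation.

```python
from dataclasses import dataclass

# Hypothetical threshold; a real value would follow from the organization's
# risk appetite and the measured precision of the AI triage model.
HUMAN_REVIEW_THRESHOLD = 0.3

@dataclass
class Alert:
    source: str
    description: str
    ai_risk_score: float  # 0.0 (benign) .. 1.0 (critical), assigned by the model

def route(alert: Alert) -> str:
    """Let the AI auto-handle the repetitive, low-risk bulk, but escalate
    every high-risk or ambiguous alert to a human analyst."""
    if alert.ai_risk_score < HUMAN_REVIEW_THRESHOLD:
        return "auto-handled"
    return "queued for human analyst"

print(route(Alert("EDR", "Unsigned binary spawned PowerShell", 0.82)))
# -> queued for human analyst
```

The design point is the default: anything the model is not confidently benign about lands with a person, so automation speeds up triage without removing human judgment from consequential calls.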

Fook Hwa: ‘You wear multiple hats as a principal researcher at Northwave Cybersecurity and an assistant professor at the University of Twente. How do your roles in academia and industry influence your approach to addressing AI risks, and what unique insights can attendees expect from your presentation at the ISACA Risk Event?’

Jair: ‘My dual roles as a principal researcher at Northwave Cybersecurity and an assistant professor at the University of Twente provide a unique perspective on AI risks in cybersecurity. From academia, I approach these challenges with a research-driven mindset, focusing on theoretical advancements, long-term impacts, and exploring emerging risks that may not yet be fully understood in the industry. On the other hand, my industry role grounds me in practical, real-world challenges, where the urgency of mitigating AI risks and protecting against sophisticated attacks takes center stage. This combination allows me to address AI risks holistically, balancing cutting-edge research with actionable strategies.

At the ISACA Risk Event, attendees can expect insights that blend academic rigor with industry pragmatism. I’ll provide a forward-looking view on AI risks, while also offering concrete, practical steps for organizations to mitigate these threats today. This approach ensures that businesses can leverage AI effectively while staying ahead of the rapidly evolving threat landscape.’

‘The Double-Edged Sword: Risks of AI Language Models in Cybersecurity’ – by Jair Santanna

Presentation summary – Imagine receiving a phishing email so impeccably crafted by an AI language model that it bypasses advanced filters and deceives even cybersecurity experts. While AI tools like GPT-4 enhance our defenses, they also equip attackers with sophisticated means to exploit vulnerabilities. This presentation delves into risks such as adversarial attacks, data poisoning, and model inversion. We explore ethical concerns like bias and privacy breaches, operational challenges from over-reliance on AI, and the weaponization of these models by malicious actors. Attendees will gain insights on balancing AI’s transformative benefits with the imperative of security and ethical responsibility.

Visit our Risk Event 2024

Looking forward to Jair’s presentation and those of our other Risk Event speakers? Register now!

About

Jair Santanna

Dr. Jair Santanna is an enthusiastic and passionate principal researcher (@Northwave Cyber Security) and assistant professor (@University of Twente). He is a practical, data-driven, and extremely curious person who loves to share knowledge with the scientific community and with cybersecurity practitioners. He prepares his presentations with you (the audience) in mind, and therefore promises an engaging, enthusiastic, and to-the-point presentation.

Fook Hwa Tan

Fook Hwa Tan is Chief Quality Officer at Northwave and an active ISACA volunteer. He is a trainer for our ISACA NL training courses, an ISACA author, and a longtime Risk Event Committee member.

