- ISACA NL Journal ·
Five questions for Risk Event speaker Jair Santanna on the Risks of AI Language Models in Cybersecurity
- Webteam - Susan Schaeffer
- 21 October 2024
At our Risk Event on 6 November 2024, Jair Santanna will delve into the Risks of AI Language Models in Cybersecurity. Risk Event Committee member Fook Hwa Tan decided to ask Jair some questions to get a sneak peek at his presentation.
- Nieuws ·
Call for speakers: events
- Webteam - Susan Schaeffer
- 25 September 2024
Increase your visibility in the international ISACA community and the wider professional field by becoming an ISACA NL Chapter speaker.
- ISACA NL Journal ·
How companies can deal with the increase in EU Tech regulations
- Webteam - Susan Schaeffer
- 24 September 2024
Yuri Bobbert - The number of enterprises subject to regulatory requirements has increased significantly under existing and new legislation such as the GDPR, NIS1, NIS2, and DORA. In this article, the author discusses how companies can deal with the increase in EU Tech regulations.
- Nieuws ·
Who will win the Joop Bautz Information Security Award in 2024?
- Webmaster
- 16 September 2024
The nominees for the Joop Bautz Information Security Award 2024 have been announced. In alphabetical order: The winner will be announced during Security-Congres '24 on 9 October at Gooiland in…
- Nieuws ·
Receive a discount on a Security Academy course
- Webteam - Susan Schaeffer
- 9 August 2024
Did you know that ISACA members receive a 10% discount on the entire Security Academy course portfolio? Read here how to redeem this discount.
- ISACA NL Journal ·
Balancing Privacy and Security: Navigating the Future of Federated Learning and AI
- Webteam - Susan Schaeffer
- 7 August 2024
By Armin Shokri Kalisa and Robbert Schravendijk - Artificial intelligence is making its way into more and more applications, which raises privacy concerns about the vast amounts of data required to train these AI models. One proposed solution is a framework called Federated Learning. This framework does not, however, guarantee security against attacks. This article covers how attackers can use backdoor attacks to poison the model resulting from Federated Learning, and what steps can be taken to make it more robust against these attacks.