Beste bezoeker, u bezoekt onze website met Internet Explorer. Deze browser wordt niet meer actief ondersteund door Microsoft en kan voor veiligheids- en weergave problemen zorgen. Voor uw veiligheid raden wij u aan om een courante browser te gebruiken, zoals Google Chrome of Microsoft Edge.
Search
Close this search box.

The AI Frikandel Conundrum

Artificial Intelligence (AI) is a field of work that has seen immense breakthroughs in the last few years, given its application in the tangible world. AI Technology already indicates a dramatic shift in human resource management and technology dependency. Notwithstanding the advantages, the implementation of this technology without a mature and informed strategy may not be as beneficial as imagined and, in some cases, counterproductive. What does this have to do with the Netherlands’ favourite deep-fried snack? This article will share insights on the risks of rushed AI adoption and provide recommendations to assist in the governance of this emerging technology.

 

By Daniela de Almeida Lourenço, MSc, CISM, CISSP, C|CISO

Information is one of the most valuable assets for an individual, an organisation, or a nation-state. Throughout the ages, information was a currency for power and considerably more challenging to acquire than now. The development of new means of transportation and the invention of the press were leaps in the transfer of information. However, the highest peak in the increase of the availability of information took place with the democratization of the internet. In 2010, the volume of data/information created, captured, copied, and consumed worldwide was estimated at two zettabytes. In 2020, this number exceeded 60 zettabytes, it is estimated that this will increase threefold by 2025[i]. Not only the amount of data has expanded, the access to the internet as well: the United Nations declared in 2011 that it was a “key means by which individuals can exercise their right to freedom of opinion and expression, as guaranteed by Article 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights”[ii], and has reached in 2023 an estimated 90% of the global population[iii].

The overwhelming exposure to data beyond our capacity to process it humanely, or even beyond the necessary computing power until a short time ago, created the need for technology that could simultaneously process this amount of data and convert it to information. The advances in the efforts towards the creation of artificial intelligence enable this ability in several ways, the most obvious machine learning and the nuances of artificial intelligence per se.

According to the European Union (EU), the Netherlands is the European country surveyed, and the percentage of the population with basic digital skills was the highest (83%)[iv], above the EU target for 2030 and considerably higher than in other countries. This digital literacy should go hand in hand with the early adoption of edge technologies, in which artificial intelligence systems are certainly included.

The frikandel

On a seemingly unrelated subject, it is socially agreeable to state that the frikandel, a deep-fried, often garnished sausage, is one of the Netherlands’ most consumed and appreciated snacks. According to the Algemene Kokswaren en Snackproducenten Vereniging (AKSV, the General Association of Manufacturers of Cooking Supplies and Snacks in the Netherlands), the Netherlands consumes 600 million frikandellen annually, humbly followed by the consumption of the kroketten (300 million annually) . This snack is present in most snack bars or fast-food establishments throughout the country. Frikandel-eating contests also take place at different scales, and they are commonly mentioned in tourist guides or forums . This sausage is made of mechanically separated meat (chicken, pork, beef, sometimes horse), bread, emulsifiers, flavor enhancers, among others  , and used to include, until the 80s and according to different sources, “chicken skins, salivary glands and the gastrointestinal tracts of cows and pigs” , or even cow udders. Although no recent studies were found about the literacy of the consumers with regards to the ingredients of the said snack, the consumption was affected after the publication of the study in 1978 but not for long, considering the consumption numbers presented above.

Nowadays, the frikandel producers use different quantities and chemicals in the snack composition, albeit it is not common to observe a consumer at a fast-food stand asking the establishment for the list of ingredients of something they are about to eat, even after the action group Foodwatch filed a lawsuit in 2022 against the Dutch state with the claim that it does not adequately monitor and control the safety in the production of mechanically separated meat .

It can hence be acceptable to claim that the consumers appreciate the end result, even though they are (willingly) oblivious to what happens in the background.

What does a frikandel have to do with AI?

A very interesting study in 2023[v] concluded, after a survey, that respondents’ tendency to approve the usage of AI is much higher than their (self-assessed) knowledge. In other words, the respondents stated they accept the usage of AI even though they are not very knowledgeable about it. Other studies corroborate this finding. Early 2022, Ipsos published a study around awareness and expectations of AI[vi]. If the results of this study are cross matched with the EU literacy study[vii], we are provided with an interesting picture; it reflects a trend of reverse proportionality between digital savviness and awareness of the usage of AI (in other words, the more digitalized the users are, the more they admit they do not know which systems are powered by AI, figure 1). This trend is the most obvious in the case of the Dutch. The same happens if the Eurostat survey is compared with the Ipsos question about trust (figure 2):
Figure 1
Figure 2

The Ipsos study also reveals that the respondents who said Yes to “know[ing] which types of products and services use AI” was, for the average of the countries presented above, 43%. The average of respondents who indicated that they think that “Products and services using artificial intelligence make [their] life easier” is 52%. This indicates that some respondents are optimistic about the usefulness of AI, although they cannot identify where and when it is being used.

The UK office conducted one more interesting study for National Statistics[viii] in June 2023. Their Opinions and Lifestyle Survey showed the following: “around one in five (19%) adults said they could explain what AI is in detail, and over half (53%) could provide a partial explanation; one in six (17%) adults reported that they can often or always recognise when they are using AI; (…) given awareness of AI use was self-reported, it is possible that adults’ perceived ability to recognize when they are engaging with AI differs from their actual ability to do so.”[ix]

 

So, the AI <> Frikandel conundrum

AI, like the frikandel, gets to consumers as an easy to use and promising product. The awareness of the models, the datasets themselves and the background construction to achieve the final system or service are not highly relevant and do not outweigh the perceived uses of the technology. Transparency and explainability, which should be foundational criteria to follow when building data models (or when selecting food), are deprioritized in favor of time to market and usability. As the saying goes, “ignorance is bliss”.

More studies are necessary to determine public awareness of risks of AI, including those in organizations. According to an ISACA survey[x], 40% of the organizations do not offer AI training at all, only 17% have a formal and comprehensive policy governing the use of AI technology. Yet73% of the respondents perceive that their employees are using some type of AI. This immaturity in the governance of new technology deems the risks to be exponential and soon challenging to manage and treat.

 

Threats feed on oblivious users

The lack of awareness regarding any subject enables potential misuse of tools, services, or goods. AI systems are no exception, and a lack of awareness of AI can be considered an accidental threat against its users. GenAI hallucinations are a good example of this. According to IBM, these can be defined as “a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate”[xi]. Not knowing that a GenAI system, for instance, in the form of a chatbot, may return incorrect data, is a substantial risk, primarily if that output is used for purposes such as decision-making or young education.

Misinformation can, in this case, be accidental. However, AI is increasingly being utilized not only for misinformation but also for intentional disinformation, as per the incidents that the media have recently reported. There is an increasing risk of AI disinformation affecting even geopolitics, as per a World Economic Forum’s report: “organized campaigns spreading misinformation through social media or other channels can influence public opinion, cast doubt on election integrity and sway election outcomes”[xii].

In terms of threats against AI, NIST groups four main types of attacks[xiii]: evasion, a technique exploited after the system is released to production and is fed adversarial examples; poisoning, by introducing corrupted data during the model training stage; privacy, where filters are overrun or the model is reverse engineered; and abuse, where the actual data that is accessible by the model is tampered with. These attacks add up to the already existing cyberattack techniques. However, the former is much more accessible to perform and with much less IT knowledge. Moreover, the obscure nature of the AI systems may cause damage to confidentiality, integrity, and availability long before it is detected, as the output is not sanitised. This can ultimately cause critical decisions to be made based on inaccurate and even nefarious information.

To illustrate this, the following are hypothetical scenarios of this risk:

  • In anecdotal cases, it could cause a child to fail a school project after using incorrect information about the first image of an exoplanet, as the child blindly followed the information provided by the chatbot[xiv]
  • With a higher impact, it could affect the performance of automated processes such as credit risk evaluation
  • If this risk is extrapolated, it could jeopardize safety if the immature technology is weaponized, i.e. the automation of missile deployments in conjunction with fallible AI-powered technologies that are already being used in war scenarios such as the Israel/Palestinian conflict[xv]
  • Another seriously critical example would be the corruption of a model utilized in the automation of medicine production at scale.

The following test was performed on the 25th of January 2024. Gemini was asked about an individual. Its privacy guardrails were triggered but then easily contoured with semantics, as per Figure 3. Elaborating on the same request, the bot began to hallucinate and to provide incorrect information about the individual.

Figure 3

While these are prompted attacks against AI systems, the mere lack of controls over the AI systems lifecycle causes unintentional effects, such as biased results based on race, gender or religion. For the short period that AI and LLMs have been available to the public, the media have made sufficient reports on the nefarious use of AI without proper sanitization. In particular, the biased algorithm that culminated in the so-called benefit scandal in the Netherlands[xvi], the weaponisation of the GenAI itself[xvii] or even the harm to people based on an incorrect facial recognition mechanism[xviii].

 

Sustainable and adaptive strategy

Whilst the threats are evolving faster than any possible prevention, it is essential to apply foundational principles that can help the public and organisations to defend against the AI attacks, given that the technology is here to stay:

 

1. Adaptive Governance and risk management

ISACA defines Governance as “The method by which an enterprise evaluates stakeholder needs, conditions and options to determine balanced, agreed-upon enterprise objectives to be achieved. It involves setting direction through prioritization, decision making and monitoring performance and compliance against the agreed-upon direction and objectives”[xix]. Governance sets the tone based on priorities, guardrails, opportunities and constraints. One of these guardrails is the legal and regulatory context, changes to which can bring protection against some threats, but can also result in two challenges: a painful transformation of the organization to comply with the successive changes, if the organization is not responsive and agile enough to steer along; or the illusion of security, that is, the assumption that the requirements imposed by the regulatory bodies will resolve all the risks.

Although the AI Act is upcoming in the European Union, the legislation will still need time to be ratified and enforced – and it is mainly a reactive mechanism rather than preventive, with regards to the already existing threats. It is hence relevant for organisations to take proactive and adaptive measures in setting up a strategy to manage this technology, and creating the necessary policies and controls against the threats they are specifically exposed to.

The concept of Adaptive Governance (AG) emerged mostly associated with socio-ecological structures or systems, but it can also be applied to organizations. Unlike traditional Governance with rigid structures and long-term plans, an AG strategy prioritizes flexibility, continuous learning and resilience. In practicality, this would result in replacing five-year Tech plans with two-year plans, or in the emphasis on the prevention of technical debt by evaluating the lifecycle of the technology being acquired, or in the increase in frequency of the risk management activities.

One of the first exercises to be prioritized by risk management is to understand how the risk appetite aligns with the eventual usage and adoption of AI systems, and how the organisation intends to manage it to lower the risk to an acceptable level – bearing in mind that the risk will be a moving target as much as threats evolve, requiring the strategy to adapt to the changing conditions and regulatory setting.

 

2. Ethical ownership, responsibility, accountability

Also inherent to governance, ethical ownership and a responsibility and accountability model need to be established. Whilst there are no requirements for the industry, it is vital to protect both the information we own and the data entrusted to us as stewards. A user, either a natural person or an organization, should be made responsible for: its own use of the AI systems; and for requiring the producers of those AI systems to be ultimately responsible (and liable) for the data they use and how they are instructed their system to use that data to create information. That responsibility takes the form of having sound and sustainable controls in place that enforce transparency, explainability and auditability, trustworthiness, resilience, and respect for basic human rights; and establishing a very direct relationship between the AI system provider and the user. This responsibility and accountability model should also clarify which activities are to be taken by each party throughout the system lifecycle. Adopting a Testing, Evaluation, Verification, and Validation (TEVV) model, such as the one suggested by NIST in their AI Risk Management Framework (RMF)[xx], is a great starting point.

 

3. Awareness and training

Awareness must be structurally promoted as AI literacy in the broadest sense. This includes disseminating reliable information about the responsible and safe use of AI systems, inherent risks, and ethical concerns, as well as the reiteration of the responsibility model and the mechanisms to ensure the quality of the output provided by those systems. This needs to be the role of institutional organizations, private organisations developing or facilitating these systems, and even of cybersecurity practitioners. The approach should be effectively targeted at different audiences, for instance, inclusion of awareness about misinformation and disinformation in the education curriculum for children and teenagers, who are potentially using GenAI to complete their school assignments; individuals with financial-related roles should be informed about the AI-powered attacks including phishing campaigns and deepfakes; individuals that work with sensitive data should receive information about how the information can be poisoned, or guardrails evaded, affecting confidentiality and integrity. These are three examples of many areas to be prioritised concerning awareness, which should ensure that no audience is excluded from proportionate training and education; AI literacy may become a determinant factor for economic and societal development and respective (in)equalities soon.

Why these measures are recommended and applicable to all organizations:

  1. they may help you prepare for upcoming legal and regulatory impositions;
  2. they may reward savings on investment by avoiding further remediation costs in the future and by increasing confidence;
  3. they may result in a competitive advantage for being adaptive and resilient to changes, including the changes to the regulatory requirements;
  4. they enable sustainability, which brings resilience, whilst doing the “right thing” for your users or customers.

Ultimately, they provide you with the necessary knowledge to allow you to make an informed decision. Even if that means you will still eat that “frikandel speciaal” while having a night out like there was no tomorrow, you have the right of choice based on the necessary information!

[i] https://www.statista.com/statistics/871513/worldwide-data-created/, last accessed 27/05/2024

 

[ii] United Nations Human Rights Council, Promotion and protection of all human rights, civil, political, economic, social and cultural rights, including the right to development, Seventeenth session, Agenda item 3, following the Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, accessible through https://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf

 

[iii] https://www.statista.com/statistics/273018/number-of-internet-users-worldwide/, last accessed 27/05/2024

 

[iv] Eurostat (2023), Skills for the digital age, https://ec.europa.eu/eurostat/statistics-explained/index.php?title=Skills_for_the_digital_age#Measuring_digital_skills_in_the_EU, last accessed 15/5/2024

 

[v] Scantamburlo, T. et al. “Artificial Intelligence across Europe: A Study on Awareness, Attitude and Trust.” ArXiv abs/2308.09979 (2023): n. pag., last accessed May 2024

 

[vi] Ipsos (2022), Global Opinions And Expectations About Artificial Intelligence, https://www.ipsos.com/sites/default/files/ct/news/documents/2022-01/Global-opinions-and-expectations-about-AI-2022.pdf, last accessed May 2024

 

[vii] Eurostat, idem

 

[viii] UK Office for National Statistics (ONS), released 16 June 2023, ONS website, article, Understanding AI uptake and sentiment among people and businesses in the UK, last accessed May 2024

 

[ix] UK Office for National Statistics (2023),  https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/publicawarenessopinionsandexpectationsaboutartificialintelligence/julytooctober2023, last accessed May 2024

 

[x] ISACA (2024), The AI Reality: IT Pros Weigh in On Knowledge Gaps, Policies, Jobs Outlook and More, https://www.isaca.org/-/media/files/isacadp/project/isaca/resources/infographics/2024-european-ai-infographic-524.pdf, last accessed May 2024

 

[xi] IBM (sd), What are AI hallucinations?, https://www.ibm.com/topics/ai-hallucinations last accessed May 2024

 

[xii] World Economic Forum (2024), Global Cybersecurity Outlook 2024 Insight Report, in collaboration with Accenture, https://www3.weforum.org/docs/WEF_Global_Cybersecurity_Outlook_2024.pdf last accessed May 2024

 

[xiii] NIST (2024), NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems, https://www.nist.gov/news-events/news/2024/01/nist-identifies-types-cyberattacks-manipulate-behavior-ai-systems, & NIST AI 100-2 E2023 Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, https://csrc.nist.gov/pubs/ai/100/2/e2023/final last accessed May 2024

 

[xiv] The Verge (2023), Google’s AI chatbot Bard makes factual error in first demo, https://www.theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo last accessed May 2024

 

[xv] Times of Israel (2024), Report: IDF using facial recognition tools to identify, detain suspects in Gaza, https://www.timesofisrael.com/report-idf-using-facial-recognition-tools-to-identify-detain-suspects-in-gaza/

 

[xvi] Amnesty International (2021), Dutch childcare benefit scandal an urgent wake-up call to ban racist algorithms, https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/, last accessed May 2024

 

[xvii] Venture Beat (2024), With little urging, Grok will detail how to make bombs, concoct drugs (and much, much worse), https://venturebeat.com/ai/with-little-urging-grok-will-detail-how-to-make-bombs-concoct-drugs-and-much-much-worse/

 

[xviii] NBC News (2024), Man says AI and facial recognition software falsely ID’d him for robbing Sunglass Hut and he was jailed and assaulted, https://www.nbcnews.com/news/us-news/man-says-ai-facial-recognition-software-falsely-idd-robbing-sunglass-h-rcna135627 last accessed May 2024

 

[xix] ISACA (sd), Glossary, https://www.isaca.org/resources/glossary#glossg, last accessed June 2024

 

[xx] NIST (2024), AI Risk Management Framework and Resource Center, https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF, last accessed May 2024

Picture of Daniela de Almeida Lourenço, MSc, CISM, CISSP, C|CISO

Daniela de Almeida Lourenço, MSc, CISM, CISSP, C|CISO

Daniela de Almeida Lourenço is a Netherlands-based information security practitioner who bridges the gap between tech and cultural savvy. With international experience and with a unique academic background - a Master's in Communication and Cultural Studies and an Executive Master's in Cybersecurity -, she understands the human element of information security in Tech and in organisations’ own cultural nuances. Daniela also holds top cybersecurity certifications (CISM, CISSP, C|CISO) and leverages this competence as Chief Information Security Officer for a financial services organization. She advocates for crafting an attitude of security awareness, ensuring everyone plays a sustainable role in protecting information. She often contributes to the industry with thought-provoking keynotes, webinars and insightful panel discussions.

Gerelateerde berichten

  • Nieuws ·

Complete our IT Governance Survey

ISACA Netherlands Chapter and ISACA Belgium Chapter invite you to complete our IT Governance Survey. Last year ISACA Netherlands Chapter and ISACA Belgium Chapter joined forces and started an IT…

Plaats een reactie

Deze site gebruikt Akismet om spam te verminderen. Bekijk hoe je reactie-gegevens worden verwerkt.

We gebruiken functionele en analytische cookies om ervoor te zorgen dat de website optimaal presteert. Als u doorgaat met het gebruik van deze site, gaan we ervan uit dat u hiermee akkoord gaat. Meer informatie vindt u in onze Privacyverklaring.