
Five questions for Risk Event speaker Marc van Meel on the ethical implications of artificial intelligence

The rapid rise of artificial intelligence (AI) has created many opportunities, but the fast pace of change also raises ethical concerns. At our Risk Event on 16 November 2023, Marc van Meel, Managing Consultant at KPMG and AI Ethicist, will delve into the ethical and political implications of technology on our society. Risk Event Committee member Fook Hwa Tan decided to ask Marc some questions to get a sneak peek at his presentation.

Fook Hwa: ‘What are the main risks of AI solutions?’

Marc: ‘Most AI-related risks are not groundbreaking; instead, they are new variations of existing concerns. Risk factors such as discrimination, privacy breaches and brand reputation damage have been long-standing concerns. AI, however, introduces novel ways to manifest and exacerbate these risks, at a scale larger than we have ever seen before and in real time! One of the primary causes is underspecification: AI solutions, due to their complexity and the dynamic nature of data, can produce unforeseen and unintended outcomes in ways that traditional IT systems may not. The real challenge lies in effectively managing these emerging risks and devising robust strategies to mitigate them. This is what KPMG’s Responsible AI proposition is all about. In this digital era, organizations also face a realm of new digital risks. However, it is an era of risk mitigation that, with the right assistance, lies well within an organization’s grasp to master.’
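
To make underspecification concrete: the sketch below (a toy illustration using scikit-learn, not any KPMG tooling) trains two models that are identical except for their random seed. Both score equally well on held-out data, yet once the input data drifts, as it does in production, their predictions start to diverge in ways the test set never revealed.

```python
# Toy illustration of underspecification (illustrative assumptions, not KPMG's method):
# two equally accurate models can behave differently once the data shifts.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same data, same model type; only the random seed differs.
model_a = RandomForestClassifier(random_state=1).fit(X_train, y_train)
model_b = RandomForestClassifier(random_state=2).fit(X_train, y_train)
print("test accuracy A:", round(model_a.score(X_test, y_test), 3))
print("test accuracy B:", round(model_b.score(X_test, y_test), 3))

# Simulate the "dynamic nature of data": inputs drift away from the training set.
X_shifted = X_test + np.random.default_rng(0).normal(0.0, 2.0, X_test.shape)
disagreement = np.mean(model_a.predict(X_shifted) != model_b.predict(X_shifted))
print(f"predictions that differ on shifted data: {disagreement:.0%}")
```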

‘How do you ensure that AI solutions are ethical?’ Fook Hwa wants to know.

Marc: ‘The field of ethics is also not new, but its application within the context of AI can vary significantly between countries, sectors and organizations, just as the risk appetite for innovations like Generative AI varies among organizations. It’s crucial to acknowledge that there is no one-size-fits-all solution for ensuring ethical AI. We typically start helping organizations by first establishing their own specific principles and values, which serve as a foundation for the ethical and responsible development and application of AI solutions. Furthermore, we help them create clear policies, guidelines and governance structures that align with relevant laws and with their unique needs and risk appetite.’

Fook Hwa: ‘What are the key challenges in managing risk in AI solutions?’

Marc: ‘The key challenge in managing the risks of AI solutions is not to focus solely on the risks associated with the AI solutions themselves. Instead, it’s essential to recognize that an algorithm is just one part of an IT system, which, in turn, is a component within a broader decision-making chain. This decision-making process often involves interactions with other humans and/or systems. To adequately identify and manage the risks of AI solutions, it’s imperative to consider the entire ecosystem in which the AI solution operates. It is for this reason that solutions like the Algorithm Register offer only limited utility when it comes to managing risks. Only by taking a holistic approach to risk management, encompassing the complete decision-making context, can we effectively prevent and mitigate the risks associated with AI solutions.’

Fook Hwa: ‘What are the key steps organizations can take to build trust in data science, despite the uncertainties associated with AI and machine learning (ML), and how can they improve the reliability of their results?’

Marc: ‘First, it’s important to acknowledge that data science, despite its name, doesn’t always align with traditional scientific methods. We resort to AI and ML when problems are exceptionally complex or not solvable through conventional means. Because of this, the application of AI inherently comes with a degree of uncertainty and generalization. To build trust, organizations should prioritize open communication about the application of AI and invest in raising awareness. For example, it’s crucial to educate end users within the organization that behind a simple “yes” or “no” output lies a probability distribution. Providing the necessary context for understanding results helps end users frame and interpret them correctly, enabling more informed decision-making. By addressing the nuanced nature of data science and by empowering users to make well-informed decisions, organizations can foster trust and improve the reliability of their data-driven practices.’
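
A minimal sketch of that last point (the model, data and 0.5 threshold are illustrative assumptions, not part of any specific KPMG solution): the “yes” or “no” an end user sees is typically just a thresholded probability, and reporting that probability alongside the answer gives the context needed to interpret it.

```python
# Minimal sketch: behind a "yes"/"no" output lies a probability distribution.
# The model, data and 0.5 threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=42)
model = LogisticRegression().fit(X, y)

case = X[:1]                                 # one decision to be made
prob_yes = model.predict_proba(case)[0, 1]   # underlying probability of "yes"
answer = "yes" if prob_yes >= 0.5 else "no"  # what the end user typically sees

# Showing the probability alongside the answer lets end users judge how much
# confidence the "yes" or "no" actually carries.
print(f"answer: {answer} (estimated probability of 'yes': {prob_yes:.2f})")
```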

Fook Hwa: ‘How can you ensure that an organization has confidence in the AI solutions it uses?’

Marc: ‘I wholeheartedly believe that confidence in your organization’s AI solutions is best achieved through a collaborative effort with a trusted third party. At KPMG, our Responsible AI team has extensive experience in securing trust in AI solutions, for example by auditing AI solutions, fostering cultural alignment, and developing robust (Generative AI) policies and strategies. Third-party endorsement assures organizations that their AI solutions have undergone rigorous scrutiny, align with their objectives, and adhere to relevant standards and (upcoming) compliance regulations; or we help identify what is needed to get there. Furthermore, guiding cultural transformation and crafting an AI strategy ensures that the development and deployment of AI solutions harmonize with the organization’s core values, corporate strategy and risk appetite. This not only enhances the efficiency of AI solutions, but also cultivates an environment that fosters trust in these technologies and where potential risks are identified proactively.’

Visit the Risk Event 2023

Curious how artificial intelligence will influence your work in the near future and which steps you need to take to ensure digital trust? Register now!

About


Marc van Meel

Marc van Meel is Managing Consultant Responsible AI at KPMG, an experienced Data Scientist, and an authority in the field of Digital Ethics. He assists organizations in maximizing the benefits of AI while effectively managing the risks. In addition, he is a sought-after guest lecturer, podcast host, and speaker at conferences.


Fook Hwa Tan

Fook Hwa Tan is Chief Quality Officer at Northwave and an active ISACA volunteer. Fook Hwa is a trainer for our ISACA NL ISO 27001 training course, an ISACA author, and a long-time Risk Event Committee member.

Related posts

  • ISACA NL Journal

From Excel to Excellence: Revitalizing IT Risk Strategies for a Future-Ready Landscape

By Dave van Stein & Yianna Paris - Effective IT risk management is necessary to safeguard valuable assets, achieve organizational objectives, and ensure long-term success. When done properly, it is a crucial tool for informed decision-making. However, keeping up has become challenging in today's fast-changing world of Agile, cloud infrastructure, heavy use of external dependencies, complex and opaque supply chains, and daily changing threats.
