Mona de Boer, partner Data & Technology at PwC, will be a speaker at our Risk Event on the 16th of November 2023. The event focuses on the implications of artificial intelligence (AI) for our work and the steps we need to take to ensure digital trust. Risk Event Committee member Fook Hwa Tan decided to ask Mona some questions to get a sneak peek at her presentation.
Fook Hwa: ‘What are some of the most significant risks you’ve identified when it comes to the adoption and implementation of artificial intelligence in various industries?’
Mona: ‘When using AI in different industries, we obviously face some big opportunities, but also some big risks. These risks include biases and unfairness in AI systems and the way they are applied, worries about the protection of personal and business data, a lack of understanding of how AI systems work, which undermines effective human oversight, and ethical dilemmas in how AI systems are used towards natural persons.’
‘And how do you address concerns about potential biases and ethical implications in AI systems, especially in high-stakes applications like healthcare or autonomous vehicles?’ Fook Hwa wants to know.
Mona: ‘There is no quick fix for these complex concerns. Dealing with these worries, especially in safety-critical areas like healthcare or self-driving cars, involves a combination of steps. We need, for example, to make sure the data used to train AI is diverse and representative of different groups to reduce bias, to encourage organizations to explain how their AI systems work so it’s clear how they make decisions, and to diligently monitor the performance and impact of AI systems so we can improve them and minimize bias and ethical issues as they arise.’
Fook Hwa: ‘As AI technologies continue to advance, what measures do you suggest organizations and policymakers take to ensure responsible AI development and deployment?’
Mona: ‘The dynamics and complexity of responsible AI development and use require a combination of measures and will themselves be a moving target in the coming years. For now, I see the most important short-term actions for organizations and policymakers in educating people about AI and its risks, ensuring AI systems are designed and developed by multicompetence and diverse teams to avoid overlooking the ‘unknown unknowns’, and monitoring AI system behavior continuously (both pre- and post-implementation) to be able to intervene in time when AI systems produce unintended or undesirable outcomes.’
Smart machines capable of performing tasks that typically require human intelligence are increasingly in demand and continue to change the future of virtually every field of technology. ‘What role do you see AI researchers, developers, and practitioners playing in mitigating risks, and how can they collaborate effectively to address these challenges?’ asks Fook Hwa.
‘Organizations are currently challenged in their daily practice to move fast, yet not “break things”,’ says Mona. ‘At the same time, the domain of AI application, as well as its responsible use, is nascent and requires heavy knowledge development to guide these developments in a societally acceptable way. Academia and practice are a power couple if you ask me. Academia contributes to knowledge development on AI risk and risk-mitigating measures, fact-driven analysis of risk interventions by organizations and their impact, and the development of a long-term perspective on how to address the complex challenges and concerns surrounding AI. Practice presents the real-life issues, uncertainties and dilemmas that organizations face, especially the ones we can’t foresee even with the best of intentions, and it demands a certain level of ‘rationality’ to work these out in daily life. I believe the domain of AI risk mitigation can’t evolve without the collaboration and complementarity between AI researchers, developers, and practitioners.’
Fook Hwa: ‘With the rapid pace of AI development, how can we ensure that AI systems remain accountable and transparent to build and maintain public trust?’
Mona: ‘Again: educate the public about AI risks and help them develop an ‘intuition’ for these risks; require organizations to be transparent about their AI systems in an accessible way, so that people are not misled and can judge for themselves in their (in)direct interactions with these systems; and protect the public by having subject matter experts (auditors, regulators, consumer protection organizations, etc.) scrutinize AI systems against clear rules and laws that aim to protect the rights and interests of the public.’
About
Mona de Boer published the dissertation ‘Trustworthy AI and accountability: yes, but how?’. This dissertation focuses on the gap between the high-level objectives and requirements of the EU AI Act and the day-to-day practice of Trustworthy AI, and makes recommendations to address the current methodological gap between them. Mona is one of the speakers at the Risk Event 2023 on the 16th of November 2023.
Fook Hwa Tan is Chief Quality Officer at Northwave. He is also an active ISACA volunteer. Fook Hwa is a trainer for our ISACA NL ISO27001 training course. He is also an ISACA author and longtime Risk Event Committee member.
Visit the Risk Event 2023
Curious how artificial intelligence will influence your work in the near future and which steps you need to take to ensure digital trust? Register now!