September 10, 2024
At the QX Day on 1 October, topics will include artificial intelligence (AI) and data. Deepa Mamtani, Almira Pillay and Tia Nicolic wrote a blog about what it takes to deploy AI systems effectively and safely, and about the framework that has been developed for this.
With the advent of big data and artificial intelligence (AI), data science solutions are increasingly used to support decision making. With the promise of insights, knowledge and efficiency, AI solutions are bound to be implemented across a multitude of processes and applications. To encourage AI adoption and use, however, there needs to be inherent trust in the implemented solution. As humans, our trust in technological solutions is based on our assessment of their quality and reliability. We need assurance that the AI system is fair, secure and explainable; thus, all stakeholders need to be considered throughout the entire AI model development process.

To build a high-performing, trustworthy solution, we need to adhere to an AI Quality Framework (AIQF) that outlines core principles and ensures the highest accuracy throughout the entire AI project lifecycle. As Sogeti is a leader in Quality Assurance and Testing, the Testing & AI team has developed an AIQF that guides the model development process, ensuring the solution is foolproof from start to end. The AIQF has three main focus areas: Fairness, Transparency and Accountability.
Mitigating and detecting both intentional and unintentional bias is pivotal to ensuring trustworthiness. From the business understanding phase through data preparation, we need to understand, test for and avoid bias. Several statistical tests should be implemented to detect it. Additionally, regular stakeholder reviews should be conducted so that fairness-related issues can be addressed quickly.
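As a rough illustration of what such statistical tests can look like, the sketch below computes a demographic parity gap and a chi-squared independence test with scipy. The variable names, toy data and the decision to flag on these two signals are illustrative assumptions, not prescribed by the AIQF.

```python
import numpy as np
from scipy.stats import chi2_contingency

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    g0, g1 = np.unique(sensitive)
    return abs(y_pred[sensitive == g0].mean() - y_pred[sensitive == g1].mean())

def prediction_independence_pvalue(y_pred, sensitive):
    """Chi-squared test: are predictions independent of the sensitive attribute?"""
    table = np.array([[np.sum((sensitive == g) & (y_pred == v)) for v in (0, 1)]
                      for g in np.unique(sensitive)])
    _, p_value, _, _ = chi2_contingency(table)
    return p_value

# Toy data: binary predictions for two groups of a sensitive attribute.
y_pred    = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
sensitive = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, sensitive)
p = prediction_independence_pvalue(y_pred, sensitive)
print(f"parity gap = {gap:.2f}, independence p-value = {p:.3f}")
# A large gap or a small p-value is a signal to revisit the data and model.
```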
An audit of the AI algorithm should explain how the model arrived at a certain prediction or recommendation. Not only should the data collection and transformation phases be transparent; the model itself should be too. This means we need to shed light on the ‘black box’ to make it more reliable for decision making. We can do this with Explainable AI (XAI) models: a set of statistical models that act as an audit layer to explain and justify the outcome of the AI algorithm. This layer is imperative for ensuring transparency and building confidence in the solution.
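By way of example, the open-source SHAP library implements one family of such XAI techniques; the AIQF does not mandate this specific tool. The sketch below uses a scikit-learn regressor and dataset as stand-ins, chosen purely for illustration, to show how per-feature attributions can justify an individual prediction.

```python
# Requires: pip install shap scikit-learn
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model and data for illustration only.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The explainer acts as the audit layer: it decomposes each prediction
# into additive per-feature contributions (SHAP values).
explainer = shap.Explainer(model)
explanation = explainer(X.iloc[:5])

# For sample 0, the per-feature contributions plus the base value recover
# the model's output, giving a per-prediction justification to review.
for name, contribution in zip(X.columns, explanation.values[0]):
    print(f"{name:>10}: {contribution:+7.2f}")
print(f"base value: {explanation.base_values[0]:+7.2f}")
```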
To achieve accountability across the model and AI lifecycle, compliance checks should be conducted to ensure regulations such as the GDPR are met. When evaluating and deploying the AI solution, the way the end user is expected to interpret a prediction should be shown and documented. Furthermore, a holistic review of the entire lifecycle should confirm that best practices for design and development were adhered to, all stakeholders were considered, and ethical and legal issues were addressed. The AIQF acts as a necessary set of guidelines and principles to ensure, to the greatest extent possible, that all steps and checks were taken to create the AI system from a quality and fairness perspective. As the AI model development process is iterative, so is the AIQF. In turn, this encourages trustworthiness and guarantees quality. To learn more about the AIQF, or to find out how you can implement or develop a custom framework alongside Sogeti, contact the Testing & AI team.
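As a purely illustrative sketch of how such accountability evidence could be captured in machine-readable form, the snippet below defines a minimal audit record. The schema and field names are assumptions for this example, not a format prescribed by the AIQF.

```python
# Python 3.9+; the record schema here is hypothetical, for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelAuditRecord:
    model_name: str
    version: str
    review_date: date
    gdpr_compliant: bool                   # outcome of the data-protection check
    stakeholders_consulted: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    interpretation_guidance: str = ""      # how end users should read predictions

record = ModelAuditRecord(
    model_name="churn-classifier",
    version="1.3.0",
    review_date=date(2024, 9, 10),
    gdpr_compliant=True,
    stakeholders_consulted=["data owner", "legal", "end-user representative"],
    interpretation_guidance="Score is a risk ranking, not a probability.",
)
print(record)
```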
Come to the QX Day on 1 October for the full presentation ‘Revolutionising Testing with the power of AI’ by Deepa Mamtani, Almira Pillay and Tia Nicolic in the AI & Data stream.
Register for QX Day