Sanoma’s Ethical AI Principles
Sanoma has defined ethical principles for the use of artificial intelligence to ensure its responsible use and to minimise the associated risks. By following these principles, we aim to ensure the safe, appropriate and responsible use of artificial intelligence.
1. Fairness with Aim for Positive Impact: The use of AI in our products aims to reflect the values we operate by, such as Freedom of Speech and Creating a Positive Learning Impact. AI should be used in a fair manner, respecting values such as human rights, privacy, and non-discrimination.
2. Accountability by Humans: People are always responsible for the decisions made by the AI solutions we use. Our teams are engaged throughout the entire lifecycle of our own AI models and algorithms: their planning, development and maintenance.
3. Explainability: We aim to use AI whose reasoning can be understood by the people who are accountable for it, and we ensure that we can sufficiently explain the functionality of such AI systems.
4. Transparency: We communicate transparently about our use of AI and how it impacts the end users of our products.
5. Risk and Impact Assessment: We assess the planned and potential impacts of our technology on individuals and on society at large. AI assessments are integrated into our product development process, following privacy- and security-by-design principles. We implement appropriate measures to ensure the accuracy, robustness, and security of our AI solutions and to mitigate identified risks.
6. Oversight: We commit to regularly monitoring how we fulfil these principles in our AI operations. As AI is a fast-evolving field, we will evaluate and update these principles periodically to ensure they reflect the lessons learned from our experience.