The increasing perils of Artificial Intelligence
What is needed in addition to formal governance to address the risks of Artificial Intelligence while leveraging the full potential of data & insights for your organization? Erik Beulen researches.
Wrong decision-making jeopardizes Artificial Intelligence value creation; even more concerning is the wrong use of Artificial Intelligence. Think of Cambridge Analytica, which collected the personal data of millions of Facebook users without their consent and was hired by Leave.EU and the UK Independence Party during 2016, or the IBM photo-scraping scandal. This 2019 controversy centered on one million pictures of human faces from the online photo-hosting site Flickr, which IBM released, without consent, to enhance a face-recognition Artificial Intelligence-based algorithm. Or the Dutch government, which deployed Artificial Intelligence to handle childcare benefit applications and disproportionately denied benefits to ethnic minorities and wrongly accused them of fraud. As a consequence, the Dutch cabinet resigned in January 2021. Organizations need to rethink their use of Artificial Intelligence, as well as their collaboration with external parties in this area.
Furthermore, the use of Artificial Intelligence continues to grow, in particular through engagements with external service providers. KPMG's (2018) market research projects that the Artificial Intelligence market will grow to $232 billion by 2025, compared to an estimated $12.4 billion at the time of the study. McKinsey's survey report (2020) on Artificial Intelligence shows that adoption is highest in product or service development and in service operations, while Gartner identifies the most important trend for 2020 as 'smarter, faster, more responsible Artificial Intelligence' and predicts that 'by the end of 2024, 75% of enterprises will shift from piloting to operationalizing Artificial Intelligence, driving a 5X increase in streaming data and analytics infrastructures'.
The examples and figures above illustrate the potential of Artificial Intelligence and increase the need to find answers for addressing its risks, e.g. privacy and competition risks and breaches of anti-trust laws, as well as for dealing with ethical issues. What do organizations need, in addition to formal governance, when engaging with external service providers to leverage the full potential of data & insights? Or should organizations fix these issues before expanding their use of Artificial Intelligence and becoming fully data-driven?
Formal governance is the starting point
Contracts and service level agreements are the most important elements of formal governance. My research yields two key observations on formal governance: intellectual property, and experience-driven rather than metric-based service levels. To ensure proper control, it is essential that ownership of organization-specific Artificial Intelligence algorithms remains with the organization, while non-specific Artificial Intelligence can be transferred to service providers to enable innovation and safeguard the service provider's 1:N business model without jeopardizing the strategic and business interests of their clients. This split enables co-created innovation and results in joint intellectual property.
Experience-driven service levels decrease transaction costs while limiting ex post vendor opportunism and creating incentives for mutually cooperative behavior. Building on the Artificial Intelligence co-development process, clients and service providers mutually increase their level of trust by applying experience-driven service levels. Experience-driven service levels therefore also foster relational governance.
Relational governance to supplement formal governance
Well-governed client-service provider outsourcing relationships positively affect the degree of trust between parties in creating Artificial Intelligence solutions. It is essential to create solidarity that encourages a bilateral approach to joint innovation through mutual adjustment. Furthermore, mutual commitment is a prerequisite for developing Artificial Intelligence. Proactive communication by service providers also supports a trustworthy relationship with their clients and reduces information asymmetry, which is key in innovative technologies such as Artificial Intelligence.
Psychological contract to grow the attention for ethics
The psychological contract is an essential element of relational governance, as it further reduces information asymmetry and balances the client's interests with the commercial interests of service providers. Furthermore, the psychological contract invites organizations and their service providers to address ethical topics. In practical terms, this means avoiding physical or emotional harm to participants, as well as making participants aware of any potential harms prior to their participation. Your own employees as well as service providers also need to remain neutral and unbiased: no personal preconceptions or opinions should interfere with the data collection process. This will contribute greatly to responsible Artificial Intelligence, now and in the future. However, securing this cannot be achieved overnight.
To pause or not to pause Artificial Intelligence – that’s the question
To be clear: pausing is not a feasible option; innovations need to be fostered but managed properly. This is why, in addition to formal governance, attention to relational governance and the psychological contract is of the utmost importance. Organizations need to focus on building trust on ethical principles and on implementing explainable and transparent Artificial Intelligence algorithms. Governmental organizations also need to step up. As an example, the upcoming European Union Digital Services Act package will be instrumental, as it provides a regulatory infrastructure for Artificial Intelligence systems. Only such measures will smooth out the wrinkles of Artificial Intelligence and enable organizations to leverage the full potential of data & insights and avoid unfair competition, while protecting the interests of all of us. Artificial Intelligence should not only be lawful, but also responsible and purposeful.
Download scientific paper
Are you a Digital Transformation Manager looking for scientifically grounded tools to help you design a responsible, purposeful and legitimate Artificial Intelligence governance model? Download my scientific research on governance for Artificial Intelligence outsourcing.