The document “Ministerial Statement on Trade and Digital Economy of G20” was authored by the Group of 20 (G20) and published in Japan in June 2019. It aims to align efforts among member states and some guest countries in defining a set of guidelines, consistent with their principles, for the use and development of Artificial Intelligence solutions, as an opportunity to deepen their understanding of the interface between trade and the digital economy in building a sustainable and innovative global society, while considering their national needs, priorities, and circumstances.
Because the document was jointly authored by members of a body that brings together the efforts of several governments, I have classified the author type as “Intergovernmental Organization”. Likewise, in light of the objective pursued by the recommendations and the type of principles proposed, I have classified the document as “Policy Principles”. Both classifications will allow me to draw future contrasts between documents and authors of the same type, enriching the analysis that I aim to present in this series of posts.
The group adopts the principles stated by the OECD, which are annexed to the statement in their entirety and without change, as can be verified by comparing the following list with t.ly/e2Wq:
- Inclusive Growth, Sustainable Development, and Well-being: proactively engage in the responsible stewardship of trustworthy artificial intelligence, pursuing beneficial outcomes for people and the planet, augmenting human capacities and enhancing creativity, advancing the inclusion of underrepresented populations by reducing economic, social, gender, and other inequalities, and protecting natural environments,
- Human-Centred Values and Fairness: respect the rule of law, human rights, and democratic values throughout the life cycle of the artificial intelligence solution (freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, equity, social justice, and labour rights), implementing mechanisms and safeguards, such as the human capacity for self-determination, that are appropriate to the context and consistent with the state of the art,
- Transparency and Explainability: provide relevant information, appropriate to the context and consistent with the state of the art, (a) to promote a general understanding of the operation of AI systems, (b) to enable stakeholders to be aware of their interactions with AI systems, (c) to allow those affected by an artificial intelligence system to understand the outcome, and (d) to allow those adversely affected by an AI system to challenge its outcome based on easy-to-understand information about the factors and logic that served as the basis for the prediction or recommendation,
- Robustness, Security, and Safety: develop robust, secure, and safe AI systems, and protect them throughout their life cycle so that, under conditions of normal use, foreseeable use or misuse, or other adverse conditions, they keep functioning properly without becoming a security risk; ensuring the traceability of data sets, processes, and decisions made, and applying a systematic approach to risk management at every stage of the AI system life cycle that addresses factors such as privacy, digital security, safety, and bias; and,
- Accountability: hold AI actors accountable for the proper functioning of AI systems and their correspondence with the proposed principles, according to their roles, the context, and the state of the art.
Similarly, the group echoes the OECD’s cooperation policies in defining national policies and international cooperation among the acceding countries in favor of trustworthy Artificial Intelligence:
- Investing in AI research and development,
- Fostering a digital ecosystem for AI,
- Shaping an enabling policy environment for AI,
- Building human capacity and preparing for labor market transformation, and
- International co-operation for trustworthy AI.
Consequently, I emphasize the analysis of the principles proposed by the OECD, which can be read at t.ly/e2Wq, and which I identified as necessary intermediate layers for the adoption of these principles as a methodological reference in the design of artificial intelligence solutions.
For the analysis of the language used in the document, I used the NLTK library in a Python development environment to extract the 50 most frequent n-grams in the statement. It turns out that:
- The uni-grams with relative frequencies greater than 1.00 units describe the objective intended by the proposed principles and recommendations, or the variables in which it is expressed: digital (2.99), ai (1.89), economy (1.58), trade (1.31), and development (1.19). Meanwhile, the uni-grams with relative frequencies between .05 and 1.00 units represent the action environment: society (.76), international (.73), growth/ investment (.70), policy (.64), global/ sustainable/ use (.58), and innovation/ technologies (.55).
- The bi-grams, for their part, delimit the document's field of action within the context described by the uni-grams, exhibiting the highest relative frequencies for the terms: digital economy (1.40), trade investment (.37), trustworthy ai (.34), and human centered (.24). Other terms, such as responsible stewardship and stewardship trustworthy, exhibit minimum relative frequency values of .12 units each.
- The tri-grams, similarly, give greater representation to terms linked to the macro-objective of the document, such as trade and security digital economy (.24), while the pursued objectives are less represented: sustainable development goals/ human-centred future society/ competitive non discriminatory/ countries underrepresented populations/ human centred future/ and centred future society/ with .06 units each.
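The extraction step described above can be sketched as follows. This is a minimal illustration using only the Python standard library (the original analysis used NLTK, whose `nltk.ngrams` and `FreqDist` utilities serve the same purpose); the sample text, the tokenization rule, and the scaling of relative frequencies to occurrences per 100 n-grams are my own assumptions for demonstration, not the author's exact pipeline.

```python
import re
from collections import Counter

def top_ngrams(text, n, k=50):
    """Return the k most frequent n-grams in text, each paired with a
    relative frequency expressed as occurrences per 100 n-grams.
    Tokenization here simply lowercases and keeps alphabetic runs --
    a simplification of the NLTK-based preprocessing."""
    tokens = re.findall(r"[a-z]+", text.lower())
    grams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    return [(g, round(100 * c / total, 2)) for g, c in counts.most_common(k)]

# Hypothetical sample; the actual corpus is the full G20 statement.
sample = ("the digital economy enables trade and the digital economy "
          "supports trustworthy ai for sustainable development")
print(top_ngrams(sample, 1, 5))  # most frequent uni-grams
print(top_ngrams(sample, 2, 3))  # most frequent bi-grams
```

In a real run, stop-word removal and punctuation handling would precede counting, which is why function words such as "the" do not dominate the reported frequency tables.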
I would like to conclude by saying that the recommendations of the Council on AI addressed in this post, along with the other documents I am including in this series, constitute an effort to solve some of the ethical problems rooted in the design and use of artificial intelligence solutions; in this case, specifically in the context of public policy, while the remaining documents will cover other scenarios. I would also like to add that, with this reading exercise, I seek to draw attention to the opportunity for public policy designers and designers of artificial intelligence solutions to collaborate in achieving a common goal: the responsible design of artificial intelligence.
If you are interested in this topic and have any ideas that complement this review of the Recommendations of the Council on Artificial Intelligence, let me know in a comment.