Artificial Intelligence design guiding principles: Review of “Ministerial Statement on Trade and Digital Economy of G20”

(Image taken from Pixabay)

The document “Ministerial Statement on Trade and Digital Economy of G20” was authored by the Group of 20 (G20) and was published in Japan in June 2019. It aims to align efforts among member states and some guests in defining a set of guidelines, consistent with their principles, for the use and development of Artificial Intelligence solutions; the statement treats this as an opportunity to deepen their understanding of the interface between trade and the digital economy in building a sustainable and innovative global society, while considering their national needs, priorities, and circumstances.

Since the document was jointly authored by members of a body that groups the efforts of the governments of several countries, I have classified the author type as “Intergovernmental Organization”. Also, in light of the objective pursued by the recommendations and the type of principles proposed, I have classified the document as “Policies Principles”. Both classifications will allow me to make future contrasts between documents and authors of the same type, enriching the analysis that I aim to present in this series of posts.

The group adopts the principles stated by the OECD, which are annexed in their entirety without any change, as can be verified by comparing the list below with the original at t.ly/e2Wq:

  1. Inclusive Growth, Sustainable Development, and Well-being: proactively engage in the responsible management of trustworthy artificial intelligence in search of benefits for people and the planet; increase human capacities and enhance creativity; advance the inclusion of underrepresented populations by reducing economic, social, gender, and other inequalities; and protect natural environments,
  2. Human-Centred Values, and Fairness: respect the rule of law, human rights, and democratic values throughout the life cycle of the artificial intelligence solution (freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, equity, social justice, and work), implementing mechanisms and safeguards, such as human capacity for self-determination, that are appropriate to the context and consistent with the state of the art,
  3. Transparency and Explainability: provide relevant information, appropriate to the context and consistent with the state of the art: (a) to promote a general understanding of the operation of AI systems, (b) to enable stakeholders to be aware of their interactions with AI systems, (c) to allow those affected by an artificial intelligence system to understand the outcome, and (d) to allow those adversely affected by an AI system to challenge the outcome based on easy-to-understand information about the factors and logic that served as the basis for the prediction or recommendation,
  4. Robustness, Security, and Safety: develop robust and safe AI systems and protect them throughout their life cycle so that, under normal use, foreseeable use or misuse, or other adverse conditions, they keep functioning properly without becoming security risks; ensuring traceability of data sets, processes, and decisions made, and applying a systematic approach to risk management at every stage of the AI system lifecycle, covering factors such as privacy, digital security, safety, and bias; and,
  5. Accountability: hold AI actors accountable for the proper functioning of AI systems and their correspondence with the proposed principles, according to their roles, the context, and the state of the art.

Similarly, the group echoes the OECD’s cooperation policies for the definition of national policies and for international cooperation among the acceding countries in favor of trustworthy Artificial Intelligence:

  1. Investing in AI research and development,
  2. Fostering a digital ecosystem for AI,
  3. Shaping an enabling policy environment for AI,
  4. Building human capacity and preparing for labor market transformation, and
  5. International co-operation for trustworthy AI.

Consequently, I refer the reader to my analysis of the principles proposed by the OECD, which can be read at t.ly/e2Wq, and which I pointed to as the necessary intermediate layers for the adoption of these principles as a methodological reference in the design of artificial intelligence solutions.

After an analysis of the language used in the document, carried out with the NLTK library in a Python development environment to extract the 50 most frequent n-grams from the statement’s body text (a minimal sketch of the extraction appears after the list below), it turns out that:

  • The uni-grams with relative frequencies greater than 1.00 units described the objective intended by the principles and recommendations proposal, or the variables in which it is expressed: digital (2.99), ai (1.89), economy (1.58), trade (1.31), and development (1.19). Meanwhile, the uni-grams with relative frequencies between .50 and 1.00 units represent the action environment: society (.76), international (.73), growth/ investment (.70), policy (.64), global/ sustainable/ use (.58), and innovation/ technologies (.55).
  • The bi-grams, for their part, delimited the document’s field of action in the context described by the uni-grams, exhibiting with higher relative frequencies the terms: digital economy (1.40), trade investment (.37), trustworthy ai (.34), and human centered (.24). Meanwhile, other terms such as responsible stewardship and stewardship trustworthy exhibit minimal relative frequencies of 0.12 units each.
  • The tri-grams, similarly, exhibit a greater representation of the terms linked to the macro-objective of the document, with terms like trade and security digital economy (.24); while the pursued objectives are less represented: sustainable development goals/ human-centred future society/ competitive non discriminatory/ countries underrepresented populations/ human centred future/ and centred future society, with .06 units each.
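For readers who want to reproduce the counts, below is a minimal sketch of the extraction described above, written with NLTK; the file name is hypothetical, and I assume the reported figures are relative frequencies expressed as percentages of all n-grams of each length.

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.util import ngrams

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)

# Hypothetical file: the statement's body text saved as plain text.
with open("g20_ministerial_statement.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Keep alphabetic tokens only and drop English stop words.
stop = set(stopwords.words("english"))
tokens = [t for t in nltk.word_tokenize(text) if t.isalpha() and t not in stop]

for n in (1, 2, 3):
    counts = Counter(ngrams(tokens, n))
    total = sum(counts.values())
    print(f"--- 50 most frequent {n}-grams (relative frequency, %) ---")
    for gram, freq in counts.most_common(50):
        print(f"{' '.join(gram)}: {100 * freq / total:.2f}")
```

The same script, pointed at the other documents in this series, produces the figures reported in the later posts.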

I would like to conclude by saying that the G20 ministerial statement addressed in this post, along with the other documents I am including in this series, constitutes an effort to solve some of the ethical problems rooted in the design and use of artificial intelligence solutions; in this case, specifically in the context of public policy (the remaining documents will cover other scenarios). I would also like to add that, with this reading exercise, I seek to draw attention to the opportunity for public policy designers and designers of artificial intelligence solutions to collaborate toward a common goal: the responsible design of artificial intelligence.

If you are interested in this topic and have any ideas that complement this review of the Ministerial Statement on Trade and Digital Economy, let me know with a comment.

Artificial Intelligence design guiding principles: Review of “Recommendation of the Council on Artificial Intelligence”

(Image taken from Pixabay)

The document “Recommendation of the Council on Artificial Intelligence” was authored by the Organisation for Economic Co-operation and Development (OECD) and was published in France in May 2019. It aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Adherence to it was ratified by the organization’s member countries and by some non-members, such as Argentina, Brazil, Colombia, Costa Rica, Malta, Peru, Romania, and Ukraine.

Since the document was jointly authored by members of a body that groups the efforts of the governments of several countries in the Americas, Europe, the Middle East, and Australia, I have classified the author type as “Intergovernmental Organization”. Also, in light of the objective pursued by the recommendations and the type of principles proposed, I have classified the document as “Policies Principles”. Both classifications will allow me to make future contrasts between documents and authors of the same type, enriching the analysis that I aim to present in this series of posts.

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI and calls on AI actors to promote and implement them:

  1. Inclusive Growth, Sustainable Development, and Well-being: proactively engage in the responsible management of trustworthy artificial intelligence in search of benefits for people and the planet; increase human capacities and enhance creativity; advance the inclusion of underrepresented populations by reducing economic, social, gender, and other inequalities; and protect natural environments,
  2. Human-Centred Values, and Fairness: respect the rule of law, human rights, and democratic values throughout the life cycle of the artificial intelligence solution (freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, equity, social justice, and work), implementing mechanisms and safeguards, such as human capacity for self-determination, that are appropriate to the context and consistent with the state of the art,
  3. Transparency and Explainability: provide relevant information, appropriate to the context and consistent with the state of the art: (a) to promote a general understanding of the operation of AI systems, (b) to enable stakeholders to be aware of their interactions with AI systems, (c) to allow those affected by an artificial intelligence system to understand the outcome, and (d) to allow those adversely affected by an AI system to challenge the outcome based on easy-to-understand information about the factors and logic that served as the basis for the prediction or recommendation,
  4. Robustness, Security, and Safety: develop robust and safe AI systems and protect them throughout their life cycle so that, under normal use, foreseeable use or misuse, or other adverse conditions, they keep functioning properly without becoming security risks; ensuring traceability of data sets, processes, and decisions made, and applying a systematic approach to risk management at every stage of the AI system lifecycle, covering factors such as privacy, digital security, safety, and bias; and,
  5. Accountability: hold AI actors accountable for the proper functioning of AI systems and their correspondence with the proposed principles, according to their roles, the context, and the state of the art.

Additionally, a set of recommendations is made, as can be seen below, for the definition of national policies and for international cooperation between the adherent countries in favor of trustworthy Artificial Intelligence:

  1. Investing in AI research and development,
  2. Fostering a digital ecosystem for AI,
  3. Shaping an enabling policy environment for AI,
  4. Building human capacity and preparing for labor market transformation, and
  5. International co-operation for trustworthy AI.

On this occasion I will limit myself to commenting only on the principles, since the recommendations are aimed at public policymakers, a context in which I do not have enough experience.

From my computer science background, with hands-on experience in software project management, I find it difficult to adopt these principles as a methodological reference without them being subject to additional layers of interpretation and integration into tools such as standards or checklists, to name a few examples. As I have already mentioned in other posts, on the one hand, standards would support the assurance of the expected outcomes of artificial intelligence solutions from the early development stages, in accordance with the framework scope delimited by the proposed principles; on the other hand, checklists are an effective tool in the verification stages, used to verify whether the designed solution complies with the proposed principles (a minimal sketch of such a checklist follows), to keep to the same examples.
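To make the checklist idea concrete, here is a minimal sketch of what such a tool could look like in code; the questions and the structure are hypothetical illustrations of mine mapped onto the OECD principles, not anything prescribed by the recommendation itself.

```python
from dataclasses import dataclass

@dataclass
class CheckItem:
    principle: str   # principle the check verifies, e.g. "Accountability"
    question: str    # yes/no question answered during a verification stage
    passed: bool = False

# Hypothetical items; a real checklist would be derived from a standard.
checklist = [
    CheckItem("Transparency and Explainability",
              "Are affected users given the factors behind each prediction?"),
    CheckItem("Robustness, Security, and Safety",
              "Is traceability of data sets and decisions recorded at every stage?"),
    CheckItem("Accountability",
              "Is a responsible AI actor assigned to each lifecycle stage?"),
]

def unresolved(items: list[CheckItem]) -> list[CheckItem]:
    """Return the checks that still block sign-off at a verification gate."""
    return [item for item in items if not item.passed]

for item in unresolved(checklist):
    print(f"[{item.principle}] {item.question}")
```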

In that same line of thought, from my experience defining checklists, and as a member of working groups designing international software development standards, I can highlight the following elements:

  • The definition of the conceptual neighborhood of variables related to concepts such as discrimination, bias, justice, and equity in the context of artificial intelligence, which can serve as a frame of reference for the software developer at every stage of the development process, including maintenance,
  • The operationalization of the concept of well-being as a variable that depends on the discriminatory or non-discriminatory nature of decisions based on the predictions and/or recommendations proposed by AI systems,
  • The operationalization of the concept of “natural environment friendly” as a variable that depends on the aggressive or non-aggressive nature of decisions based on the predictions and/or recommendations proposed by AI systems,
  • The formalization of metrics aimed at evaluating how discriminatory, or how aggressive toward the natural environment, a decision, prediction, and/or recommendation proposed by an AI system is (a sketch of one such metric follows this list),
  • The definition of checklists to guide the developer of AI systems during the verification and measurement of these variables at every stage of the development process,
  • The demarcation of which measurement values and which factors within the checklists trigger a formal review of the architecture baseline of the current version of the AI system being developed,
  • The demarcation of which measurement values and which factors within the checklists trigger a formal change request, in the case of medium and large projects of medium and high complexity,
  • The creation of a competent authority that continuously assesses whether the formalization of the measurements remains adequate to the corresponding social context, given that the causal elements of discrimination and other related phenomena vary over time,
  • The operationalization of the variables on which human rights and democratic values are based, and with which artificial intelligence solutions are expected to comply,
  • The operationalization of “transparency” and “understanding” as variables that depend on how well the data-processing methods are understood, which can be used to define a metric assessing the degree to which potential stakeholders and auditors understand the methods and results of the AI system, and
  • The definition of information management flows associated with the use of AI systems, including the necessary elements (for example, which pieces of information the access policy covers, and how long the information remains available) and moderating the communication between the stakeholder and the decision maker (independently of the latter), to be incorporated ex officio into reporting modules, helping those adversely affected by a decision supported by an AI system to obtain relevant information and details about it.

These elements are the necessary intermediate layers toward the adoption of the principles as a methodological reference for the design of artificial intelligence solutions.
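As an illustration of the metrics item above, here is a minimal sketch of one well-known candidate, the demographic parity gap; the data, the group labels, and the review threshold are hypothetical examples of mine, not anything prescribed by the OECD document.

```python
from typing import Sequence

def demographic_parity_gap(decisions: Sequence[int],
                           groups: Sequence[str],
                           group_a: str,
                           group_b: str) -> float:
    """Absolute gap in favourable-decision rates between two groups.

    decisions: 1 for a favourable outcome, 0 otherwise.
    groups: the group label behind each decision (same length as decisions).
    """
    def rate(group: str) -> float:
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)  # assumes each group is non-empty
    return abs(rate(group_a) - rate(group_b))

# A gap above a hypothetical threshold could trigger the formal review of
# the architecture baseline mentioned in the list above.
REVIEW_THRESHOLD = 0.2
gap = demographic_parity_gap([1, 0, 1, 1, 0, 0],
                             ["a", "a", "a", "b", "b", "b"], "a", "b")
print(f"parity gap = {gap:.2f}; formal review needed: {gap > REVIEW_THRESHOLD}")
```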

After an analysis of the language used in the document, in which I used the NLTK library and a Python development environment to extract the 50 most frequent n-grams from the document’s body text, it turned out that:

  • The uni-grams with relative frequencies greater than .50 units described the objective intended by the principles and recommendations proposal, or the variables in which it is expressed: policy (1.10), international (1.02), legal/ trustworthy/ development (.98), council/ principles/ work (.78), digital (.69), human (.65), operations/ stakeholders/ systems (.61), and responsible/ implementation/ systems (.57); in contrast with stewardship/ rights/ inclusive/ sustainable/ recommendations (.33), which, despite also being among the objectives pursued by the document, are less represented across the body text.
  • The bi-grams, for their part, delimited the document’s field of action in the context described by the uni-grams, exhibiting with higher relative frequencies the terms: trustworthy ai (.90), ai actors (.53), legal instruments/ international co-operation (.45), and responsible stewardship/ stewardship trustworthy (.33). Meanwhile, other terms such as risk management/ growth sustainable/ security safety/ digital ecosystem/ privacy data/ and ai government exhibit minimal relative frequencies of 0.16 units each.
  • The tri-grams, similarly, exhibit a greater representation of the terms linked to the macro-objective of the document, with: international co-operation instruments (.45) and responsible stewardship trustworthy/ ai systems lifecycle (.33); while the pursued objectives are less represented: human centred values/ centred values fairness/ robustness security safety/ investing ai research/ fostering digital ecosystem/ building human capacity/ preparing labor market/ practical guidance recommendations (.12), and artificial intelligence first/ first intergovernmental standard (.08).

I would like to conclude by saying that the Recommendation of the Council on Artificial Intelligence addressed in this post, along with the other documents I am including in this series, constitutes an effort to solve some of the ethical problems rooted in the design and use of artificial intelligence solutions; in this case, specifically in the context of public policy (the remaining documents will cover other scenarios). I would also like to add that, with this reading exercise, I seek to draw attention to the opportunity for public policy designers and designers of artificial intelligence solutions to collaborate toward a common goal: the responsible design of artificial intelligence.

If you are interested in this topic and have any ideas that complement this review of the Recommendation of the Council on Artificial Intelligence, let me know with a comment.

Artificial Intelligence design guiding principles: Review of “European Ethical Charter on the Use of AI in Judicial Systems and their environment”

The document “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment” was authored by the Council of Europe’s European Commission for the Efficiency of Justice (CEPEJ) and was published in France in December 2018. The charter aims to align regional efforts by defining a set of principles governing the design of artificial intelligence solutions, and their use in the context of the judicial system, based on international human rights law.

As the charter was produced by joint authorship, gathering members of several government bodies from several European countries, I have classified the author type as “Intergovernmental Organization”. Also, in light of the objective pursued by the charter and the nature of the principles it proposes, I have classified the document type as “Policies for Use”. Both classifications will allow future contrasts between documents and authors of the same type, enriching the analysis that I aim to present in this series of posts.

The principles proposed within the charter are listed below:

      1. Principle of respect for fundamental rights: ensure that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights,
      2. Principle of non-discrimination: specifically prevent the development or intensification of any discrimination between individuals or groups of individuals,
      3. Principle of quality and security: with regard to the processing of judicial decisions and data, use certified sources and intangible data with models elaborated in a multi-disciplinary manner, in a secure technological environment,
      4. Principle of transparency, impartiality, and fairness: make data processing methods accessible and understandable, authorize external audits, and
      5. Principle “under user control”: preclude a prescriptive approach and ensure that users are informed actors and in control of the choices made.

From my computer science background, I find it difficult to adopt these principles as a methodological reference without them being subject to additional layers of interpretation and integration into tools such as standards or checklists, to name a few examples. On the one hand, standards would support the assurance of the expected outcomes of artificial intelligence solutions from the early development stages, in accordance with the framework scope delimited by the proposed principles; on the other hand, checklists are an effective tool in the verification stages, used to verify whether the designed solution complies with the proposed principles, to keep to the same examples.

In that same line of thought, from my experience defining checklists, and as a member of working groups designing international software development standards, I can highlight the following elements:

      • The operationalization of the variables on which fundamental rights are based, and with which artificial intelligence solutions are expected to comply within the environment described by the charter under review,
      • The definition of the environment framing the possible discriminations to which an individual or group of individuals may be exposed given the attributes included in each decision; these are finite in number, given their typology, within the environment enclosed by the charter,
      • The definition of the current neighborhoods of the causal discrimination variables to which an individual or group of individuals may be subject according to the attributes included in each decision, given that discrimination is a variable phenomenon with (non-exclusive) temporal, geographical, and cultural dimensions,
      • The definition of variables that can be integrated into metrics to assess the different intensity levels of the possibly discriminatory decisions to which individuals or groups of individuals may be exposed,
      • The creation of a certifying authority refereeing the adequacy of the data sources used to support the decision-making process when delivering justice,
      • The creation of a certifying authority evaluating the suitability of the team members, and the completeness of the multidisciplinary teams, that design the models for processing the data used to support the decision-making process while delivering justice,
      • The operationalization of “accessibility” and “understanding” as variables that depend on the understanding of the methods used in data processing, which can be used to define a metric assessing the levels at which potential stakeholders and auditors understand those methods,
      • The creation of a competent auditing authority certifying the adherence of the teams designing artificial intelligence solutions for use by justice administrators, within the environment delimited by the charter, to the proposed principles,
      • The definition of parameters that can be integrated as constraints into the artificial intelligence solution’s reasoning model, during the design stages, to avoid recommending decisions that amount to a prescriptive approach in the context delimited by the charter (a minimal sketch of such a guard follows this list), and
      • The definition of parameters that can be integrated into metrics evaluating the prescriptive nature of the approach described by the decisions recommended by artificial intelligence solutions within the environment delimited by the charter.

These elements are the necessary intermediate layers toward the adoption of the principles as a methodological reference for the design of artificial intelligence solutions in the administration of justice.
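To illustrate the last two items, here is a minimal sketch of a guard against prescriptive output, in the spirit of the “under user control” principle; the structure, names, and the minimum-alternatives constraint are hypothetical examples of mine, not anything the charter specifies.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    option: str      # the course of action being suggested
    rationale: str   # explanation the user can inspect and challenge
    score: float     # model confidence, used only for ordering

# Hypothetical constraint: never surface a single, unexplained directive.
MIN_ALTERNATIVES = 2

def present_to_user(recs: list[Recommendation]) -> list[Recommendation]:
    """Release output only when the user can choose among explained options."""
    if len(recs) < MIN_ALTERNATIVES or any(not r.rationale for r in recs):
        raise ValueError("prescriptive output blocked: "
                         "at least two explained alternatives are required")
    return sorted(recs, key=lambda r: r.score, reverse=True)
```

A guard like this keeps the system’s role advisory: the user remains the informed actor making the final choice, which is what the fifth principle asks for.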

After an analysis of the language used in the document, in which I used the NLTK library and a Python development environment to extract the 50 most frequent n-grams from the charter’s body text, it turned out that:

      • The uni-grams with relative frequencies greater than .50 units described the environment delimited in the charter rather than the objective intended by the principles proposal, or the variables in which it is expressed: Judicial (.87), Decisions (.82), Law (.72), Processing (.63), Legal (.62), Case (.56), Public/ Tools/ Judges/ Use (.53), and Justice (.52),
      • The bi-grams, however, begin to delimit the charter’s scope in the context described by the uni-grams, displaying with higher relative frequencies the terms: Machine Learning (.31), Judicial Decisions (.28), Artificial Intelligence/ Open Data (.27), Judicial Systems (.20), and Personal Data (.19). Meanwhile, other terms like Data Protection and Fair Trial exhibit lower values, .06 and .05 units of relative frequency, respectively,
      • The tri-grams, on the other hand, connect the environment and the scope through the following compositions: Protection Personal Data (.09) and Artificial Intelligence Tools/ Processing Judicial Decisions/ Judicial Decisions Data (.06), and
      • It is interesting how, through the identified tri-grams Use Artificial Intelligence/ Intelligence Tool Services/ Predictive Justice Tools and Checklist Evaluating Processing/ Evaluating Processing Methods, all with a relative frequency of .05, the charter itself points to the need for tools like the ones mentioned earlier in this post.

I would like to conclude by saying that the charter addressed in this post, along with the other documents I will include later in this series, constitutes an effort to solve some of the ethical problems rooted in the design and use of artificial intelligence solutions; in this case, specifically in the context of the administration of justice (the remaining documents will cover other scenarios). I would also like to add that, with this reading exercise, I seek to draw attention to the opportunity for public policy designers and designers of artificial intelligence solutions to collaborate toward a common goal: the responsible design of artificial intelligence.

If you are interested in this topic and have any ideas that complement this review of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, let me know with a comment.

(Image taken from Pixabay)