
Implementing Responsible AI: Ensuring Ethical and Effective Use of Technology

Almost every product intended for human consumption comes with a disclaimer or a statutory warning. Cigarette packs warn users that smoking can be injurious to health. Packaged foods carry expiry dates and warnings about allergens, preservatives, and ingredients that can pose a health hazard. Even the rear-view mirrors in cars advise caution, lest the view they provide leads to miscalculations that trigger accidents. Across products and industries the message is clear – exercise responsibility, consume responsibly!

With the use of Artificial Intelligence (AI) continuing to gain momentum, and the dangers inherent in its use becoming increasingly clear, one would expect to see an industry-wide framework for its safe adoption. That, however, is not yet the case, leading many tech giants to bemoan the relative lack of legislation governing its setup and usage.

The need

AI’s status as one of the most transformative technologies in the world is almost unchallenged. It is the trend most likely to impact business growth in the coming decades. Its market size today is in the region of USD 515 billion, a figure expected to increase five-fold in less than a decade at a CAGR of 20.4%. Almost 50% of all organizations worldwide are believed to use it, and it is projected to add a whopping $15.7 trillion to global GDP by 2030.

But with the benefits come risks that compel discretion, care, and caution when setting it up and using it.

The reason is easy to see.

AI is driven by algorithms that process humongous amounts of data, increasingly through models such as Large Language Models (LLMs) trained on vast data repositories. While they do produce exceptional results, algorithms are also known to malfunction. Amongst the main concerns associated with AI are the following (a simple bias check is sketched after the list):

  • Possibility of bias creeping into results and recommendations
  • Discriminatory outcomes
  • Data privacy concerns due to the large amount of sensitive personal data processed
  • Potential for misuse that can compromise public safety and peace
  • Lack of explainability, transparency, and accountability in the process
  • Possibility of inconsistency in results
  • Ethical, legal, and moral concerns
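
To make the bias concern concrete, below is a minimal sketch, in Python, of a demographic-parity check. It is our illustration rather than part of any cited framework; the outcome data, group labels, and 0.2 tolerance are purely hypothetical.

```python
# A minimal bias check: compare a model's approval rates across groups
# (demographic parity). Data and the tolerance below are illustrative.

def approval_rate(outcomes):
    """Share of positive (approved) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

# Hypothetical model outputs, keyed by a protected attribute.
outcomes_by_group = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],
}

rates = {g: approval_rate(o) for g, o in outcomes_by_group.items()}
gap = max(rates.values()) - min(rates.values())

print(f"approval rates: {rates}")
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a regulatory threshold
    print("WARNING: gap exceeds tolerance; review the model for bias")
```

Checks like this belong in regular audits, since a model that is fair at launch can drift as the data it sees changes.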

In an attempt to address these concerns, regulatory bodies and AI experts are advocating a structured approach to adopting and using AI. For a few years now, Responsible AI has been a buzzword in cybersecurity circles.

What it is

A good definition of Responsible AI comes from Forbes (1), which describes it as the design, development, and deployment of autonomous processes and systems that result in standards and protocols characterized by their ethics, efficacy, and trustworthiness. Responsible AI takes a big-picture view of an organization's operations and how, from the design stage itself, they align with established societal norms, ethics, and moral and legal requirements. Gartner (2) goes further, listing the values businesses must pursue, such as trust, fairness, privacy, transparency, and bias mitigation, through the conscious adoption and operation of AI practices that uphold, and are seen to uphold, ethical, moral, legal, and regulatory values.

Ensuring that AI systems attain these values would necessitate that they are built on strong principles that stand the test of audit. It would mean looking closely at the algorithms to ensure that they are:

  • Fair in terms of their outcomes rather than being discriminatory, biased, or unjust
  • Explainable in terms of the process used to arrive at results and recommendations. More often than not, users cannot explain how a result was obtained, which makes any decision based on that result hard to justify (a brief explainability sketch follows this list)
  • Sustainable in the computing process associated with the generation of results
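
As one way of auditing explainability, the sketch below ranks which input features actually drive a model's predictions using scikit-learn's permutation importance. The synthetic dataset and the logistic model are illustrative assumptions, not part of any cited guidance.

```python
# A minimal explainability audit: measure how much shuffling each input
# feature degrades accuracy (permutation importance). The synthetic data
# and the logistic model are stand-ins for a real system.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")
```

If the ranking cannot be reconciled with domain knowledge, that is a signal the model's reasoning, and any decision built on it, needs review.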

Building Responsible AI

Responsible AI has become a watchword for organizations looking to achieve further recognition in terms of goodwill, investor confidence, and the sustainable values they stand for. With AI supporting both core and ancillary functions in an organization, becoming a Responsible AI-driven company now challenges C-suite executives and board members as they endeavor to navigate the risks inherent in its design and deployment.

Here we present some of the considerations that will need to be looked at:

  • Drafting of a sound, fair, and practical strategy for the adoption of AI through a collaborative process involving all stakeholders
  • Adopting an excellence-driven approach involving astute planning and coordinated implementation of the AI process
  • Cross-functional collaboration between various departments to ensure diverse risk perspectives are taken care of, and unforeseen circumstances and vulnerabilities are minimized
  • Adoption of best-in-class performance metrics that will surface inconsistencies and unacceptable behaviors during audits and assessments
  • A clear accountability and responsibility matrix with checks and counterchecks (a minimal audit-trail sketch follows this list)
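
To hint at what clear accountability can look like in practice, here is a hedged sketch of an audit trail that records who is answerable for a prediction, on what input, and with what outcome. The field names and logging backend are our illustrative choices, not a prescribed mechanism.

```python
# A minimal accountability audit trail: every prediction is logged with
# a timestamp, the accountable owner, and the model version, so an audit
# can trace any outcome back to a person and a system state.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def predict_with_audit(model_fn, features, owner, model_version):
    """Run a prediction and append an audit record for it."""
    outcome = model_fn(features)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "owner": owner,                  # accountable person or team
        "model_version": model_version,
        "features": features,
        "outcome": outcome,
    }))
    return outcome

# Illustrative usage with a stand-in model.
approved = predict_with_audit(lambda f: sum(f) > 1.0, [0.4, 0.9],
                              owner="risk-team", model_version="v1.2")
```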

NIST (4) lists seven key principles that support the development of Responsible AI:

  • Accountability and transparency to enhance trust
  • Explainability and interpretability of process and output
  • Fair and unbiased outcomes
  • Privacy-enhanced data management with emphasis on anonymity and confidentiality (a pseudonymization sketch follows this list)
  • Resilience to identify, withstand, and recover from cybersecurity attacks
  • Reliability in terms of output
  • Safety so as to prevent misuse that can endanger human life and property
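
As a small illustration of the privacy-enhancement principle, the sketch below pseudonymizes a direct identifier with a salted one-way hash before the record enters any downstream pipeline. The salt handling shown is an illustrative assumption, not NIST guidance.

```python
# A minimal pseudonymization step: replace a direct identifier with a
# salted SHA-256 digest so records stay linkable without exposing the
# raw value. How the salt is stored and rotated is out of scope here.
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Return a salted one-way hash standing in for the identifier."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

record = {"email": "jane.doe@example.com", "score": 0.82}
record["email"] = pseudonymize(record["email"], salt="per-dataset-secret")
print(record)  # the raw email never reaches downstream systems
```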

Forbes (5) suggests the adoption of a risk-based approach, in keeping with the risks inherent in AI. Amongst the main risks that would need to be addressed are the following (a simple risk-scoring sketch follows the list):

  • Privacy risks associated with possible misuse of personal data and violation of privacy rights
  • Ethical risks associated with unfair, biased, and discriminatory outcomes
  • Compliance risks due to AI systems not being aligned with prevailing regulations
  • Transparency risks due to obscurity in the process
  • Operational and business continuity risks as a consequence of system failure
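
A risk-based approach is typically operationalized as a scored register. The sketch below ranks the five risks above by a simple likelihood-times-impact score; the numbers are placeholders of ours, not Forbes data.

```python
# A minimal risk-based triage: score each AI risk on 1-5 likelihood and
# impact scales, then rank the register so the highest scores are
# mitigated first. All scores below are illustrative placeholders.
risks = [
    ("privacy",      4, 5),  # (name, likelihood, impact)
    ("ethical",      3, 5),
    ("compliance",   3, 4),
    ("transparency", 4, 3),
    ("operational",  2, 4),
]

for name, likelihood, impact in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:<13} score={likelihood * impact}")
```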

Conclusion

With good governance and ethical practices being key organizational drivers, it is not surprising to see many value-driven organizations adopting Responsible AI practices. Some, like Google, IBM, and Microsoft, have instituted their own lists of principles. NIST's AI Risk Management Framework is drawn up along similar lines.

Such is the ongoing challenge, however, that both Google and Microsoft have been at the receiving end of criticism for racial, ethnic, and gender biases showing up in their AI face detection services. Facebook ran into trouble over a racially suggestive message, which led it to review and disable an AI-powered feature built on a flawed dataset used to train its algorithms.

But there is light at the end of the tunnel. Considering organizations' awareness of and desire to implement it, and the imminent legislation (6) that will mandate it, it is certain that we will see more organizations carefully navigating the risk landscape and judiciously implementing Responsible AI in the coming years.

Statutory warning, or not.

References:


Contact us at sales@aurorait.com or call 888-282-0696 to learn more about how Aurora can help your organization with IT, consulting, compliance, assessments, managed services, or cybersecurity needs.
