Building Trust with DataOps and XAI – Explainable AI
By Hubert Yoshida, posted 08-21-2019 19:25
In my last blog post on AI, I wrote about the meticulous care that our data scientist, Mike Foley, puts into the development of an AI model: from exploratory data analysis for variable selection and feature engineering, through iterative training, testing, and validation of models to see if they have learned enough to predict accurately, to running at least three predictive models to find the technique with the highest predictive accuracy, and preserving the data by-products of all these activities.
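To make that workflow concrete, here is a minimal sketch, in Python with scikit-learn, of comparing several candidate models and preserving the evaluation by-products; it is not Mike Foley's actual pipeline, and the dataset, model choices, and file names are illustrative assumptions.

```python
# A minimal sketch (not the actual Hitachi workflow) of comparing candidate
# models and preserving the validation by-products for later explanation.
# Dataset, model choices, and file paths are hypothetical.
import json
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

results = {}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    results[name] = {"cv_scores": scores.tolist(), "mean_accuracy": scores.mean()}

# Preserve the comparison so the model selection can be explained later.
with open("model_comparison.json", "w") as f:
    json.dump(results, f, indent=2)

best = max(results, key=lambda n: results[n]["mean_accuracy"])
print(f"Selected technique: {best} "
      f"({results[best]['mean_accuracy']:.3f} mean CV accuracy)")
```

Saving the cross-validation results alongside the chosen model is what later lets a human explain why one technique was selected over the others.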
These activities are certainly done to produce an accurate outcome that is free of bias or error, but they are also done so that the results are explainable to human beings. This is the concept of XAI, or explainable AI. As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required in decision-making processes to ensure trust and transparency. Evidence that this requirement is gaining momentum can be seen in the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI). This is all about building trust in AI.
The outcomes of AI and machine learning can be influenced by the biases of the data scientist or of the training data, which could lead to unjust or unintended results. For instance, such biases could skew criminal sentencing decisions along racial or other lines. There is also the possibility that AI and ML can learn to cheat. Wikipedia describes how "AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data, but do not reflect the complicated implicit desires of the human system designers. For example, a 2017 system tasked with image recognition learned to 'cheat' by looking for a copyright tag that happened to be associated with horse pictures, rather than learning how to tell if a horse was actually pictured."
AI and ML will be increasingly regulated by the Right to Explanation, which is the right to be given an explanation for decisions that significantly affect an individual's legal, financial, or societal rights. For example, a person who applies for a loan and is denied may ask for an explanation. That explanation could cite credit bureau reports showing that he declared bankruptcy last year, that this is the main factor in assessing the likelihood of future defaults, and that the bank therefore will not grant the loan he applied for.
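As a toy illustration of what such an explanation might look like when the decision comes from a model, the sketch below trains a small logistic-regression credit model and reports which features pushed an applicant toward denial. The feature names, synthetic training data, and decision threshold are all hypothetical assumptions, not a real scoring system.

```python
# A toy "right to explanation" sketch: report the per-feature contributions
# behind a credit decision. Features, data, and threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["recent_bankruptcy", "debt_to_income", "years_employed"]

# Tiny synthetic training set: 1 = defaulted, 0 = repaid.
X_train = np.array([
    [1, 0.9, 1], [1, 0.7, 2], [0, 0.2, 10],
    [0, 0.3, 7], [1, 0.8, 1], [0, 0.1, 12],
])
y_train = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([[1, 0.6, 3]])   # declared bankruptcy last year
p_default = model.predict_proba(applicant)[0, 1]

# Simple per-feature contribution: coefficient * feature value.
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.2f}")

print(f"Estimated default probability: {p_default:.2f} -> "
      f"{'denied' if p_default > 0.5 else 'approved'}")
```

The printed contributions give a human-readable basis for telling the applicant that the recent bankruptcy, rather than some opaque score, drove the denial.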
The right to explanation is not always mandated by law. However, there are several examples where this right is being applied. In the United States, credit scores have a well-established right to explanation under the Equal Credit Opportunity Act, and insurance companies are also required to be able to explain their rate and coverage decisions. If decisions on credit scores or rates were made by AI or ML algorithms, these companies would have to explain how those algorithms worked. In Europe, the European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) as an attempt to deal with the potential problems stemming from the rising importance of algorithms.
Recital 71 of GDPR states: "The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."
Hitachi data scientists are mindful of the right to explanation and use DataOps methodologies to develop trusted outputs. The Hitachi Content Platform (HCP) is the best DataOps platform for storing the data that supports the Right to Explanation for AI, or explainable AI (XAI). HCP has governance controls that include encryption, multitenancy, and hashing for immutability. Custom metadata tagging and built-in query capability are available for additional governance and transparency. These features are important as users are beginning to question whether AI can be trusted, and AI developers are being challenged on the validity of their methodologies. AI can no longer be a "black box," since it has the ability to make decisions that affect our lives.
There will be increasing "Right to Explanation" regulations challenging decisions that are made through AI and ML. The ability to respond to these regulations will depend on the care with which data scientists develop and deploy AI and ML. It will also depend on the preservation of the data by-products that are generated in the exploratory data analysis, training, testing, and validation steps. Get ahead of the XAI regulations with Hitachi Vantara's DataOps Advantage.
#Hu'sPlace
#Blog