Hitachi’s Principles for the Ethical Use of AI in its Social Innovation Business
By Hubert Yoshida, posted 03-23-2021 19:08
For more than 100 years, Hitachi has focused on developing the social infrastructure that provides the foundation for nearly every aspect of modern life. Our founding spirit is built on three words that guide the way we work.
Wa (Harmony):
We respect the opinions of others and discuss matters frankly, but also fairly and impartially.
Makoto (Sincerity):
We approach issues openly, honestly and respectfully, in the spirit of true teamwork.
Kaitakusha-seishin (Pioneering Spirit):
We strive to lead in our areas of expertise, promoting the limitless potential of individuals while pursuing new challenges and higher goals.
These values form the foundation for our decisions and link our group companies around the world together. We foster a diverse, rewarding culture that drives innovation and inspires others to do the same.
These guiding principles are as relevant today as they were over 100 years ago, even as new technologies emerge. Hitachi has innovated in many different ways across the two areas of operational technology (OT) and information technology (IT), with the goal of supporting society and creating a society that lives in safety and comfort. Over the past 20 years, AI has become an increasingly important source of innovation and is being applied across a broad spectrum of areas. However, there is a danger that AI will be used unethically. For instance, AI could be used to profile individuals so that they are discriminated against. The late Stephen Hawking even warned that
"The development of full artificial intelligence could spell the end of the human race."
Understanding that AI can be used in an unethical manner, Hitachi has been supporting risk evaluation and control measures in various data utilization projects, and is working on projects that utilize data with the need for privacy protection in mind. Since 1984, Hitachi's Professional Engineers Association, one of the largest societies of in-company professional engineers, has worked to improve the ethical awareness of engineers both inside and outside the company.
Last month, drawing on the experience and know-how described above, leading Hitachi data scientists and AI researchers took the initiative to formulate the
Principles guiding the ethical use of AI
. Hitachi's Lumada Data Science Lab., the center of numerous advanced Collaborative Creation projects in the Social Innovation Business, plays a central role in the specific operations that have now begun: reviewing the purposes for utilizing AI, evaluating the risks involved in the societal implementation of AI, and developing control measures, using a checklist developed for the ethical use of AI. In addition, it actively applies
Explainable AI (XAI),
a technology for explaining AI functionality, as part of its response measures.
Hitachi has established standards of conduct required in each of the three stages of planning, societal implementation, and maintenance and management, along with seven items to be observed that apply to all stages, and will utilize AI based on these guiding principles.
Standards of conduct
1. Development and use of AI will be planned for the realization of a sustainable society
It is important to ensure, from the planning stage, that the reason for using AI in services, solutions or products is appropriate, in order to suppress the ethical risks inherent in AI while generating new value. Hitachi will use AI to resolve issues in society, to realize a comfortable, resilient and sustainable society, and to improve the quality of life of people around the world.
2. AI will be implemented in society with a human-centric perspective
To ensure that decisions made by AI respect the rights of individuals and contribute to the interests of society, it is important that AI is implemented in society in a responsible manner and co-exists harmoniously with humans. Hitachi will implement AI in society from a human-centric perspective, according to the principles of freedom, fairness and equity, and will endeavor to verify that it functions as intended.
3. AI will be maintained and managed to provide long-term value
It is important that AI continues to provide value consistently over the long term after it is implemented in society. Hitachi will endeavor to maintain and manage the value provided by AI in a way that is responsive and acceptable to societal and environmental changes.
Items to be addressed
1. Safety:
AI may cause irreparable harm to humans and society if it is misused or poorly designed. AI systems must always be developed responsibly and toward sustainable public benefit. Safety must be a priority in the design and implementation of AI systems, and AI ethics should strive to avoid the individual and societal harms caused by the misuse, abuse, poor design, or unintended negative consequences of AI systems.
2. Privacy:
Massive amounts of personal data are collected, processed, and utilized to develop AI technologies. Often, big data is captured and extracted without the data owner's proper consent, compromising the privacy of the individual. Ethical AI must not infringe upon the privacy of individuals. Video can be particularly invasive, so Hitachi has invested in anonymizing video through pixelation and LiDAR.
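To illustrate the basic idea behind pixelation (this is a minimal sketch, not Hitachi's actual anonymization pipeline, which would detect faces first and operate on live video), identifying detail can be destroyed by averaging an image over coarse tiles:

```python
import numpy as np

def pixelate(frame: np.ndarray, block: int = 16) -> np.ndarray:
    """Coarsen an image by averaging over block x block tiles,
    making faces and other identifying details unrecoverable."""
    h, w = frame.shape[:2]
    out = frame.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = frame[y:y + block, x:x + block]
            # Replace the whole tile with its mean intensity.
            out[y:y + block, x:x + block] = tile.mean(axis=(0, 1))
    return out

# A dummy 64x64 grayscale "frame" stands in for a real video frame.
frame = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
blurred = pixelate(frame)
# Every 16x16 tile is now a single flat value.
assert np.allclose(blurred[0:16, 0:16], blurred[0, 0])
```

Because the averaging is many-to-one, the original pixels cannot be recovered from the output, which is what makes it an anonymization rather than a reversible blur.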
3. Fairness, Equality, and Prevention of discrimination:
Machine learning algorithms apply weights and biases to certain features to spot patterns and make predictions. However, this can introduce unfair discrimination, depending on the data and algorithms used. If the data used to predict future job success is historical data drawn predominantly from white males, the results are bound to differ from an analysis based on a broader pool of workers. The DataOps behind ethical AI must ensure the fairness of both the algorithms and the data selection.
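One common way to surface this kind of skew, shown here as a small sketch with hypothetical data (the group labels and counts are invented for illustration), is to compare selection rates across groups, as in the "four-fifths rule" used in US employment law:

```python
from collections import defaultdict

def selection_rates(records):
    """Selection rate (fraction of positive outcomes) per group."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for group, selected in records:
        tot[group] += 1
        pos[group] += int(selected)
    return {g: pos[g] / tot[g] for g in tot}

def disparate_impact(records):
    """Ratio of the lowest to the highest group selection rate.
    The 'four-fifths rule' flags values below 0.8 for review."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: (group, model_selected)
history = ([("A", 1)] * 60 + [("A", 0)] * 40 +
           [("B", 1)] * 30 + [("B", 0)] * 70)
ratio = disparate_impact(history)
print(round(ratio, 2))  # 0.5 -- well below 0.8, a red flag
```

A check like this says nothing about *why* the disparity exists, but it gives DataOps an objective trigger for auditing the training data and features.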
4. Proper and responsible development and use:
The development and use of AI must be done responsibly, to the highest standards of scientific excellence. Like the response to the COVID pandemic, it should be based on science, not personal biases. Ethical AI should be rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration.
5. Transparency, Explainability and Accountability:
To create fair and accountable AI, we need precise regulation and better methods to certify, explain, and audit inscrutable systems. Hitachi's Central Research Lab is developing XAI: methods and techniques for applying AI technology such that the results can be understood by human experts, such as the regulators, official bodies and general users who come to depend on AI-based dynamic systems and require clearer accountability to ensure trust and transparency in the results. One example is a co-creation partnership with Sumishin Net Bank and Dayta Consulting to provide a loan screening service using XAI, which considers social values in the loan approval process and makes it understandable to human loan officers. This enables more loans to be approved, creating greater opportunities, and XAI provides another layer of ethical trust over the AI loan approval process. Hitachi's co-creation with customers also ensures transparency and accountability.
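One widely used model-agnostic XAI technique is permutation importance: shuffle one feature at a time and measure how much the model's accuracy drops. This sketch uses a toy stand-in for a loan model (the threshold model and data are invented; Hitachi's actual XAI methods are not disclosed in this post), but the technique works on any black-box predictor:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: how much does shuffling one
    feature degrade accuracy? Works on any black-box predictor."""
    rng = np.random.default_rng(seed)
    base = np.mean(model(X) == y)
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target link
            drops.append(base - np.mean(model(Xp) == y))
        scores.append(np.mean(drops))
    return np.array(scores)

# Toy "loan model": approve when income (feature 0) is positive;
# feature 1 is noise the model ignores.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
model = lambda X: (X[:, 0] > 0).astype(int)
y = model(X)
imp = permutation_importance(model, X, y)
# Shuffling feature 0 hurts accuracy; shuffling feature 1 does not.
assert imp[0] > imp[1]
```

An importance profile like this is exactly the kind of artifact a human loan officer or regulator can inspect: it says which inputs actually drove the decisions, without requiring access to the model's internals.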
6. Security:
AI and ML require more data, and more complex data, than other technologies, and most of that data and processing lives in the cloud. This presents a host of security vulnerabilities. All systems, including new AI-powered projects, must therefore be built around core data security principles, including encryption, logging, monitoring, authentication and access controls. Digital trust must be innate to an AI platform.
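As a small sketch of how logging and authentication intersect (the key and record fields here are hypothetical, and a real deployment would keep the key in a secrets manager, not in code), each logged AI decision can carry an HMAC tag so that tampering is detectable at audit time:

```python
import hashlib
import hmac
import json

SECRET = b"rotate-me-out-of-band"  # hypothetical key; store in a KMS, not in code

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a logged AI decision so any
    later modification of the record is detectable."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def verify_entry(signed: dict) -> bool:
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expect = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expect, signed["tag"])

record = sign_entry({"model": "loan-v2", "decision": "approved", "user": "12345"})
assert verify_entry(record)
record["entry"]["decision"] = "denied"   # tampering...
assert not verify_entry(record)          # ...is caught
```

Signed logs on their own are only one layer; they complement, rather than replace, encryption at rest and access controls on the data the model consumes.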
7. Compliance:
Hitachi will realize AI and AI operations that comply with the applicable laws and regulations of the countries and regions in which the AI is to be used. One of the lesser known areas covered by GDPR is the "Right to Explanation". GDPR Articles 13-15 and 21-22 specify that when automated data processing and decision making is applied to a person's personal data, that person has a right to receive an explanation of how the decision was made. Automated data processing and decision systems typically use machine learning or AI. The intent is to force AI to explain its decisions so that citizens can evaluate and correct any wrong or biased decisions and see how their personal data is used to generate results. AI can no longer be a black box. Here is a post that I did on
GDPR and Explainable AI
For details, please visit
Hitachi's Feb 22, 2021 Announcement
#Blog
#Hu'sPlace
Comments
Chayan Sarkar
05-04-2022 11:54
Excellent article
Dipta Kundu
04-27-2022 02:48
Nicely Written
© Hitachi Vantara LLC 2023. All Rights Reserved.