Brief introduction to the Trustworthiness Research Working Group:

The Trustworthiness Research Working Group was established on 6 August 2020, during the first plenary meeting of the National Information Technology Standardization Technical Committee's Subcommittee for Artificial Intelligence (TC 28/SC 42). Its work focuses on the elements of trustworthiness in artificial intelligence (AI) systems, exploring testing technologies, evaluation methods, and application channels that cover every element and the whole process of AI systems. The Working Group's mission is to improve the trustworthiness of AI systems across multiple dimensions, including hardware, datasets, algorithms, and systems.

 

Group leader: Xue Yunzhi from the Institute of Software, Chinese Academy of Sciences

Deputy group leader: Wang Xiaoyu from Intellifusion

Deputy group leader: Jiang Hui from Shanghai SenseTime Intelligent Technology Co., Ltd.

Deputy group leader: Gao Xuesong from Qingdao Hisense Electronic Industry Holdings Co., Ltd.

(Secretariat Contact: Li Binbin 15624952070 / 010-64102859)

 

Achievements in trustworthiness research:

  • Analysis Report on AI Ethical Risks
  • T/CESA 1026—2018 Artificial intelligence – Assessment specification for deep learning algorithms
  • T/CESA 1036—2019 Information technology – Artificial intelligence – Quality elements and testing methods of machine learning models and systems

 

The following table summarises the ongoing work of the Working Group:

 

Category   Name
Report     White paper on trustworthy AI standardisation
Report     Practical cases of R&D of trustworthy AI technology
Report     Research report on fairness and supervision of AI algorithms
Standard   Artificial intelligence – Risk assessment and management
Standard   Technical framework for trustworthy AI technology
Standard   Artificial intelligence – Guidelines for ethics and social relations
Standard   Artificial intelligence – Assessment for dataset quality
Standard   Artificial intelligence – Technical requirements for privacy protection in machine learning systems
Standard   Artificial intelligence – Robustness requirements and evaluation methods for neural networks
Standard   Artificial intelligence – Trustworthy technical requirements for computing devices
Test       Risk assessment of AI products