Overview
Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops.
- The committee's focus is on policies, guidelines, tooling, and use cases by industry
- Survey and contact current open source Trusted AI related projects to join LF AI efforts
- Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI
- Create a document that describes the basic concepts and definitions related to Trusted AI and aims to standardize the vocabulary/terminology
Mail List
Please self-subscribe to the mailing list at https://lists.lfai.foundation/g/trustedai-committee.
Or email trustedai-committee@lists.lfai.foundation for more information.
Participants
Initial Organizations Participating: AT&T, Amdocs, Ericsson, IBM, Orange, Tech Mahindra, Tencent
Committee Chairs
Name | Region | Organization | Email Address | LF ID |
---|---|---|---|---|
Animesh Singh | North America | IBM | | |
Souad Ouali | Europe | Orange | | |
Jeff Cao | Asia | Tencent | | |
Committee Participants
Name | Organization | Email Address | LF ID |
---|---|---|---|
Ofer Hermoni | Amdocs | oferher@gmail.com | |
Mazin Gilbert | AT&T | | |
Alka Roy | AT&T | | |
Mikael Anneroth | Ericsson | | |
Alejandro Saucedo | The Institute for Ethical AI and Machine Learning | | |
Jim Spohrer | IBM | | spohrer |
Maureen McElaney | IBM | | |
Susan Malaika | IBM | | sumalaika (but different email address) |
Romeo Kienzler | IBM | | |
Francois Jezequel | Orange | | |
Nat Subramanian | Tech Mahindra | | |
Han Xiao | Tencent | | |
Wenjing Chu | Futurewei | chu.wenjing@gmail.com | |
Yassi Moghaddam | ISSIP | yassi@issip.org | |
Assets
- All the assets being
Sub Categories
- Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations (a minimal illustrative sketch follows this list)
- Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks
- Explainability: Methods to make AI model outcomes and decision recommendations understandable and interpretable to the personas/roles involved in the process, including ranking and debating results/decision options
- Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
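As a concrete illustration of the Fairness sub-category, the sketch below computes two common group fairness metrics directly with pandas. The toy data and the `gender`/`hired` column names are hypothetical stand-ins, not committee artifacts.

```python
# Minimal sketch of a group fairness check (hypothetical data and column
# names): statistical parity difference and disparate impact between a
# privileged and an unprivileged group.
import pandas as pd

# Toy hiring outcomes; "gender" is the protected attribute and "hired" the
# favorable outcome (1 = favorable). Values are purely illustrative.
df = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "F", "F", "F", "M"],
    "hired":  [1,   1,   0,   0,   1,   0,   0,   1],
})

priv_rate = df.loc[df["gender"] == "M", "hired"].mean()    # privileged group
unpriv_rate = df.loc[df["gender"] == "F", "hired"].mean()  # unprivileged group

# Statistical parity difference: 0 means parity; negative values mean the
# unprivileged group receives the favorable outcome less often.
spd = unpriv_rate - priv_rate

# Disparate impact: 1 means parity; values below ~0.8 are a common red flag.
di = unpriv_rate / priv_rate

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact: {di:.2f}")
```

The same quantities, along with many more metrics and mitigation algorithms, are exposed by the toolkits listed under Projects below.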
Projects
Name | Github | Website |
---|---|---|
AI Fairness 360 | | |
Adversarial Robustness 360 | | |
AI Explainability 360 | | |
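Of the projects above, Adversarial Robustness 360 (also known as the Adversarial Robustness Toolbox, ART) covers the Robustness sub-category. The sketch below is a minimal, illustrative example only: it assumes the ART 1.x Python API (`art.estimators.classification.PyTorchClassifier`, `art.attacks.evasion.FastGradientMethod`), and the data, model, and epsilon budget are placeholders rather than a committee-endorsed recipe.

```python
# Minimal, illustrative robustness probe: craft FGSM adversarial examples
# against a toy PyTorch model and compare clean vs. adversarial accuracy.
# Assumes the ART 1.x API; all data and model details are placeholders.
import numpy as np
import torch
import torch.nn as nn

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy, untrained classifier for 4-feature inputs and 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(4,),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Random stand-in data; in practice this would be a held-out test set.
rng = np.random.default_rng(0)
x_test = rng.random((100, 4), dtype=np.float32)
y_test = rng.integers(0, 2, size=100)

# Fast Gradient Method perturbation with a small epsilon budget.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

A large drop from clean to adversarial accuracy is the kind of signal the Robustness methods above aim to detect and mitigate.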
Working Groups
Trusted AI Principles Working Group
Trusted AI Technical Working Group
Meetings
Zoom info: Trusted AI Committee meeting - alternate Thursdays, 4pm Paris / 10am ET / 7am PT
https://zoom.us/j/7659717866
How to Join: Visit the Trusted AI Committee Group Calendar to self-subscribe to meetings.
Or email trustedai-committee@lists.lfai.foundation for more information.
Meeting Content (minutes / recording / slides / other):
Date | Agenda/Minutes |
---|---|
2019-10-17 | Attendees: Ofer, Alka, Francois, Nat, Han, Animesh, Jim, Maureen, Susan, Alejandro. Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191017.md |
2019-10-03 | Attendees: Animesh Singh (IBM), Maureen McElaney (IBM), Han Xiao (Tencent), Alejandro Saucedo, Mikael Anneroth (Ericsson), Ofer Hermoni (Amdocs). Animesh will check with Souad Ouali to ensure Orange wants to lead the Principles working group and host regular meetings. Committee members on the call were not included in the email chains that occurred, so we need to confirm who is in charge and how communication will occur. The Technical working group has made progress but nothing concrete to report. A possible third working group could form around AI Standards. Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191003.md |
2019-09-19 | Notes from the call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20190919.md |