Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee's focus is on policies, guidelines, tooling, and use cases by industry

  • Survey and contact current open-source Trusted AI related projects to join the LF AI efforts

  • Create a badging or certification process for open-source projects that meet the Trusted AI policies/guidelines defined by LF AI

  • Create a document that describes the basic concepts and definitions related to Trusted AI and aims to standardize the vocabulary/terminology

Mail List

Please self-subscribe to the mailing list at https://lists.lfai.foundation/g/trustedai-committee

Or email trustedai-committee@lists.lfai.foundation for more information. 

Meetings

Zoom info: Trusted AI Committee meeting - alternate Thursdays, 10 PM Shenzhen (China), 4 PM Paris, 10 AM ET / 7 AM PT (USA); adjusted for daylight saving time as needed
https://zoom.us/j/7659717866

Participants

Initial Organizations Participating: IBM, Orange, AT&T, Amdocs, Ericsson, TechM, Tencent

Committee Chairs

Name | Region | Organization | Email Address | LF ID
Animesh Singh | North America | IBM | singhan@us.ibm.com |
Souad Ouali | Europe | Orange | souad.ouali@orange.com |
Jeff Cao | Asia | Tencent | jeffcao@tencent.com |

Committee Participants

Name | Organization | Email Address | LF ID
Ofer Hermoni | Amdocs | oferher@gmail.com |
Mazin Gilbert | AT&T | mazin@research.att.com |
Alka Roy | AT&T | AR6705@att.com |
Mikael Anneroth | Ericsson | mikael.anneroth@ericsson.com |
Alejandro Saucedo | The Institute for Ethical AI and Machine Learning | a@ethical.institute |
Jim Spohrer | IBM | spohrer@us.ibm.com | spohrer
Saishruthi Swaminathan | IBM | saishruthi.tn@ibm.com |
Susan Malaika | IBM | malaika@us.ibm.com | sumalaika (but different email address)
Romeo Kienzler | IBM | romeo.kienzler@ch.ibm.com |
Francois Jezequel | Orange | francois.jezequel@orange.com |
Nat Subramanian | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com |
Han Xiao | Tencent | hanhxiao@tencent.com |
Wenjing Chu | Futurewei | chu.wenjing@gmail.com | Wenjing Chu
Yassi Moghaddam | ISSIP | yassi@issip.org |

Assets

- All the assets being 

Sub Categories

- Fairness: methods to detect and mitigate bias in datasets and models, including bias against known protected populations (a minimal code sketch follows the Projects table below)
- Robustness: methods to detect alterations or tampering with datasets and models, including alterations from known adversarial attacks
- Explainability: methods to make AI model outcomes and decision recommendations more understandable and interpretable for the personas/roles involved, including ranking and debating results/decision options
- Lineage: methods to ensure the provenance of datasets and AI models, including reproducibility of generated datasets and models

Projects

Name | GitHub | Website
AI Fairness 360 | https://github.com/IBM/AIF360 | http://aif360.mybluemix.net/
Adversarial Robustness 360 | https://github.com/IBM/adversarial-robustness-toolbox | https://art-demo.mybluemix.net/
AI Explainability 360 | https://github.com/IBM/AIX360 | http://aix360.mybluemix.net
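
To make the Fairness category concrete, below is a minimal sketch (in Python) that uses AI Fairness 360, listed above, to compute two common dataset bias metrics; the toy data, column names, and group encodings are hypothetical and chosen only for illustration.

import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged, 0 = unprivileged),
# 'label' is the favorable/unfavorable outcome. Column names are hypothetical.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],
    "age":   [35, 42, 51, 29, 33, 47, 38, 26],
    "label": [1, 1, 0, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact close to 1.0 and statistical parity difference close to 0
# suggest the dataset treats both groups similarly on the favorable label.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())

AIF360 also ships mitigation algorithms (for example, Reweighing in aif360.algorithms.preprocessing) that can be applied when such metrics indicate bias.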

Working Groups

 Trusted AI Principles Working Group

 Trusted AI Technical Working Group

Meetings

Zoom info: Trusted AI Committee meeting - alternate Thursdays, 10 PM Shenzhen (China), 4 PM Paris, 10 AM ET / 7 AM PT (USA); adjusted for daylight saving time as needed
https://zoom.us/j/7659717866

How to Join: Visit the Trusted AI Committee Group Calendar to self-subscribe to meetings.

Or email trustedai-committee@lists.lfai.foundation for more information. 

Meeting Content (minutes / recording / slides / other):

Date

Agenda/Minutes

Agenda

 22.02.2024 The Transformation of the Trusted AI Committee to Responsible AI as a Generative AI Commons Workstream – LFAI & Data (lfaidata.foundation)

We switched to the new LFX system for meeting scheduling and recording. All previous mailing-list subscribers should be listed as LFX members of the committee. All recordings are available at https://openprofile.dev/.


(Screenshot of the openprofile.dev calendar)



Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee's focus is on policies, guidelines, tooling, and use cases by industry

  • Survey and contact current open-source Trusted AI related projects to join the LF AI & Data efforts

  • Create a badging or certification process for open-source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data

  • Create a document that describes the basic concepts and definitions related to Trusted AI and aims to standardize the vocabulary/terminology




Assets




Meetings



Trusted AI Committee Monthly Meeting - 4th Thursday of the month (additional meetings as needed)

  • 10 AM ET USA (reference time; all other times should be checked against daylight saving changes)
  • 10 PM Shenzhen, China
  • 7:30 PM India
  • 4 PM Paris
  • 7 AM PT USA

Zoom channel : 

https://zoom-lfx.platform.linuxfoundation.org/meeting/94505370068?password=bde61b75-05ae-468f-9107-7383d8f3e449







Committee Chairs

Name | Region | Organization | Email Address | LF ID | LinkedIn
Andreas Fehlner | Europe | ONNX | fehlner@arcor.de | | https://www.linkedin.com/in/andreas-fehlner-60499971
Susan Malaika | America | IBM | malaika@us.ibm.com | Susan Malaika (but different email address) | https://www.linkedin.com/in/susanmalaika
Suparna Bhattacharya | Asia | HPE | suparna.bhattacharya@hpe.com | Suparna Bhattacharya | https://www.linkedin.com/in/suparna-bhattacharya-5a7798b
Adrian Gonzalez Sanchez | Europe | HEC Montreal / Microsoft / OdiseIA | adrian.gonzalez-sanchez@hec.ca | Adrian Gonzalez Sanchez (but different email address) | https://www.linkedin.com/in/adriangs86




Participants

Initial Organizations Participating: IBM, Orange, AT&T, Amdocs, Ericsson, TechM, Tencent


Name | Organization | Email Address | LF ID
Ofer Hermoni | PieEye | oferher@gmail.com | Ofer Hermoni
Mazin Gilbert | AT&T | mazin@research.att.com | ...
Alka Roy | Responsible Innovation Project | alka@responsibleproject.com | ...
Mikael Anneroth | Ericsson | mikael.anneroth@ericsson.com | ...
Alejandro Saucedo | The Institute for Ethical AI and Machine Learning | a@ethical.institute | Alejandro Saucedo
Jim Spohrer | Retired IBM, ISSIP.org | spohrer@gmail.com | Jim Spohrer
Saishruthi Swaminathan | IBM | saishruthi.tn@ibm.com | Saishruthi Swaminathan
Susan Malaika | IBM | malaika@us.ibm.com | sumalaika (but different email address)
Romeo Kienzler | IBM | romeo.kienzler@ch.ibm.com | Romeo Kienzler
Francois Jezequel | Orange | francois.jezequel@orange.com | Francois-J
Nat Subramanian | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com | Natarajanc
Han Xiao | Tencent | hanhxiao@tencent.com | ...
Wenjing Chu | Futurewei | chu.wenjing@gmail.com | Wenjing Chu
Yassi Moghaddam | ISSIP | yassi@issip.org | Yassi Moghaddam
Animesh Singh | IBM | singhan@us.ibm.com | Animesh Singh (Deactivated)
Souad Ouali | Orange | souad.ouali@orange.com | Souad Ouali
Jeff Cao | Tencent | jeffcao@tencent.com | ...
Ron Doyle | Broadcom | ron.doyle@broadcom.com |




Sub Categories

  • Fairness: methods to detect and mitigate bias in datasets and models, including bias against known protected populations
  • Robustness: methods to detect alterations or tampering with datasets and models, including alterations from known adversarial attacks (a minimal code sketch follows this list)
  • Explainability: methods to make AI model outcomes and decision recommendations more understandable and interpretable for the personas/roles involved, including ranking and debating results/decision options
  • Lineage: methods to ensure the provenance of datasets and AI models, including reproducibility of generated datasets and models
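
As a concrete illustration of the Robustness category, here is a minimal sketch using the Adversarial Robustness Toolbox (ART, the robustness project listed elsewhere on this page) to craft FGSM adversarial examples against a small scikit-learn model; the random data and epsilon value are illustrative assumptions, not a recommended evaluation setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple scikit-learn model on random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Wrap the model so ART can attack it, then generate FGSM perturbations.
classifier = SklearnClassifier(model=model)
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

# Compare accuracy on clean vs. adversarial inputs; a large drop indicates
# the model is sensitive to small, adversarially chosen perturbations.
clean_acc = (model.predict(X) == y).mean()
adv_acc = (model.predict(X_adv) == y).mean()
print(f"Clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")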




Projects




Meeting Content (minutes / recording / slides / other)


Date

Agenda/Minutes

Thu, Sep 28 @ 4:00 pm

Zoom Recording Link: https://zoom.us/rec/play/9wmVWdg8wlCuv3E8CVNfKI4uxZA-lHC5RCdwZikVHj4zb3cvvQVw7sE0DQ2vw7XgXT2UgmrFelOa3FEW._gQfopF1nWY820Pf?canPlayFromShare=true&from=share_recording_detail&continueMode=true&componentName=rec-play&originRequestUrl=https%3A%2F%2Fzoom.us%2Frec%2Fshare%2FWCKI--kYX2WJWEm_pa39jWYw8YCxMxxpFc5nHXqzXXaE6Uo_6SUQMLuX1rznqX8s.LBD6697U22Q-rk55


The recording can also be accessed at http://openprofile.dev

Trusted AI Committee


 

* Preparing for September 7 TAC session
* Migration to LFX
* Anything on Generative AI

Attachment: GMT20230824-140219_Recording_3686x2304.mp4

 

Friday August 11, at 10am US Eastern 

 Working Session to prepare for the TAC Trusted AI Committee presentation on Thursday September 7 at 9am US Eastern

 

  • Agenda:
  • Suparna @Suparna Bhattacharya & Gabe @Rodolfo (Gabe) Esteves et al. report on CMF developments with ONNX - 10 mins
  • Suparna @Suparna Bhattacharya & Vijay @Vijay Arya et al. report on CMF developments with AI-Explainability-360 - 10 mins
  • Vijay @Vijay Arya on what's new in AI-Explainability-360 https://github.com/Trusted-AI/AIX360/releases including the time series feature - 10 mins
  • Question to @Vijay Arya from @Jen Shelby : Is this something we want to create a social post for?
  • Andreas's guest @Andreas Fehlner (Martin Nocker) - Homomorphically Encrypted Machine Learning with ONNX models - 15 mins
  • Review the TAC materials - 10 mins (Initial draft agenda attached in slack) - We'll discuss the date and the content
  • Adrian @Adrian Gonzalez Sanchez, Ofer @Ofer Hermoni, Ali @Ali Hashmi, Phaedra @Phaedra Boinodiris - Blog news and anything else
    --------------
    Presentation (Martin Nocker, 15 min): HE-MAN – Homomorphically Encrypted MAchine learning with oNnx models. Machine learning (ML) algorithms play a crucial role in the success of products and services, especially with the abundance of data available. Fully homomorphic encryption (FHE) is a promising technique that enables individuals to use ML services without sacrificing privacy. However, integrating FHE into ML applications remains challenging: existing implementations lack easy integration with ML frameworks and often support only specific models. To address these challenges, we present HE-MAN, an open-source two-party machine learning toolset. HE-MAN facilitates privacy-preserving inference with ONNX models and homomorphically encrypted data. With HE-MAN, both the model and the input data remain undisclosed. Notably, HE-MAN offers seamless support for a wide range of ML models in the ONNX format out of the box. We evaluate the performance of HE-MAN on various network architectures and provide accuracy and latency metrics for homomorphically encrypted inference.
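
For orientation only, the sketch below shows ordinary plaintext ONNX inference with onnxruntime, i.e. the baseline workflow that an FHE toolset such as HE-MAN wraps with encryption; it does not use HE-MAN's own API, and the model path, input name, and input shape are hypothetical placeholders.

# Plaintext ONNX inference with onnxruntime (no homomorphic encryption here).
# "model.onnx" and the input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")            # load an exported ONNX model
input_name = session.get_inputs()[0].name               # name of the first input tensor
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # dummy input batch

outputs = session.run(None, {input_name: x})            # run inference on plaintext data
print(outputs[0].shape)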

Trusted_AI_Committee_2023_07_27_Handout_ONNX_Nocker.pdf

TrustedAI_20230727.mp4 - Topics included CMF developments with ONNX and Homomorphically Encrypted Machine Learning with ONNX models


 

CMF and AI Explainability led by Suparna Bhattacharya and Vijay Arya - with MaryAnn, Gabe and Soumi Das
ACTION for the Committee - identify 2 use cases that drive the integration of ONNX, CMF, Explainability - illustrating the benefits 
Trusted AI Working Session-CMF and AI Explainability-20230713.mp4

Attachment: Trusted AI Working Session-CMF and AI Explainability-20230713.txt

 


MarkTechPost,  Jean-marc Mommessin

Active and Continuous Learning for Trusted AI, Martin Foltin 

AI Explainability, Vijay Arya

Recording (video) Zoom

 

  • Open Voice Network Follow-On - 20 minutes @Lucy Hyde invites John Stine & Open Voice Network folks e.g., Nathan Southern

Recording (video) Zoom

Recording (video) confluence

Slides - ONNX 

Attachment: intel_responsible_ai_at_onnx_--_metadata_for_provenance.pptx

 

Part 0 - Metadata / Lineage / Provenance topic from Suparna Bhattacharya & Aalap Tripathy & Ann Mary Roy & Professor Sourangshu Bhattacharya & Team
Part 1 - Open Voice Network - Introductions https://openvoicenetwork.org
  • The Open Voice Network: voice assistance worthy of user trust, created in the inclusive, open-source style you'd expect from a community of The Linux Foundation.

Part 2 - Identify small steps/publications to motivate concrete actions over 2023 in the context of these pillars:
Technology | Education | Regulations | Shifting power : Librarians / Ontologies / Tools
Possible publications / blogs:
  • Interplay of Big Dreams and Small Steps
  • Inventory of trustworthy tools and how they fit into the areas of the ACT (Metadata, Lineage and Provenance tools in particular)
  • Giving power to people who don't have it - Phaedra @Phaedra Boinodiris and Ofer @Ofer Hermoni
    -- Why give power; the vision - including why it is important to everyone, including companies
  • More small steps to take / blog articles to write
Part 3 - Review goals of committee taken from https://lf-aidata.atlassian.net/wiki/display/DL/Trusted+AI+Committee - including whether we want to go ahead with badges
  • Overview
  • Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee work develops.
  • Focus of the committee is on policies, guidelines, tooling and use cases by industry
  • Survey and contact current open source Trusted AI related projects to join LF AI & Data efforts
  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data
  • Create a document that describes the basic concepts and definitions in relation to Trusted AI and also aims to standardize the vocabulary/terminology
Part 4 - Any highlights from the US Senate Subcommittee on the Judiciary - Oversight on AI hearing

Part 5 - Any Other Business


Recording (video)

Attachment: Sourangshu Bhattacharya_LFAI-presentation (1).pdf

 

Join the Trusted AI Committee at the LF-AI for the upcoming session on April 27 at 10am Eastern where you will hear from:

  1. Adrian Gonzalez Sanchez: From Regulation to Realization – Linking ACT (European Union AI Act) to internal governance in companies
  2. Phaedra Boinodiris: Risks of generative AI and strategies to mitigate
  3. All: Explore what was presented and suggest next steps
  4. All : Update the Trusted AI Committee list https://lf-aidata.atlassian.net/wiki/display/DL/Trusted+AI+Committee
  5. Suparna Bhattacharya: Call to Action 

------------------------------------------------------------------------------------
We all have prework to do! Please listen to these videos:


We look forward to your contributions

Recording (video)

Recording (audio)

 

Proposed agenda (ET)

  • 10am - Kick off Meeting
    • Housekeeping items:
      • Wiki page update
      • Online recording
      • Others
  • 10:05 - Generative AI and New Regulations - Adrian Gonzalez Sanchez 
    • Presentation (PDF): 20230406 - Generative AI and New Regulations - Adrian Gonzalez Sanchez.pdf


  • 10:15 - Discussion
  • 10:30 - Formulate any next steps
  • 10:35 - News from the open source Trusted AI projects
  • 10:45 - Any other business

Call Lead: Susan Malaika 

 

Invitees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, David Radley, Adrian Gonzalez Sanchez

Optional: Ofer Hermoni , Nancy Rausch, Alejandro Saucedo, Sri Krishnamurthy, Andreas Fehlner, Suparna Bhattacharya

Attendees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, Adrian Gonzalez Sanchez, Ofer Hermoni, Andreas Fehlner


Discussion

  • Phaedra - Consider: Large Language Model opportunities and risks in the context of Trusted AI, and how to mitigate the risks
  • Adrian: European Union AI Act https://artificialintelligenceact.eu/
  • Suparna - What does this mean for foundation models in general, of which language models are one example? Another related area in this context is data-centric trustworthy AI
  • Alexy - Science: more work is needed on scientific understanding (e.g., validation in a medical context); software engineering is ad hoc and driven by practice
  • Fast Forward - what's next for ChatGPT

  • Andreas: file formats for models - additional needs for Trustworthy AI, in addition to Lineage
  • Idea: Create a PoV - Trustworthy AI for Generative applications - take the AI Act approach
  • Gaps in the EU AI Act: https://venturebeat.com/ai/coming-ai-regulation-may-not-protect-us-from-dangerous-ai/ is a useful source


Next steps

  • Set up a series of calls through the LF-AI Trusted AI mechanisms to have the following presenters
  • Run 3 sessions with presentations
  • Then create a presentation and/or document
  • Create the synthesis - A Point of View on Trustworthy AI for Generative Applications

  • Occasionally the open source project leaders are invited to the call …
  • ACTION: Adrian will schedule next meeting

 

Susan Malaika (malaika@us.ibm.com) has scheduled a call on Monday, October 31, 2022 to determine next steps for the committee due to a change in leadership - please connect with Susan if you would like to be added to the call.

The group met once a month, on the third Thursday at 10am US Eastern. See the notes below for prior calls. Activities of the committee included:

  • Reviewing all trusted AI related projects at the LF-AI and making suggestions - e.g.,
  • AI Fairness 360
  • AI Explainability
  • Adversarial Robustness Toolbox
  • Related projects such as Egeria, Open Lineage etc
  • Reviewing the activities of the subgroups - known as working groups - and making suggestions
  • MLSecOps WG
  • Principles WG (completed)
  • Highlighting new projects that should/could be suitable for the LF-AI
  • Identifying trends in the industry in Trusted AI that should be of interest to the LF-AI
  • Initiating Working Groups within the Trusted AI Committee at the LF-AI to address particular issues


Reporting to:

  • The LF-AI Board of Governors on the activities of the Committee and taking guidance from the board - next meeting on Nov 1, 2022
  • The LF-AI TAC - making suggestions to the TAC and taking guidance


Questions:

  • Should the Trusted AI Committee continue to meet once a month with similar goals?
  • Who will:
    • Identify the overall program and approach for 2023 - should that be the subject of the next Trusted AI Committee call?
    • Host the meetings?
    • Identify the speakers?
    • Make sure everything is set with speakers and the community?
  • Should the Trusted AI Committee take an interest in the activities of the PyTorch Consortium?


Invitees and interested parties on the call on October 31, 2022

  • HPE: Suparna Bhattacharya
  • IBM: Beat Buesser, David Radley, Christian Kadner, Ruchi Mahindru, Susan Malaika, Cheranellore (Vasu) Vasu, William Bittles
    • Beat leads the Adversarial Robustness Toolbox - a graduated project at the LF-AI
    • David works on the Egeria project - a graduated project at the LF-AI
    • William is involved in OpenLineage
    • Susan co-led the Principles WG - a subgroup of the Trusted AI Committee - work completed
  • Institute for Ethical AI: Alejandro Saucedo (also at Seldon) - leads the MLSecOps Working Group, a subgroup of the Trusted AI Committee
  • QuantUniversity: Sri Krishnamurthy
  • SAS: Nancy Rausch - currently chair of the LF AI & Data TAC
  • Trumpf: Andreas Fehlner

 

Recording (video)

Recording (audio)

Principles report (repeats)

Recording (audio)

Recording (video)

LFAI Trusted AI Committee Structure and Schedule: Animesh Singh

Real World Trusted AI Use Case and Implementation in the Financial Industry: Stacey Ronaghan

AIF360 Update: Samuel Hoffman

AIX360 Update: Vijay Arya

ART Update: Beat Buesser

 

Recording (audio)

Recording (video)

Setting up Trusted AI TSC

Principles Update

Coursera Course Update

Calendar discussion - Europe and Asia Friendly

Recording

Walkthrough of LFAI Trusted AI Website and github location of projects

Trusted AI Video Series

Trusted AI Course in collaboration with University of Pennsylvania

Recording

Z-Inspection: A holistic and analytic process to assess Ethical AI - Roberto Zicari - University of Frankfurt, Germany

Age-At-Home - an exemplar of Trusted AI - David Martin, Hacker in Charge at motion-ai.com

Plotly Demo with SHAP and AIX360 - Xing Han, Plot.ly

  • Explain the Tips dataset with SHAP: Article, Demo
  • Heart Disease Classification with AIX360: Post, Demo
  • Community-made SHAP-to-dashboard API: Post
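
For readers new to SHAP, the following minimal sketch shows the kind of per-feature explanation the demos above build dashboards around; the synthetic data and model are placeholders rather than the Tips or heart-disease datasets.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Illustrative data standing in for a real tabular dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

print(shap_values.shape)             # (n_samples, n_features)
# shap.summary_plot(shap_values, X)  # uncomment for the usual summary plot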

Swiss Digital Trust Label - short summary - Romeo Kienzler, IBM

  • Announced by Swiss President Doris Leuthard at WEF 2nd Sept. 2019
  • Geneva based initiative for sustainable and fair treatment of data
  • Among others, these companies and organizations are already involved: Google, Uber, IBM, Microsoft, Facebook, Roche, Mozilla, Booking.com, UBS, Credit Suisse, Zurich, Siemens, IKRK, EPFL, ETH, UNO
  • Booking.com, Credit Suisse, IBM, Swiss Re, SBB, Kudelski and the Canton of Waadt are to deliver a pilot

Watson OpenScale and Trusted AI - Eric Martens, IBM

LFAI Ethics Training course 
https://github.com/lfai/ai-ethics-training/blob/master/ai-ethics-outline.md

Recording:

Attachment: LFAI TAIC 20200723 Openscale and Video zoom_0.mp4

Slack: https://lfaifoundation.slack.com/archives/CPS6Q1E8G/p1595515808086900

  • Montreal AI Ethics Institute Presentation

  • Status on Trusted AI projects in Open Governance

  • Principles Working Group update - Susan Malaika

  • Trusted AI Committee activities summarization for Governing board - Animesh Singh

Agenda

  • Swaminathan Chandrasekaran, KPMG Managing Director, will talk about how KPMG is working with practitioners in the field on their AI governance and Trusted AI needs.

  • Susan Malaika from IBM will give an update on the Principles Working Group and its progress.

Agenda

  • Saishruthi Swaminathan to do a presentation on AI Transparency in Marketplace

  • Francois Jezequel to present on the Orange Responsible AI initiative.

Agenda

  • Andrew and Tommy did a deep dive into Kubeflow Serving and Trusted AI integration

  • Principles Working Group discussion

Proposed Agenda

  • AI for People is focused on the intersection of AI and society, with a lot of commonality with the focus areas of our committee. Marta will be joining to present their organization and what they are working on.

  • Proposal of a use case to be tested by AT&T using Apache NiFi and AIF360 (Romeo)

Proposed Agenda

  • Introduction to baseline data set for AI Bias detection (romeo)

  • Exemplar walk-through: retrospective bias detection with Apache Nifi and AIF360 (romeo)

  • Principles Working Group Status Update (Susan)

Proposed Agenda

Discuss AIF360 work around SKLearn community (Samuel Hoffman, IBM Research demo)

Discuss "Many organizations have principles documents, and a bit of backlash - for not enough practical examples."

Resource

  • Watch updates on production ML with Alejandro Saucedo done with Susan Malaika on the Cognitive Systems Institute call: https://www.youtube.com/watch?v=nuvRMFM8USE

Notes from the call:



Proposed Agenda

  • Meeting notes are now in GitHub here: https://github.com/lfai/trusted-ai/tree/master/committee-meeting-notes

  • Since we don't record and share our committee meetings, should our committee channel in Slack be made private for asynchronous conversation outside these calls?

  • Introduction of MLOps  in IBM Trusted AI projects

  • Design thinking around integrating Trusted AI projects in Kubeflow Serving

Notes from the call:

 

https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191212.md

Proposed Agenda:

  • Jim to get feedback from LFAI Board meeting

  • Romeo to demo the AIF360-NiFi integration + feedback from his talk at OSS Lyon

  • Alka to present AT&T Working doc

  • Discuss holiday week meeting potential conflicts (28 Nov - US Holiday, 26 Dec - Day after Christmas)

Notes from call:


https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191114.md

Attendees:

Ofer, Alka, Francois, Nat, Han, Animesh, Jim, Maureen, Susan, Alejandro

Summary

  • Animesh walked through the draft slides (to be presented in Lyon to LFAI governing board about TAIC)

  • Discussion of changes to make

  • Discussion of members, processes, and schedules

Detail

  • Jim will put slides in Google Doc and share with all participants

  • Susan is exploring a slack channel for communications

  • Trust and Responsibility, Color, Icons to add Amdocs, Alejandro's Institute

  • Next call Cancelled (31 October) as many committee members will be at OSS EU and TensorFlow World

Notes from the call:

https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191017.md

Attendees:

Animesh Singh (IBM), Maureen McElaney (IBM), Han Xiao (Tencent), Alejandro Saucedo, Mikael Anneroth (Ericsson), Ofer Hermoni (Amdocs)

Animesh will check with Souad Ouali to ensure Orange wants to lead the Principles working group and host regular meetings. Committee members on the call were not included in the email chains that occurred so we need to confirm who is in charge and how communication will occur.

The Technical working group has made progress but nothing concrete to report.

A possible third working group could form around AI Standards.

Notes from the call:

https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20191003.md

 

Attendees:


Ibrahim H., Nat S., Animesh S., Alka R., Jim S., Francois J., Jeff C., Maureen M., Mikael A., Ofer H., Romeo K.

  • Goals defined for the meeting:

Working Group Names and Leads have been confirmed:

  • Principles, lead: Souad Ouali (Orange France) with members from Orange, AT&T, Tech Mahindra, Tencent, IBM, Ericsson, Amdocs.
  • Technical, lead: Romeo Kienzler (IBM Switzerland) with members from IBM, AT&T, Tech Mahindra, Tencent, Ericsson, Amdocs, Orange.
  • Working groups will have a weekly meeting to make progress. The first read-out to the LF AI governing board will be on Oct 31 in Lyon, France.
  • The Principles team will study the existing material from companies, governments, and professional associations (IEEE), and come up with a set that can be shared with the technical team for feedback as a first step. We need to identify and compile the existing materials.
  • The Technical team is working on an Acumos + Angel + AIF360 integration demonstration.

Possible Discussion about third working group

Discussion about LFAI day in Paris


More next steps

Will begin recording meetings in future calls.


Notes from call: https://github.com/lfai/trusted-ai/blob/master/committee-meeting-notes/notes-20190919.md