GAC Responsible AI Workstream


Feb 22, 2024: The Transformation of the Trusted AI Committee to Responsible AI as a Generative AI Commons Workstream – LFAI & Data (lfaidata.foundation)

Sep 8, 2023: We switched to the new LFX system for meeting scheduling and recording. All previous mailing list subscribers should be listed as LFX Members of the Committee. All recordings should be visible at https://openprofile.dev/.





(Screenshot of the openprofile.dev calendar)





Overview

Below is an overview of the current discussion topics within the Trusted AI Committee. Further updates will follow as the committee's work develops.

  • The committee focuses on policies, guidelines, tooling, and industry use cases

  • Survey and contact existing open source Trusted AI projects, inviting them to join LF AI & Data efforts

  • Create a badging or certification process for open source projects that meet the Trusted AI policies/guidelines defined by LF AI & Data

  • Create a document that describes the basic concepts and definitions related to Trusted AI and standardizes the vocabulary/terminology






Assets






Meetings




Trusted AI Committee Monthly Meeting - 4th Thursday of the month (additional meetings as needed)

  • 10 AM ET USA (reference time; for all other time zones, the conversion must be checked against daylight saving changes)

  • 10 PM Shenzhen, China

  • 7:30 PM India

  • 4 PM Paris

  • 7 AM PT USA (updated for daylight savings time as needed)

Zoom channel:

https://zoom-lfx.platform.linuxfoundation.org/meeting/94505370068?password=bde61b75-05ae-468f-9107-7383d8f3e449












Committee Chairs

| Name | Region | Organization | Email Address | LF ID | LinkedIn |
|------|--------|--------------|---------------|-------|----------|
| Andreas Fehlner | Europe | ONNX | fehlner@arcor.de | @Andreas Fehlner | https://www.linkedin.com/in/andreas-fehlner-60499971 |
| Susan Malaika | America | IBM | malaika@us.ibm.com | @Susan Malaika (but different email address) | https://www.linkedin.com/in/susanmalaika |
| Suparna Bhattacharya | Asia | HPE | suparna.bhattacharya@hpe.com | @Suparna Bhattacharya | https://www.linkedin.com/in/suparna-bhattacharya-5a7798b |
| Adrian Gonzalez Sanchez | Europe | HEC Montreal / Microsoft / OdiseIA | adrian.gonzalez-sanchez@hec.ca | @Adrian Gonzalez Sanchez (but different email address) | https://www.linkedin.com/in/adriangs86 |






Participants

Initial Organizations Participating: IBM, Orange, AT&T, Amdocs, Ericsson, TechM, Tencent



| Name | Organization | Email Address | LF ID |
|------|--------------|---------------|-------|
| Ofer Hermoni | PieEye | oferher@gmail.com | @Ofer Hermoni |
| Mazin Gilbert | AT&T | mazin@research.att.com | ... |
| Alka Roy | Responsible Innovation Project | alka@responsibleproject.com | ... |
| Mikael Anneroth | Ericsson | mikael.anneroth@ericsson.com | ... |
| Alejandro Saucedo | The Institute for Ethical AI and Machine Learning | a@ethical.institute | @Alejandro Saucedo |
| Jim Spohrer | Retired IBM, ISSIP.org | spohrer@gmail.com | @Jim Spohrer |
| Saishruthi Swaminathan | IBM | saishruthi.tn@ibm.com | @Saishruthi Swaminathan |
| Susan Malaika | IBM | malaika@us.ibm.com | sumalaika (but different email address) |
| Romeo Kienzler | IBM | romeo.kienzler@ch.ibm.com | @Romeo Kienzler |
| Francois Jezequel | Orange | francois.jezequel@orange.com | @Francois-J |
| Nat Subramanian | Tech Mahindra | Natarajan.Subramanian@Techmahindra.com | @Natarajanc |
| Han Xiao | Tencent | hanhxiao@tencent.com | ... |
| Wenjing Chu | Futurewei | chu.wenjing@gmail.com | @Wenjing Chu |
| Yassi Moghaddam | ISSIP | yassi@issip.org | @Yassi Moghaddam |
| Animesh Singh | IBM | singhan@us.ibm.com | @Animesh Singh (Deactivated) |
| Souad Ouali | Orange | souad.ouali@orange.com | @Souad Ouali |
| Jeff Cao | Tencent | jeffcao@tencent.com | ... |
| Ron Doyle | Broadcom | ron.doyle@broadcom.com | ... |








Sub Categories

  • Fairness: Methods to detect and mitigate bias in datasets and models, including bias against known protected populations

  • Robustness: Methods to detect alterations/tampering with datasets and models, including alterations from known adversarial attacks

  • Explainability: Methods to enhance the understandability/interpretability of AI model outcomes and decision recommendations for different personas/roles in the process, including ranking and debating results/decision options

  • Lineage: Methods to ensure provenance of datasets and AI models, including reproducibility of generated datasets and AI models
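As a concrete illustration of the Fairness sub-category above, the sketch below computes a simple disparate impact ratio on toy data. This is an assumed example of one common bias metric, not a metric prescribed by the committee; the function name, toy data, and the 0.8 rule of thumb are illustrative assumptions.

```python
def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group.

    outcomes: list of 0/1 decisions (1 = favorable)
    groups:   group label per record
    privileged: label of the privileged group

    Values far below 1.0 suggest bias against the unprivileged group;
    a common rule of thumb flags ratios under 0.8.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Toy data: group "A" is treated as privileged.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25 / 0.75 ≈ 0.33
```

Open source toolkits in this space (e.g., AIF360, hosted at LF AI & Data) implement this and many related fairness metrics with more rigor.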






Projects






Meeting Content (minutes / recording / slides / other)



Date

Agenda/Minutes

Thu, Sep 28 @ 4:00 pm

Zoom Recording Link: https://zoom.us/rec/play/9wmVWdg8wlCuv3E8CVNfKI4uxZA-lHC5RCdwZikVHj4zb3cvvQVw7sE0DQ2vw7XgXT2UgmrFelOa3FEW._gQfopF1nWY820Pf?canPlayFromShare=true&from=share_recording_detail&continueMode=true&componentName=rec-play&originRequestUrl=https%3A%2F%2Fzoom.us%2Frec%2Fshare%2FWCKI--kYX2WJWEm_pa39jWYw8YCxMxxpFc5nHXqzXXaE6Uo_6SUQMLuX1rznqX8s.LBD6697U22Q-rk55



The recording can also be accessed at http://openprofile.dev

Trusted AI Committee



Aug 24, 2023 

* Preparing for September 7 TAC session
* Migration to LFX
* Anything on Generative AI

Aug 11, 2023 

Friday August 11, at 10am US Eastern 

 Working Session to prepare for the TAC Trusted AI Committee presentation on Thursday September 7 at 9am US Eastern

Jul 27, 2023 

Trusted_AI_Committee_2023_07_27_Handout_ONNX_Nocker.pdf

TrustedAI_20230727.mp4 - Topics included CMF developments with ONNX and homomorphically encrypted machine learning with ONNX models



Jul 13, 2023 

CMF and AI Explainability led by Suparna Bhattacharya and Vijay Arya - with MaryAnn, Gabe and Soumi Das
ACTION for the Committee: identify two use cases that drive the integration of ONNX, CMF, and Explainability, illustrating the benefits
Trusted AI Working Session-CMF and AI Explainability-20230713.mp4

Jun 22, 2023 



MarkTechPost,  Jean-marc Mommessin

Active and Continuous Learning for Trusted AI, Martin Foltin 

AI Explainability, Vijay Arya

Recording (video) Zoom

Jun 8, 2023 

  • Open Voice Network follow-on (20 minutes): @Lucy Hyde invites John Stine and Open Voice Network colleagues, e.g., Nathan Southern

Recording (video) Zoom

Recording (video) confluence

Slides - ONNX 

May 25, 2023 

Part 0 - Metadata / Lineage / Provenance topic from Suparna Bhattacharya & Aalap Tripathy & Ann Mary Roy & Professor Soranghsu Bhattacharya & Team
Part 1 - Open Voice Network - Introductions https://openvoicenetwork.org  

  • The Open Voice Network: voice assistance worthy of user trust, created in the inclusive, open-source style you'd expect from a community of The Linux Foundation.


Part 2 - Identify small steps/publications to motivate concrete actions over 2023 in the context of these pillars:
Technology | Education | Regulations | Shifting power : Librarians / Ontologies / Tools
Possible Publications / Blogs

Part 3 - Review goals of committee taken from https://lf-aidata.atlassian.net/wiki/display/DL/Trusted+AI+Committee - including whether we want to go ahead with badges

Part 4 - Any highlights from the US Senate Subcommittee on the Judiciary - Oversight on AI hearing

Part 5 - Any Other Business



Recording (video)

Apr 26, 2023 

Join the Trusted AI Committee at LF AI & Data for the upcoming session on April 27 at 10am Eastern, where you will hear from:


------------------------------------------------------------------------------------
We all have prework to do! Please listen to these videos:


We look forward to your contributions

Recording (video)

Recording (audio)

Apr 6, 2023 

Proposed agenda (ET)

  • 10am - Kick off Meeting

    • Housekeeping items:

      • Wiki page update

      • Online recording

      • Others

  • 10:05 - Generative AI and New Regulations - @Adrian Gonzalez Sanchez 

    • Presentation (PDF) 



  • 10:15 - Discussion

  • 10:30 - Formulate any next steps

  • 10:35 - News from the open source Trusted AI projects

  • 10:45 - Any other business

Call Lead: @Susan Malaika 

Feb 8, 2023 

Invitees: Beat Buesser; Phaedra Boinodiris ; Alexy Khrabov, David Radley, Adrian Gonzalez Sanchez

Optional: Ofer Hermoni , Nancy Rausch, Alejandro Saucedo, Sri Krishnamurthy, Andreas Fehlner, Suparna Bhattacharya

Attendees: Beat Buesser, Phaedra Boinodiris, Alexy Khrabov, Adrian Gonzalez Sanchez, Ofer Hermoni, Andreas Fehlner



Discussion

  • Phaedra: Consider the opportunities and risks of large language models in the context of Trusted AI, and how to mitigate the risks

  • Adrian: European Union AI Act https://artificialintelligenceact.eu/

  • Suparna: What do foundation models mean in general, with language models as one example? Another related area in this context is data-centric trustworthy AI

  • Alexy: Science - more work is needed on scientific understanding (e.g., validation in a medical context); software engineering remains ad hoc, driven by practice

  • Fast Forward - what’s next for ChatGPT

  • Andreas: File formats for models - additional needs for trustworthy AI, beyond lineage