Responsible AI Leadership Consortium

Expertise for the AI Frontier | Empowering Ethical Innovation

Participation in the Responsible AI Consortium is by invitation

The Responsible AI Leadership Consortium unites leaders of AI strategy and acceleration, along with experts in data science and analytics, from the financial services industry, technology startups, government, and academia. Building trust and credibility in AI is a defining issue of our time.

This collaborative consortium aims to harness the collective wisdom and experience of its members to develop and promote the adoption of ethical, transparent, and accountable AI practices.

Through their combined efforts, these pioneering individuals are working to shape the future of AI, ensuring that its growth and evolution are grounded in a strong foundation of responsibility and shared values.

Leaders from organizations including Goldman Sachs, Bank of America, Morgan Stanley, MasterCard, Truist, Key Bank, Ally Bank, Nationwide Insurance, Randstad, Fox Corporation, Raytheon, Pfizer, and the World Bank are participating in these conversations.

We are excited to channel this collective wisdom into supporting the implementation of Responsible AI by sharing best practices and providing input for meaningful regulation and policymaking.

Activities

Consortium members will gather at a leadership kickoff dinner in New York (April 2023) to network with pioneering leaders and trailblazers and discuss forward-looking thoughts on Responsible AI adoption within industry and government.

Participation

Consortium members and their teams can participate by engaging on Responsible AI leadership issues surrounding an R-AI strategy, algorithmic risk, responsiveness to fast-moving AI trends, drift and bias monitoring, and downstream AI/Data Science and ML-Ops strategies surrounding the following:

Data/Model Drift (Detect & Mitigate)
Bias/Fairness Metrics
Transparency & Explainability (XAI)
Model Risk Management
Model Validation
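As one illustrative sketch of the drift topic above, a Population Stability Index (PSI) check can flag when production data has shifted away from the training data. The function name, binning scheme, and thresholds here are our own assumptions, not Consortium standards:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (e.g. training) sample and a production sample.

    Common rule-of-thumb thresholds (illustrative, not a standard):
    < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch production values below the baseline range
    edges[-1] = float("inf")   # ...and above it

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # small floor avoids log(0) for empty bins
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice the two samples would be a model input feature (or score) at training time versus in production, recomputed on a schedule so drift is detected before it degrades decisions.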

Use cases detailing how a model's black-box problem was solved and how explainability was achieved for LLMs and fine-tuned transformer models will be featured in Consortium-wide literature, gaining wide exposure in industry and government.

Responsible AI in ML Pipeline

AI Model Design

Develop data collection protocols that include considerations for ethical data collection, such as ensuring that data is obtained with consent and is not biased; implement data cleaning and preprocessing techniques that account for fairness, transparency, and accountability; and conduct a data audit to ensure the data is ready for use in machine learning algorithms.
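A data audit of this kind can be sketched in a few lines. This is a minimal, hypothetical example (the function, field names, and 5% representation threshold are our own assumptions) that flags missing values, under-represented groups, and per-group label rates before data enters the training pipeline:

```python
from collections import Counter

def audit_dataset(records, group_key, label_key, min_group_share=0.05):
    """Hypothetical pre-training audit over a list of dict records."""
    findings = []
    n = len(records)

    # 1. Missing values anywhere in the data
    missing = sum(1 for r in records if any(v is None for v in r.values()))
    if missing:
        findings.append(f"{missing}/{n} records contain missing values")

    # 2. Group representation: flag groups below the minimum share
    groups = Counter(r[group_key] for r in records if r[group_key] is not None)
    for g, count in groups.items():
        if count / n < min_group_share:
            findings.append(f"group '{g}' is under-represented ({count}/{n})")

    # 3. Positive-label rate per group (large gaps hint at label bias)
    for g in groups:
        rows = [r for r in records if r[group_key] == g]
        pos = sum(1 for r in rows if r[label_key] == 1)
        findings.append(f"group '{g}' positive rate: {pos / len(rows):.2f}")

    return findings
```

The findings list would feed a human review step; the audit surfaces issues rather than silently "fixing" them.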

Model Deployment

Conduct a model audit to identify any biases or unfairness; test the model for accuracy, fairness, and transparency; and ensure that the model is explainable, interpretable, and easily understood by stakeholders.
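One common fairness test at this stage is demographic parity: comparing positive-prediction rates across groups. A minimal sketch (the function name is our own; many other fairness metrics exist and the right one depends on the use case):

```python
def demographic_parity_gap(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0 means perfect demographic parity.

    y_pred: iterable of 0/1 predictions
    groups: iterable of group labels, aligned with y_pred
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values()), rates
```

A deployment audit would compute this on a held-out test set and compare the gap against a pre-agreed tolerance before the model is released.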

Monitoring & Observability

Monitor the model’s performance regularly; implement a feedback loop to allow users to report any issues or concerns; and develop a plan for handling any ethical or legal concerns that may arise from the model’s deployment.
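The monitoring-plus-feedback idea can be sketched as a small class (the class name, window size, and accuracy threshold are illustrative assumptions): a rolling window of prediction outcomes raises an alert when accuracy drops, and user-submitted reports trigger the same review path.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch: rolling accuracy window plus a user feedback loop."""

    def __init__(self, window=100, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)  # recent True/False outcomes
        self.reports = []                     # user-submitted issues/concerns
        self.min_accuracy = min_accuracy

    def record(self, correct: bool):
        """Log whether a prediction turned out to be correct."""
        self.outcomes.append(correct)

    def report_issue(self, message: str):
        """Feedback loop: users report issues or concerns directly."""
        self.reports.append(message)

    def alert(self) -> bool:
        """True when rolling accuracy is below threshold or a report is open."""
        if not self.outcomes:
            return bool(self.reports)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.min_accuracy or bool(self.reports)
```

In a real deployment the alert would route to the team owning the plan for ethical and legal concerns, rather than just returning a boolean.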

Sustainability

Establish clear criteria for determining when the model needs to be updated or retired to ensure that it remains responsible, ethical, and fair; implement a process for assessing the model’s impact on society and the environment; and develop a plan for ensuring the model’s sustainability over time, including regular updates and maintenance.
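Such update-or-retire criteria can be made explicit in code. A hypothetical sketch (the function, inputs, and every threshold here are illustrative placeholders, not standards) combining drift, fairness, and model age:

```python
def lifecycle_decision(drift_score, fairness_gap, months_since_update,
                       drift_limit=0.25, fairness_limit=0.1, max_age_months=12):
    """Return 'retire', 'update', or 'ok' from simple placeholder criteria.

    drift_score:         e.g. a PSI-style drift statistic
    fairness_gap:        e.g. a demographic-parity gap
    months_since_update: time since last retraining/maintenance
    """
    if drift_score > drift_limit and fairness_gap > fairness_limit:
        return "retire"   # both drifting and unfair: take it out of service
    if (drift_score > drift_limit or fairness_gap > fairness_limit
            or months_since_update >= max_age_months):
        return "update"   # schedule retraining or maintenance
    return "ok"
```

Writing the criteria as a function makes them auditable and versionable, which supports the regular-review process described above.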

Focus Background