Speaker

Kirstie Whitaker

Programme Director for Tools, Practices and Systems at The Alan Turing Institute

London, United Kingdom

Dr Kirstie Whitaker is a passionate advocate for making science "open for all" by promoting equity and inclusion for people from diverse backgrounds. She leads the Tools, Practices and Systems research programme at The Alan Turing Institute, the UK's National Institute for Data Science and Artificial Intelligence. Kirstie founded The Turing Way, an openly developed educational resource that enables researchers and citizen scientists across government, industry, academia and third sector organisations to embed open source practices into their work. She is the chair of the Turing Research Ethics Panel, co-lead investigator of the AI for Multiple Long Term Conditions Research Support Facility, co-lead investigator of the AutSPACEs citizen science platform, and co-investigator for the Turing's Data Safe Haven, an open source trusted research environment. Kirstie holds a PhD in Neuroscience from the University of California, Berkeley, is a Fulbright scholarship alumna, and was a 2016/17 Mozilla Fellow for Science.

Area of Expertise

  • Health & Medical
  • Physical & Life Sciences
  • Information & Communications Technology

Topics

  • Artificial Intelligence
  • Neuroscience
  • Equality and Diversity
  • Software Engineering
  • Open Data
  • Open Source Software

Operationalising the SAFE-D principles for Open Source AI

The SAFE-D principles (Leslie, 2019) were developed at the Alan Turing Institute, the UK's national institute for data science and artificial intelligence. They have been operationalised within the Turing's Research Ethics (TREx) institutional review process. In this panel we will advocate for the definition of Open Source AI to include reflections on each of these principles and present case studies of how AI projects are embedding these normative values in the delivery of their work.

The SAFE-D approach is anchored in the following five normative goals:

* **Safety and Sustainability** means ensuring the responsible development, deployment, and use of a data-intensive system. From a technical perspective, this requires the system to be secure, robust, and reliable. From a social sustainability perspective, it requires the data practices behind the system's production and use to be informed by ongoing consideration of the risk of exposing affected rights-holders to harms, continuous reflection on project context and impacts, ongoing stakeholder engagement and involvement, and change monitoring of the system from its deployment through to its retirement or deprovisioning.
    * Our recommendation: Open source AI must be safe and sustainable, and open ways of working ensure that "many eyes make all bugs shallow". Having a broad and engaged community involved throughout the AI workflow makes the infrastructure more secure and keeps the purpose of the work aligned with the needs of the impacted stakeholders.
* **Accountability** can include specific forms of process transparency (e.g., as enacted through process logs or external auditing) that may be necessary for mechanisms of redress, or broader processes of responsible governance that seek to establish clear roles of responsibility where transparency may be inappropriate (e.g., confidential projects).
    * Our recommendation: Open source AI should have clear accountability documentation and processes for raising concerns. These are already common practice in open source communities, including through codes of conduct and requests for comment on extensions or breaking changes.
* **Fairness and Non-Discrimination** are inseparably connected with sociolegal conceptions of equity and justice, which may emphasise a variety of features such as equitable outcomes or procedural fairness through bias mitigation, but also social and economic equality, diversity, and inclusiveness.
    * Our recommendation: Open source AI should clearly communicate how the AI model and workflow consider equity and justice. We hope that the open source AI community will embed existing tools for bias reporting into an interoperable open source AI ecosystem; a minimal sketch of one such bias metric follows this list.
* **Explainability and Transparency** are key conditions for autonomous and informed decision-making in situations where data processing interacts with or influences human judgement and decision-making. Explainability goes beyond the ability to merely interpret the outcomes of a data-intensive system; it also depends on the ability to provide an accessible and relevant information base about the processes behind the outcome.
    * Our recommendation: Open source AI should build on the strong history of transparency that is the foundation of the definition of open source: access to the source code, data, and documentation. We are confident that current open source ways of working will enhance transparency and explainability across the AI ecosystem.
* **Data quality, integrity, protection and privacy** must all be established to give confidence that data-intensive systems and models have been developed on secure grounds.
    * Our recommendation: Even where data cannot be made openly available, there should be accountability and transparency around how the data is gathered and used.
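
As an illustration of the kind of bias reporting tools mentioned in the Fairness and Non-Discrimination recommendation above, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between sensitive groups. The data, group labels, and function names are invented for this example and are not part of the SAFE-D framework; established libraries such as Fairlearn and AIF360 provide richer, production-ready versions of these metrics.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per sensitive group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positive count, total count]
    for prediction, group in zip(predictions, groups):
        counts[group][0] += prediction
        counts[group][1] += 1
    return {group: positives / total for group, (positives, total) in counts.items()}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups (0 means parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Invented example: binary model predictions and the sensitive group of each record.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["group_a"] * 4 + ["group_b"] * 4

print(selection_rates(predictions, groups))                # {'group_a': 0.75, 'group_b': 0.25}
print(demographic_parity_difference(predictions, groups))  # 0.5
```

A report like this, published alongside model releases, is one way an open source AI project could make its fairness considerations auditable by its community.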

The agenda for the session will be:

1. Prof David Leslie will give an overview of the SAFE-D principles.
2. Victoria Kwan will present how the SAFE-D principles have been operationalised for institutional review processes.
3. Dr Kirstie Whitaker will propose how the institutional process can be adapted for decentralised adoption through a shared definition of Open Source AI.

The final 20 minutes will be a panel responding to questions and comments from the audience.

Deep Dive: Defining Open Source AI

September 2023
