Session

Ethical AI for Software Teams

As AI technology continues to advance, ensuring ethical development and testing is more critical than ever. The presentation explores why human oversight is vital both before and after AI system launches. We delve into the importance of pre-launch testing, where human testers help identify biases, evaluate cultural sensitivity, and fine-tune responses to align with ethical standards.

Our Ethical AI Framework is at the forefront of responsible technology development. In an era where AI's impact on society is profound, our commitment to fairness, transparency, and accountability is unwavering. This presentation delves into the core principles of our framework, which encompasses pre-launch testing, post-launch monitoring, and active user contributions.

We recognize that ethical AI development is an ongoing journey. Pre-launch testing is instrumental in identifying biases, cultural sensitivities, and fine-tuning responses to align with ethical standards. Post-launch monitoring ensures that our AI systems evolve responsibly, and user contributions play a pivotal role in reporting and rectifying issues.

Furthermore, we're excited to share our open-source Ethical AI Testing Framework, accessible on GitHub. This framework provides a structured and collaborative platform for testing AI systems for various biases, ensuring that AI respects diverse perspectives. Join us in shaping the future of AI ethics, where technology and humanity coexist harmoniously. Because of that access to GitHub and our repositories as well as understanding of testing frameworks for python and AI are required.

Gregory Lind

Author of "Radical Therapy for Software Teams", Buildly CEO and Founder

Portland, Oregon, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top