This interactive session explores how algorithmic bias shows up in the tools your clients use every day — and what coaches can do about it. Drawing on nearly a decade of equity leadership in complex institutions and an emerging specialization in AI governance, Evolve Benton breaks down how automated systems replicate systemic harm and gives coaches a practical equity lens and concrete tools to use in their practice immediately.
Learning objectives
By the end of this session, participants will be able to:
- Identify at least two ways algorithmic bias shows up in tools their clients commonly use and explain the equity implications in plain language.
- Apply a simple audit framework to evaluate AI-generated recommendations before incorporating them into coaching conversations or client guidance.
- Introduce AI ethics as a coaching topic with clients, using equity-informed language that does not require a technical background.
Meet the Host
Evolve Benton, MA, MFA
Evolve Benton is an equity practitioner, speaker, and consultant helping organizations turn good intentions into measurable outcomes. With nearly a decade of leadership experience at UCSF institutions and prior work at UCLA, Evolve has spent their career inside large, complex systems building equity frameworks, leading cultural transformation, and making the invisible visible. Now at the forefront of AI ethics and equity, Evolve is among a small number of practitioners connecting the dots between artificial intelligence, algorithmic bias, and systemic harm.
*By attending ICFGA events and programs in person or virtually, you hereby grant your voluntary consent that your likeness or image, captured by photo or video, or your voice captured by recording, may be used for promotional and archival purposes in print, video and online, or in any other format, without notice to you.