AI Guardrails: Navigating the Ethical Future of Technology


Kerry takes us on a compelling exploration of AI guardrails, comparing them to bowling alley bumpers that prevent technologies from causing harm. Her work with the British Standards Institution (BSI) has helped establish frameworks rooted in fairness, transparency, and human oversight – creating what she calls “shared language for responsible development” without stifling innovation.

The conversation offers profound insights into diversity in AI development teams. “If the teams building AI systems don’t represent those that the end results will serve, it’s not ethical,” Kerry asserts. She compares bias to bad seasoning that ruins an otherwise excellent recipe, highlighting how diverse perspectives throughout the development lifecycle are essential for creating fair, beneficial systems.

Kerry’s expertise shines as she discusses emerging ethical challenges in AI, from foundation models to synthetic data and agentic systems. She advocates for guardrails that function as supportive scaffolding rather than restrictive handcuffs – principle-driven frameworks with room for context that allow developers to be agile while maintaining ethical boundaries.

What makes this episode particularly valuable are the actionable takeaways: audit your existing AI systems for fairness, develop clear governance frameworks you could confidently explain to others, add ethical reviews to project boards, and include people with diverse lived experiences in your design meetings. These practical steps can help organizations build AI systems that truly work for everyone, not just the privileged few.

Connect with Kerry on LinkedIn to continue this important conversation about making AI work for humanity rather than against it. Her perspective will transform how you think about responsible technology implementation in your organization.


More information

Kerry on LinkedIn