Guardians of Alignment: a new video series on AI Alignment

I've started a new video series in which I interview people researching novel approaches to AI alignment: https://www.youtube.com/playlist?list=PLjMBOCcv2MMWKw409mp1wXzegbYHRw6Oh. In this series, I skip the endless debate about whether future AI is dangerous and focus on solutions, casting a wide net to give voice to ideas beyond the approaches used by the major labs. My hope is to make both the challenges and the strategies surrounding the alignment problem more comprehensible to a wider audience, to facilitate communication between researchers who value safety over unchecked power, and to spark some new ideas.

Check out the series if this interests you, subscribe to hear when new episodes come out, and share whatever you find helpful. I'm also open to suggestions for people to interview, as well as feedback on the channel itself.