Robots and AIs are increasingly involved in every facet of human life: transportation, warfare, criminal justice, medicine, and, of course, social media and communication. And while in some cases they promise to be better than we are (fairer, more data-driven, less prone to emotion), they often mirror back to us our own moral blind spots and biases. Perhaps we need to design AI systems that are not only good at their jobs but are also, in a sense, good people: good moral agents. How should we go about doing that? Do we want AI systems that learn from and mimic our messy moral reasoning? Or do we want AI that is morally better than we are? What would that look like, and would we ever be able to take moral guidance from a robot? This CHI salon brings together an interdisciplinary set of scholars to examine the practical and theoretical questions raised by the goal of pursuing more ethical AI.

Panelists: Laura Sizer (Hampshire and MHC, philosophy), Lee Spector (Amherst, computer science), Joseph Moore (Amherst, philosophy), Heather Pon-Barry (MHC, computer science), and Philip Thomas (UMass, computer science).