We’ve all heard of the trolley problem, and it raises pressing questions: can AI be programmed to make moral decisions? Should it be? And how do we feel about delegating moral decision-making to machines? In this talk I will look at these issues and beyond: who should take responsibility when something goes wrong? What if we don’t know what’s in the “black box”? Can we really delegate moral responsibility to machines? I will also look at ways of ensuring the social and ethical acceptability of the AI we build, to mitigate societal fear of intelligent machines. Don’t expect any easy answers in this talk – but you will leave with some strategies to help you think these problems through and make your own (responsible) decisions.