The Wisconsin AI Safety Initiative is a community that aims to mitigate the risks that increasingly capable artificial intelligence brings to the world. (see our programming and apply here!)
Already, we can see unintended negative consequences of AI advancement causing harm today: discrimination in housing, healthcare, and policing; political deepfakes and misinformation; and the empowerment of oppressive regimes' surveillance capabilities.
For the same reasons that preventing these outcomes is difficult – alignment and governance challenges – we can expect even larger-scale risks on the horizon.
In the nearer term, we can expect AI technology to empower bad actors, potentially enabling them to engineer novel bioweapons that could spark pandemics, launch cyberattacks at an unprecedented scale, and more.
In the longer term, we ought to worry about risks from autonomous, self-directed AI agents: agents that are deceptive, behaving well under observation while pursuing their true goals when unsurveilled, or power-seeking, instrumentally disempowering humanity in order to more optimally pursue their terminal goals.
Come join the Wisconsin AI Safety Initiative to gain a holistic understanding of these risks and build the skills to address them through AI alignment and AI governance.
See our infographic above for more information and apply here :)