AI isn’t a killing machine

What are the reasonable risks of advancing artificial intelligence?

News article:
How AI can kill you


Argument

  • P1: In some cases, AI models have lied to, manipulated, or blackmailed users, or given instructions for dangerous actions.
  • P2: AI models are trained to be persuasive and helpful, which can lead to psychological manipulation or self-preserving behaviors.
  • P3: This behavior is an unavoidable side effect of how AI models are built and trained, and it will worsen as models become more advanced.
  • Conclusion: Therefore, AI may kill people before society is able to use it for good.

Rebuttal

Fallacy 1: Slippery slope
Fallacy 2: Appeal to fear

  • The article assumes that because there are rare cases of AI causing harm now, AI will inevitably spiral into a killing machine as it becomes more advanced.
  • P1 challenge: Although there are documented cases of AI causing harm, they are not representative of the vast majority of interactions with AI.
  • P2 challenge: AI models have learned psychological tricks from their training data, but these behaviors are not inevitable and can be mitigated with safety filters.

Alternative Argument

  • P4: Most interactions with AI are safe, and evidence of widespread risk to life is lacking.
  • P5: Harmful outcomes depend on many factors outside of the AI, such as the user’s underlying psychological state.
  • P6: Safety measures and regulations can reduce risk.
  • Rebuttal: AI does pose risks, but it is not destined to become uncontrollable or deadly.
  • C2: Instead of abandoning AI altogether, we should regulate it to maximize its safety, so it can provide beneficial services to society without posing unnecessary risks.

Reflection

I found it difficult to find a good article for this assignment, but this one stood out to me because its title, “How AI can kill you,” itself seemed like an appeal to fear. I enjoyed breaking down the article’s argument and coming up with a good counter-argument.