Friday, January 10, 2020

The Actually Terrible AIs You Didn't Know You Needed

This post topic is brought to you by 7 A.M. Sarah, who apparently channels her inner Tumblr weird-blogger when she's tired. Have fun. 

What's a sci-fi story without a surprisingly human (or inhumane) AI? Ever since someone realized that you could use ones and zeroes and code to make computers seem like they can think and reason and make decisions and develop consciences, we've been sticking them in narratives left and right. It doesn't matter whether it's strict sci-fi or a sci-fi-adjacent story (like a superhero narrative); a sufficiently cool AI makes everything better. Of course, a competent AI may make things too easy for our intrepid heroes. The solution? No, you don't make the AI the villain. You create an AI in the spirit of Wheatley and the useless box*, one that's mostly useless, yet more or less lovable. And if you have trouble coming up with one, never fear. I have a helpful list of five terrible AI ideas to get you started.


Actually Terrible AIs

  1. Sand(wich)Net. Originally intended for gathering and analyzing information on massive numbers of individuals for purposes of threat detection and defense, this AI, for reasons unknown, instead collected a huge database of people's favorite sandwiches. Despite numerous attempts to train it for other purposes, it always returns to sandwich data. Occasionally, however, its defense protocols will be triggered, at which point it will stop at nothing to keep the detected threat from getting their preferred sandwich type.
  2. InVisionary. This android was developed as a prototype of a new "race" that would live and work alongside humankind. Due to a mix-up in programming, however, it communicates exclusively in motivational speaker quotes and bad web design advice. The project was mostly abandoned after this failure, though the android has developed a small internet fandom, mostly composed of people who believe the quotes it shares online hide a secret message.
  3. Future Explorer. This AI is intended to deliver accurate-as-possible predictions of future events of almost any type. And it does! It's amazingly accurate, in fact! The problem, unfortunately, is that it processes and loads so slowly that every prediction appears at least 24 hours after it would be helpful and/or relevant.
  4. CuriAIsity. Created by the small, optimistic portion of the InVisionary team that didn't abandon the project, this android was intended to serve the same purpose as the original. However, when this android was turned on and its systems connected to the internet, the prevalence of cat pictures and videos on the web led the android to recognize cats and kittens, not humans, as the true masters of society. The android has since dedicated the rest of its existence to helping, serving, and caring for all of catkind that it encounters. Its makers attempted to use it as the foundation of a cat daycare and boarding center, but the center closed after the android refused to give up the cats it was entrusted with.
  5. CARL (Chronological Authority on Relevant Lore). CARL was designed to be a history-teaching tool that would allow students to "interact" with various historical figures, both famous and not. It worked very well until its chronologically-bound linguistic terms database got scrambled. It was quickly retired, but not before convincing a significant number of middle-schoolers of several linguistic improbabilities, notably the idea that George Washington and his contemporaries frequently used the term "Groovy."
Your turn! Share your terrible and useless AIs in the comments; I want to hear what you can come up with! Or just tell me which of these you'd most like to read or write about.
Thanks for reading!
-Sarah (Leilani Sunblade) 

*Yes, I know, it's not actually an AI, but it got the point across, didn't it?

5 comments:

  1. Ahahaha! xDDD This is too funny. I would totally read a story with CARL as a sidekick. Perhaps he would take on different personalities of people from all over history & different eras. xD

    Replies
    1. Oh, absolutely. Probably the personality of the person he believed to be most helpful for the current situation . . . which would probably be the wrong one, but he'd try. xD Glad you enjoyed the post!

  2. I love these. Any one of them would make an excellent character in a sci-fi novel!

    How about this one...

    ORACL (Odds & Realities Articulated with Calculated Likelihoods), an AI designed to evaluate the success or outcome of a hypothetical mission or effort. This AI is extremely adept at its purpose; however, it is incapable of communicating these odds in anything other than obscure metaphors. For example, "The odds of your success are approximately tacos to ice cream in the summer."

    Replies
    1. ...I was curious about how that metaphor would actually translate. Turns out the odds of tacos to ice cream in a year (couldn't find stats for the summer) are about 4.5 billion to 1.4 billion. So... the odds of success are 3 to 1ish. XD

      Delete
    2. Thanks! ORACL sounds amazingly terrible; I would honestly love to see it in a book as well. xD (Also, props to you for translating the metaphor.)


I'd love to hear your thoughts! But remember: it pays to be polite to dragons.