Image showing AI and ops: fiction vs reality

What Murderbot tells us about AI

Note: I will refer to both the books and the Apple TV show, both of which I recommend. There are minor spoilers below. If you haven’t read or watched them yet and intend to, go do that first (at least through episode 6), then come back and read this.

As an avid reader, I flew through The Murderbot Diaries series by Martha Wells in preparation for the show’s premiere. I’ve also been reading a lot about AI for work. The combination of those two has led me to spend a lot of time thinking about how AIs are represented in fiction and how that’s relevant to what’s happening with AI today. If I were still writing academic papers on popular culture, I’d have a gold mine of material. As a replacement, this blog.

Me trying to train a chatbot like it’s a new teammate.

1. AI is really not like humans.

Fiction: Murderbot is a SecUnit (Security Unit) created to defend humans, often from themselves and each other, against risk. It really enjoys (or if that’s too anthropomorphic, gets satisfaction from) flexing those skills, as when (in the show), it defends its assigned humans by killing Leebeebee. But that’s largely because it’s fulfilling its true purpose.

Truth: AI has no true purpose, nor does it derive satisfaction. It also lacks the human ability to learn over time. Unlike a new employee, AI isn’t going to get more and more context over time and start to apply that context in more situations (yet). AI still only retains information for that session, then it starts anew like an amnesiac. That’s not a characteristic we’d want in a human employee.

2. But humans like to think it is.

Fiction: The Preservation Alliance team is completely shocked when, without discussion, without warning, Murderbot kills Leebeebee. As Murderbot narrates, the humans thought it was becoming more like them. Their experience told them what to expect from their SecUnit (protecting them from danger), but did not expect it to act without discussion and debate first (which is what they would have done). Murderbot wasn’t becoming more like them, it was doing what it was programmed to do: reduce or remove risk to its assigned humans.

Truth: People, my dental hygienist included, use AI to get answers online and they assume that those answers are factually correct. While we were talking recently, she was surprised to find out that AI is designed to sound convincingly human, not to be accurate. Her expectations were that of searching a reference book or a knowledgeable person: she thought AI was more like her. It’s not (yet).

3. There are different types of AI for different uses.

Fiction: In The Murderbot Diaries, there are different types of units for different purposes. In the books, Murderbot explains that you would not send a CombatUnit into a situation where a SecUnit is the more appropriate tool. Despite Leebeebee’s attempt to kiss Murderbot as it leaves for a mission, Murderbot is unresponsive: it lacks, it narrates later, both genitalia and endorphins. It’s not a ComfortUnit and it’s just not programmed (or built) that way.

Assigning an LLM to do a machine learning job.

Truth: Some AI initiatives are failing because the wrong type of AI is being applied. It’s not going to work to use an LLM for something better suited to machine learning. You still need the right tool for the job. AI is being used to assist with coding, for writing, for summarizing and these different uses all have different requirements. This has given rise to such sites as https://theresanaiforthat.com/, where you can find and implement the right tool for the job.

4. They work from the inputs they receive.

Fiction: Having emancipated itself from its governor module, Murderbot has discovered streaming video. And its preference is to watch its soap operas, most specifically The Rise and Fall of Sanctuary Moon (hilariously sent up as a Star Trek spoof in the show, including John Cho as the captain). When confronted with a crisis, Murderbot uses dialog directly from some of the shows it’s watched. It’s so committed to the script (the input it received) rather than the context that, in a tight spot, it initially uses the exact line from Sanctuary Moon, referencing those characters, rather than using the names of the individuals with whom it is contracted. Its actions are defined by its inputs.

When your AI model is quoting its training data mid-crisis.

Truth: It’s well noted that AI can have biases. Amazon had an issue with this when the data set they were analyzing for an experimental HR model was based on a primarily male candidate data set because the prior 10 years in tech were dominated by male candidates. As a result, the AI outputs discarded identifiably female candidates as unsuitable. The project was cancelled.

5. They are elusive and will avoid things they don’t “want”.

Fiction: Murderbot hates eye contact; it makes it uncomfortable and anxious. As a result, it will only make eye contact when ordered to do so. Murderbot will, at times, appear to be looking at someone or something, only to narrate that it’s actually not looking out of the eyes on its body, but using one of the security cameras of a ship or station instead so it can come from an angle. It can be elusive.

Truth: AI can also be elusive. It will provide responses that sound legitimate, but in fact have no basis in fact. While it’s standard to use the term “hallucination” for this, there are arguments that it is, in fact, actually bullshit.

6. They reflect the flaws of humans who program them.

Fiction: Murderbot is, in its own words, anxious and struggles to deal with emotions. In that, it’s like its human creators.

Truth: AI does what it’s coded to do. As previously mentioned, the current AI models are meant to sound like they could plausibly be human. If they were coded for accuracy, you might get entirely different results. For example, you might get fewer results or more cases where an AI engine would be unable to complete a task, but you might have a much higher rate of accuracy and not have to worry if the AI you used to help write a court brief cited fictitious historical cases.

7. They can be corrupted and tricked.

Fiction: Murderbot has a data port at the base of its skull. Once a program or virus is introduced, it doesn’t have a choice about whether to follow that program or not.

Truth: AI models can be corrupted. If AI can (and does) make errors, AI is creating web content, and AI is using web content to create models, then there could be compounding effects once things start to go awry. This leaves aside any intentionally malicious infiltrators. It also leaves aside situations where the human provider of the input data intentionally plants traps that will trip up AI when it creates summaries for other humans to read.

One bad upload, and your friendly bot goes full rogue SecUnit.

8. They will eventually share information amongst themselves, without human intervention.

Fiction: Eventually, Murderbot gets to the point where it’s willing to share the code and method that it used to thwart its governor module with other SecUnits. And does so. What those other SecUnits do with that ability is not revealed.

Truth: We’re not at the point where AI is retaining information across sessions and sharing information across and within systems. LLMs, machine learning, and other models aren’t there yet, but it is the direction of things. And we simply don’t know what will happen when AI gets there.

9. It’s in the future.

Fiction: Murderbot is set in the future. Humans have left an Earth of questionable viability behind and live on a variety of planets with a variety of political and belief systems.

Truth: AI is widely available. You can use it for simple things, but it’s not going to steal your job (yet). But it’s also going to make it easier to do some aspects of your job and it’s going to help consolidate information – you’re just going to need to check the outputs. AI can produce outputs at a tremendous rate, but output checking is the limiting factor. If AI is meant to increase efficiency, but only people can do that checking, there are limits on what can be done with AI. When AI will be able to move beyond those limits is unclear, but people are certainly thinking about it.

Note: No AI was used in the writing of this blog. But stay tuned, my colleague, Dominic Freschi

Want to explore the real-world side a bit more with AI?

Fictional AIs like Murderbot make for great stories, but real-world tools work a little differently. If you’re working with AI, or just curious about how it operates today, it helps to start with the basics.

The AI prompt engineering handbook is a helpful guide to writing better prompts for data automation. It introduces the fundamentals of prompt engineering so you can give clearer instructions and get more consistent, usable output.

Download it and see what’s real and what’s still science fiction.