Ok, no I'm not. I'm pretty sure I'm a human being. At least based on what I can understand.
But as I keep interacting with LLMs (mostly for research and development tasks), and as I learn more about how they work, I sometimes pause and wonder whether I (and this could extend to other people, but I will speak from my own POV) might be some sort of LLM.
The idea of explaining the brain in terms of the latest invention is not new. I stumbled upon it in the book "The Idea of the Brain", which mentions how, at one point, the brain was seen as a steam engine. Before that, it was clocks, and before clocks, hydraulics. We keep doing this with every new thing we build. We tend to reflect ourselves in what we build, and then what we built helps us explain how we are made, even when that's not accurate.
So, here we are again, with AI, and specifically large language models. I see parallels between how they work and act and how I perceive myself working and acting.
I learn and forget most of what I learn, but the information stays
LLMs and other ML models are trained on massive amounts of data. Maybe, at this point, they are trained on all human knowledge and are already a repository of every single thing we have documented in any shape or form. However, it is not easy, and in many cases impossible, to trace back and ask an LLM to pinpoint where its ideas come from.
An LLM knows about economics and computer science, and can write an article on either, but it won't be able to say where each idea comes from. It might reference some common books or articles, but the answers it produces are not a 1:1 copy of what any book described. They are a mix of that book, the context, and the other references it consumed, and of how and when it consumed them.
So, I'm like that. I try to read a lot. In the past five years, I've read and tracked over 300 books. Maybe more.
But I don't remember all of it. Some books I might not even recognize at all. And it has happened: I've been to a bookstore or library, picked a book, started reading, then realized I was having déjà vu, gone to my list of read books, and seen that I'd already read it!
However, some of that knowledge is there. At work, in personal situations, in everything I do, I'm referencing my training: books, conversations, experiences, video games, everything. But I can't pinpoint what makes me make a decision. I keep frameworks to help me, but I'm sure there's more that is either forgotten, or stored and retrieved somewhere, though I don't know how or where.
I hallucinate
This is a big thing with LLMs and other models, including vision models. We say "they hallucinate". But I hallucinate too. And all the time! The memories I reference, the knowledge, the data: it is all a hallucination, in the sense that it is based on an internal narrative I'm building on the fly. I have memories of "my first day at work", and I have told the story multiple times. But I'm sure some details are not the same as what I experienced back then.
LLMs and I both use a technique to retrieve that data and make it more accurate: searching for it. LLMs may use some sort of vector search (as they usually do in retrieval setups), and in my case the process is something like:
"Ah, I remember reading something related to game feel, also I played a game that was known for its great gameplay, and there was an online presentation that mentioned that too".
Then, I go through my list of notes, read books, and playlists, and if I find the stuff, I re-read, re-watch, re-play, or read others' works on the topic. And "refresh" my knowledge.
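The LLM side of this retrieval can be sketched as a toy vector search. This is a minimal illustration, not how any production system works: the "memories" and their embedding numbers below are made up, and real systems use learned embeddings with hundreds or thousands of dimensions.

```python
import math

# Toy "memory" store: notes mapped to hypothetical embedding vectors.
# The vectors here are invented for illustration only.
memory = {
    "article on game feel":       [0.9, 0.1, 0.0],
    "talk about gameplay polish": [0.8, 0.3, 0.1],
    "notes on economy design":    [0.1, 0.9, 0.2],
}

def cosine(a, b):
    # Cosine similarity: how closely two vectors point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def recall(query_vec, k=2):
    # Rank stored memories by similarity to the query, most similar first.
    ranked = sorted(memory, key=lambda note: cosine(query_vec, memory[note]),
                    reverse=True)
    return ranked[:k]

# A fuzzy query "about game feel" surfaces the closest memories.
print(recall([1.0, 0.2, 0.0]))
```

The human version is fuzzier, but the shape is the same: a vague query pulls up the nearest things, and then I go re-read the originals.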
But still, it is hallucinated, because if I read an author's article on game feel, I may understand it differently from what the author intended. The same happens with LLMs.
Of course, the mechanisms are different; LLMs predict the next token, and I'm reconstructing memories with emotion and context. But the result feels similar: confident output that isn't always accurate.
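As an aside, "predicting the next token" can be illustrated with a toy bigram model. This is an assumption-laden sketch: real LLMs use neural networks over learned token vocabularies, not raw word counts, but the core idea of "pick a likely continuation" is the same.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which.
corpus = "i read a book then i read another book then i played a game".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Greedy decoding: return the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("i"))  # "read": seen twice after "i", vs "played" once
```

Scale the counts up to a neural network over trillions of tokens and you get something that sounds fluent and confident, whether or not it is right.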
I don't think that's bad. It is the way we work, and it is how we modeled these models to work.
In AI-generated images, many of them horrible, we see a lot of hallucination artifacts, especially in the backgrounds. But as I'm typing this, the only things I see clearly are the words next to my cursor. The rest is blurry, and I'm sure my brain is hallucinating it, not rendering it perfectly based on what my eyes perceive.
I can be tricked
Yes, if you are nice to me. If I care about what you are working on, if there's a bond between us. I will act differently from how I would in a different circumstance.
I do try to have a compass for how to behave, but I can be tricked! The same way an LLM, with the right prompting, can be tricked into acting differently.
I guess, in human terms, that's a way of saying: treat others the way you would like to be treated.
That's how I can be prompted: by tricking me. That's how we get motivated and motivate others, and how influence works: provide enough prompting to guide in a direction, and then let the AI... sorry, human, continue.
Ok, I'm not convinced I'm an LLM, but it is fun thinking I am. And if nothing else, it helps me be more patient with them when they get things wrong.