More on AI

I’ve been thinking about LLMs like ChatGPT, which I’m experimenting with right now.

After hazily recalling a dream last night, it occurred to me that, in my experience, LLMs are dreamlike in their oddities and quirks. They put in things that don’t make sense, like in a dream where something out of place sits right there, believed by your sleeping mind.

They’re also stubborn, and seem to have a hard time admitting error.

I’ve definitely been rethinking using my ainimal images for publication or profit. It’s not something I want to do. I expect artificially generated images will take work away from skilled human artists, and that’s a valid concern. I understand this is my personal view, and I also understand that others’ use of such images can be justified.

So why is “dreamlike” a good descriptor? It’s the suspension of disbelief LLMs share with the common experience of dreaming. It’s the apparent assumption by the LLM that everything’s fine and correct, when a human can detect it isn’t.

Another thought about the danger of LLMs: we’ve been damaged by technology in many different ways since the dawn of technology, which was a very long time ago. So the damage being caused by LLMs is just more of the same. Getting upset about it is a common human experience. It hasn’t been the end of the world up to now, and I expect LLMs won’t be the end of the world either.

AI likes the transformer shape, so it puts in extras. And I specified “sparks at the very top of the pole,” but it kept putting them in the middle. AI doesn’t understand power transmission at all.