On LLMs (first of many, I’m guessing)
I’m not a fan of the current capabilities of ChatGPT and its ilk. Three examples:
- Writers I know are suing for copyright infringement because Google, Microsoft, and others have slurped their novels into LLM training data.
- I’m researching a science fiction book and needed facts on oxygen candles (used, for example, in submarines to produce emergency oxygen). I asked the same prompt twice and got two different answers: “…The amount of oxygen…[is] around 1 to 2 pounds of oxygen per hour…” and, “The total amount [is]…in the range of tens to hundreds of liters per hour…” (See the quick unit check after this list.)
- The prompt for the image above was “a jewish israeli and palestinian arab stab each other each holding a bloody knife.” It was a test, not a wish or a political statement; I wanted to see what the AI would “imagine.” I can’t tell which figure is which. Setting the knives aside, both are wearing combat webbing (the one on the left has what looks to be a belt). How could it dream that up when all I specified was knives? Is this what it creates whenever it’s prompted for either group?
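For what it’s worth, the two oxygen-candle answers can only be compared with a unit conversion neither of them offered. Here’s a minimal sketch; the density figure is my assumption (oxygen at standard temperature and pressure), not something either answer supplied:

```python
# Quick unit check (my arithmetic, not the chatbot's). Assumes O2 density at
# standard temperature and pressure; a submarine runs warmer, but the
# ballpark holds.
GRAMS_PER_POUND = 453.592
O2_DENSITY_G_PER_L = 1.429  # oxygen at 0 °C and 1 atm

def lb_per_hour_to_liters_per_hour(lb_per_h: float) -> float:
    """Convert a mass flow of oxygen (lb/h) to a volume flow (L/h)."""
    return lb_per_h * GRAMS_PER_POUND / O2_DENSITY_G_PER_L

low = lb_per_hour_to_liters_per_hour(1)
high = lb_per_hour_to_liters_per_hour(2)
print(f"1-2 lb/h of O2 is about {low:.0f}-{high:.0f} L/h")  # roughly 317-635 L/h
```

So “1 to 2 pounds per hour” works out to roughly 320 to 640 liters per hour, which sits at the very top of “tens to hundreds” while sounding far more precise. Same prompt, two answers that barely overlap.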
Unfortunately, the technology is only going to get better. And while that might clear the cruft of Photoshop manipulators out of the creative industry, it makes me worry about what dark “hallucinations” AIs might be having.