The AI post

What is AI (for me)

Any technique where mechanised statistical inference is performed based on training data. Expert systems are AI, but not here and not now.

Further, generative AI (systems that produce output of the same form as the training set) is distinct from transformational AI (classifiers, translators, etc.).

I’m talking about generative AI.

Is AI good

“AI” is vague. Copilot, as of 10 June 2025, is virtually useless unless the automated completion could have been filled in by copy-pasting from the surrounding text. It could not generate remotely valid Verilog or Bluespec.

I’ve not tried agentic AI coding models.

Qwen 3 235B is an OK search model: within Kagi it is good at finding “follow-on” searches and then giving me the results I would have got after several rounds of refining. 4o mini is not good for this, as the results it produces do not reflect that kind of refinement.

Photoshop’s generative fill is slightly better than the previous Content-Aware Fill, but much, much slower, and I don’t use it for that reason.

Mistral Medium was able to parrot the Stanford Encyclopedia of Philosophy fairly well, but was no better than reading the source (and prone to waffle).

I find it hard to trust AI output, as I value the social relation of someone writing something and building an argument and narrative for themselves for me to understand. The AI model lacks this, as I do not care about a hard drive’s interiority, and so I lack commitment to the output.

Is AI bad

Inference for personal use is basically free. Inference for the booster case, where billions of AI agents bounce off each other in an endless fountain of buggy OAuth implementations, false CVE reports on those implementations, misjudged reviews, and fabricated newspaper articles, can and most likely will consume every marginal gigawatt of spare energy-generating capacity.

This power can and should be used for building durable resources that humans need: nuclear, solar and wind power, rail transport, housing, food and water supply chains, all of which are sensitive to the cost of power and to the allocation of human creativity.

This is not to say inference should be banned, but that it should not be permitted to compete with the resources humans need to survive.

Power for training appears to have peaked. It is not possible to produce meaningfully more GPUs; there is not enough training data to support longer training runs; inference-time scaling was a mirage that fails to meaningfully enhance a model’s ability to construct coherent reasoning.

It’s rude as hell that AI providers want to mediate interpersonal relationships. From Facebook’s AI friends, to OpenAI’s redrafting of my blog with the most minimal of attributions tucked under a “read more”, to the apparent collapse of the prevailing test-based educational system, and even the deluge of slop graphics used in commercial slides (thankfully well on its way to being shunned), generative AI is intended to be deployed as a filter to further atomise people under the eyes of the administrative state.

Atomise or vaporise, with Israel’s use of automated data processing as part of its justification-production pipeline for the deaths of 55,000 Gazans (and counting). It does appear that being able to relate every council sanitation worker’s connection to the apparatus of the Gaza state has helped Israel maintain and freely choose the rate and targeting of airstrikes, removing a constraint that had previously led to reductions in bombing intensity.

Are LLMs worth it

I think the world would have been better without generative AI, but I don’t see a case to ban the use or training of models. I do see a case to regulate for open social systems, and a requirement that administrative actions be either personally justified or applied through processes defined by human-produced text. Equally, I demand a means by which you can opt out of relating information to an LLM company. As many now operate evasive scrapers, using tools like Anubis becomes a tax on humans, paid for LLM search engines’ need to “leech off the face of humanity”.

This rant was produced without the aid of LLMs (but with spellcheck).