AI

Please stop externalizing your costs directly into my face

If you think [LLM] crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned, including expensive endpoints like git blame, every page of every git log, and every commit in every repo, and they do so using random User-Agents that overlap with end-users and come from tens of thousands of IP addresses – mostly residential, in unrelated subnets, each one making no more than one HTTP request over any time period we tried to measure – actively and maliciously adapting and blending in with end-user traffic and avoiding attempts to characterize their behavior or block their traffic.
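For context, robots.txt is the voluntary mechanism site operators use to tell crawlers which paths to skip. A minimal sketch of the kind of file a git forge might serve follows; the paths are illustrative, not SourceHut's actual configuration. The complaint above is that these bots ignore such a file entirely.

```
# Hypothetical robots.txt for a git forge; paths are illustrative only.
# A well-behaved crawler skips these expensive endpoints.
User-agent: *
Disallow: /blame/
Disallow: /log/
Disallow: /commit/
```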

We are experiencing dozens of brief outages per week, and I have to review our mitigations several times per day to keep that number from getting any higher. When I do have time to work on something else, often I have to drop it when all of our alarms go off because our current set of mitigations stopped working. Several high-priority tasks at SourceHut have been delayed weeks or even months because we keep being interrupted to deal with these bots, and many users have been negatively affected because our mitigations can’t always reliably distinguish users from bots.

All of my sysadmin friends are dealing with the same problems. I was asking one of them for feedback on a draft of this article and our discussion was interrupted to go deal with a new wave of LLM bots on their own server.

[…]

Please stop legitimizing LLMs or AI image generators or GitHub Copilot or any of this garbage. I am begging you to stop using them, stop talking about them, stop making new ones, just stop. If blasting CO2 into the air and ruining all of our freshwater and traumatizing cheap laborers and making every sysadmin you know miserable and ripping off code and books and art at scale and ruining our fucking democracy isn’t enough for you to leave this shit alone, what is?


Language models don't "know" anything

Training data are the construction materials for a language model. A language model can never be inspired. It is itself a cultural artefact derived from other cultural artefacts.

The machine learning process is loosely based on decades-old, grossly simplified models of how brains work.
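To make "loosely based" and "grossly simplified" concrete, here is a minimal sketch of the artificial-neuron abstraction those models rest on: a weighted sum followed by a threshold, dating back to McCulloch and Pitts (1943) and Rosenblatt's perceptron (1958). The weights below are illustrative, chosen by hand.

```python
# Minimal artificial neuron: a weighted sum followed by a threshold.
# This is the decades-old abstraction; real neurons are vastly more complex.
def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # "fire" or "don't fire"

# Example: weights chosen by hand so the neuron computes logical AND.
print(neuron([1, 1], [0.6, 0.6], -1.0))  # -> 1
print(neuron([1, 0], [0.6, 0.6], -1.0))  # -> 0
```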

[…]

It’s important to remember this so that we don’t fall for marketing claims that constantly imply that these tools are fully functioning assistants.

[…]

Who was the first man on the moon?

[…]

What you’re likely to get back from that prompt is something like:

On July 20, 1969, Neil Armstrong became the first human to step on the moon.

This is NASA’s own phrasing. Most answers on the web are likely to be variations on it, so the answer from a language model is likely to be one too.

[…]

The prompt we provided is strongly associated in the training data set with other sentences that are all variations of NASA’s phrasing of the answer. The model won’t answer with just “Neil Armstrong” because it isn’t actually answering the question; it’s responding with the text that correlates with the question. It doesn’t “know” anything.
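A toy sketch can make the correlation point concrete. This is emphatically not how a transformer works internally; it is a caricature showing that when the output is driven purely by which continuation is most strongly correlated with the prompt in the training text, no fact lookup happens anywhere. The miniature "training set" below is made up.

```python
from collections import Counter

# Made-up miniature "training data": prompt/continuation pairs,
# dominated by variations of NASA's phrasing.
training_pairs = [
    ("Who was the first man on the moon?",
     "On July 20, 1969, Neil Armstrong became the first human to step on the moon."),
    ("Who was the first man on the moon?",
     "On July 20, 1969, Neil Armstrong became the first human to walk on the moon."),
    ("Who was the first man on the moon?",
     "On July 20, 1969, Neil Armstrong became the first human to step on the moon."),
    ("Who was the first man on the moon?",
     "Neil Armstrong was the first man on the moon."),
]

def respond(prompt: str) -> str:
    # No knowledge base, no reasoning: just return the continuation
    # most strongly correlated with the prompt in the training text.
    continuations = Counter(c for p, c in training_pairs if p == prompt)
    return continuations.most_common(1)[0][0]

print(respond("Who was the first man on the moon?"))
# -> "On July 20, 1969, Neil Armstrong became the first human to step on the moon."
```

The answer comes out right only because the correlated text happens to be right; nothing in the mechanism checks it.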

Why using language models for programming is a bad idea

A core aspect of the theory-building model of software development is that code developers don’t understand is a liability. It means your mental model of the software is inaccurate, which will lead you to create bugs as you modify it or add other components that interact with the pieces you don’t understand.

Language model tools for software development are specifically designed to create large volumes of code that the programmer doesn’t understand. They are liability engines for all but the most experienced developer. You can’t solve this problem by having the “AI” understand the codebase and how its various components interact with each other because a language model isn’t a mind. It can’t have a mental model of anything. It only works through correlation.