Hi there. My name is Matei Stanca and I make websites and stuff.
Featured project
Neurocracy
Neurocracy is a project conceived by Joannes Truyens, Younès Rabii, and myself, featuring visual direction and illustrations by Alice Duke and Ollie Tarbuck, in addition to a range of futurist stories from contributing writers Leigh Alexander, Io Black, Holly Nielsen, Malka Older, Edward Smith, Axel Hassen Taiari, and Yudhanjaya Wijeratne. It aims to be equal parts interactive fiction and cautionary tale about the intersection of surveillance capitalism, big data, and authoritarianism.
Latest snippets
I wasted a day on CSS selector performance to make a website load 2ms faster
For a few minutes, I felt like Indiana Jones staring at the holy grail: 230ms of savings from some CSS selector changes. This felt significant. And yet, the niggling phrase “CSS selector efficiency is not something to worry about in 2024” kept ringing through my head.
I ran the before/after through WebPageTest and to my surprise (and lack of surprise), nothing significant had changed.
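To make that kind of result concrete, here’s a minimal TypeScript sketch of the sort of micro-benchmark that produces impressive-looking lab numbers like that 230ms figure. The selectors and iteration count are invented for illustration, and this isn’t the methodology from the post; it only shows how a selector can look slow when hammered in isolation while barely registering in a real page load.

```typescript
// Hypothetical micro-benchmark: repeatedly ask the engine to match a selector
// and time the total. Selectors and iteration count are made up for this sketch.
function timeSelector(selector: string, iterations = 1000): number {
  const start = performance.now();

  for (let i = 0; i < iterations; i++) {
    document.querySelectorAll(selector);
  }

  return performance.now() - start;
}

// A long descendant chain versus a single class; run in a browser console on
// any reasonably large page. The gap between the two numbers is real, but it's
// measured over a thousand forced queries, not one render of one page.
console.log(`complex: ${timeSelector("nav ul li a[href]")}ms`);
console.log(`simple:  ${timeSelector(".nav-link")}ms`);
```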
Programmers' mental models keep software alive
What keeps the software alive are the programmers who have an accurate mental model (theory) of how it is built and works. That mental model can only be learned by having worked on the project while it grew or by working alongside somebody who did, who can help you absorb the theory. Replace enough of the programmers, and their mental models become disconnected from the reality of the code, and the code dies. That dead code can only be replaced by new code that has been ‘grown’ by the current programmers.
Language models don't "know" anything
Training data are construction materials for a language model. A language model can never be inspired. It is itself a cultural artefact derived from other cultural artefacts.
The machine learning process is loosely based on decades-old, grossly simplified models of how brains work.
[…]
It’s important to remember this so that we don’t fall for marketing claims that constantly imply that these tools are fully functioning assistants.
[…]
Who was the first man on the moon?
[…]
What you’re likely to get back from that prompt would be something like:
On July 20, 1969, Neil Armstrong became the first human to step on the moon.
This is NASA’s own phrasing. Most answers on the web are likely to be variations on it, so the answer from a language model is likely to be one too.
[…]
The prompt we provided is strongly associated in the training data set with other sentences that are all variations of NASA’s phrasing of the answer. The model won’t answer with just “Neil Armstrong” because it isn’t actually answering the question, it’s responding with the text that correlates with the question. It doesn’t “know” anything.
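A crude way to see “responding with the text that correlates with the question” in action is the toy TypeScript sketch below. Everything in it is invented for illustration, and real language models work nothing like this mechanically; it just returns whichever canned sentence shares the most words with the prompt, producing a plausible-looking answer without anything that could be called knowing.

```typescript
// Toy "training data": answers on the web are mostly variations of NASA's phrasing.
const trainingData: string[] = [
  "On July 20, 1969, Neil Armstrong became the first human to step on the moon.",
  "Neil Armstrong became the first human to walk on the moon on July 20, 1969.",
  "The first man on the moon was Neil Armstrong, who stepped onto the surface in 1969.",
];

// Crude correlation: count how many of the candidate's words appear in the prompt.
function overlap(prompt: string, candidate: string): number {
  const promptWords = new Set(prompt.toLowerCase().match(/[a-z]+/g) ?? []);
  const candidateWords = candidate.toLowerCase().match(/[a-z]+/g) ?? [];

  return candidateWords.filter((word) => promptWords.has(word)).length;
}

// "Answer" a prompt by returning the stored sentence that correlates best with
// it. Nothing here understands the question; it only matches text against text.
function respond(prompt: string): string {
  return trainingData.reduce((best, candidate) =>
    overlap(prompt, candidate) > overlap(prompt, best) ? candidate : best
  );
}

console.log(respond("Who was the first man on the moon?"));
// -> "The first man on the moon was Neil Armstrong, who stepped onto the surface in 1969."
```

Note that the toy echoes whichever phrasing correlates best with the prompt. Real models are vastly more sophisticated, but the failure mode has the same shape: the output is the text the prompt statistically pulls toward, not a fact that was looked up and verified.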