A core claim of the theory-building model of software development is that code developers don’t understand is a liability. It means your mental model of the software is inaccurate, which will lead you to introduce bugs as you modify it or add components that interact with the pieces you don’t understand.
Language model tools for software development are specifically designed to produce large volumes of code that the programmer doesn’t understand. They are liability engines for all but the most experienced developers. You can’t solve this problem by having the “AI” understand the codebase and how its various components interact, because a language model isn’t a mind. It can’t have a mental model of anything. It only works through correlation.