For centuries, the devices humanity built reduced the effort required for tasks otherwise difficult or impossible for our bodies. Bows allowed humans to propel objects further, and with more accuracy and force, than they could throw rocks. The interaction is simple: hold the bow with one hand, pull the string with the other. The penny farthing meant humans could travel further and faster. Bend legs to drive the pedals, turn handlebars to guide direction. Increasingly sophisticated bikes present the same interface to the rider, but the mechanisms that power the device have grown more complicated.
Devices for computation moved from purely analogue, mechanical interfaces to digital interfaces. Early languages that saw actual use had few abstractions: assembly language instructions mapped common machine operations to text-based commands. These human-readable languages exist to simplify humanity’s use of mechanical devices. For computers to be used broadly, they had to be understandable and intuitive to a human, and feeding in pieces of paper with holes cut out is harder to understand than human language.
Early mechanical computers involved humans turning algorithmic ideas into lengthy strips of paper with holes cut out, providing instructions to the machine: given some input, perform these operations on the data, and present the output. Modern computers aren't much different, but the level of complexity has increased greatly, both in terms of what can be achieved and how it is achieved. An early computer could eventually be understood by someone with no prior knowledge and infinite time. Modern computers are incomprehensible.
Human language is the best tool we have to express ourselves. We can dynamically convey enormous amounts of information on an effectively infinite number of topics. Mechanical devices need specifics. While a human brain may be able to infer the meaning of an ambiguous statement, a computer cannot. The dream programming language is one that allows a human to express a concept accurately, precisely, and consistently¹.
Accuracy and precision are vital to producing high-performance, correct, and safe code. Consider the optimizations needed to make modern websites possible. V8 takes imprecise code (imprecise because of JavaScript’s dynamic types and language features) and turns it into a mostly high-performance runtime. Alternatively, Rust is a fast language, aided by the precision required in both the data types declared and the correctness of the algorithms used. If an algorithm must cover the edge case where an otherwise homogeneous collection (all elements of the same type, e.g. a list of numbers) might contain a string (i.e. mixed types), then the algorithm is usually slower.
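As a minimal Rust sketch of that cost (the types and functions here are my own illustration, not from any particular codebase): summing a homogeneous list needs no per-element checks, while a mixed-type list forces a runtime decision on every element.

```rust
// Mixed-type element: each value carries a runtime tag.
enum Value {
    Number(i64),
    Text(String),
}

// Homogeneous: the compiler knows every element is an i64,
// so this compiles to a simple, tight loop.
fn sum_numbers(values: &[i64]) -> i64 {
    values.iter().sum()
}

// Mixed: every element must be inspected at runtime, and the
// algorithm has to decide what a `Text` value even means here.
fn sum_values(values: &[Value]) -> i64 {
    values
        .iter()
        .map(|v| match v {
            Value::Number(n) => *n,
            Value::Text(s) => s.parse::<i64>().unwrap_or(0),
        })
        .sum()
}

fn main() {
    let homogeneous = vec![1, 2, 3];
    let mixed = vec![Value::Number(1), Value::Text("2".to_string())];
    println!("{} {}", sum_numbers(&homogeneous), sum_values(&mixed));
}
```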
Consistency is important for security and for data integrity. Not all data needs to be handled consistently, but data related to user information, session management, or memory addresses does.
LLMs are able to produce something similar to the dream programming language. They can answer questions they weren’t directly built to answer. Humans can ask questions, in many human languages, and get answers from a computer. The output isn’t always consistent, accurate, or correct, but it is easy to obtain without needing to learn a new conceptual language. A common sentiment non-technical people share when discussing technology with developers is that they don’t speak the tech “language”. With LLMs, anyone can speak the language, as it’s the human language they know and use daily.
Despite the improvements in accessibility, LLMs still need to produce code in a programming language. They could produce binary directly, but don’t. There’s a good reason for that: there is a lot of training data mapping between human languages and high-level programming languages, but far less between human languages and binary.
High-level programming languages allow developers to target multiple platforms. Code written in 2025 will generally be expected to run on at least five operating systems (Linux, Windows, macOS, iOS, Android) and two main instruction sets (x86_64, ARM). High-level languages let developers target all of these while keeping the majority of the logic the same. Binary produced for a specific instruction set on a specific operating system does not have the same freedom. So even if there were a mapping between human languages and binary, it is unlikely to be as adaptable as a language with a methodical compiler.
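A rough, hypothetical Rust sketch of what that portability looks like in practice: the platform-specific slice is isolated behind conditional compilation, while the rest of the logic is written once and compiled for whichever target is requested.

```rust
// The bulk of the logic is platform-independent...
fn greeting(name: &str) -> String {
    format!("Hello, {name}! Running on {}.", platform_name())
}

// ...and only a thin slice is selected per target at compile time.
#[cfg(target_os = "linux")]
fn platform_name() -> &'static str {
    "Linux"
}

#[cfg(target_os = "windows")]
fn platform_name() -> &'static str {
    "Windows"
}

#[cfg(target_os = "macos")]
fn platform_name() -> &'static str {
    "macOS"
}

#[cfg(not(any(target_os = "linux", target_os = "windows", target_os = "macos")))]
fn platform_name() -> &'static str {
    "another platform"
}

fn main() {
    println!("{}", greeting("world"));
}
```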
Eventually, we should expect LLMs to become programming languages. To produce fast, consistent, safe, and portable code, an intermediate representation of the logical concepts is probably needed. I don’t think it will be text-based. I think it will be built upon abstract trees representing the logic, which may be displayed to users as text, diagrams, or some other format. A lot of LLM-powered tooling is already visual. A conversation that produces a product is powerful. Human engineers do the same thing, turning business requirements into programming language code.
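To make “abstract trees representing the logic” concrete, here is a small hypothetical sketch (my own illustration, not a proposal from this post): a single expression tree that can be evaluated directly or rendered as text, and could just as easily be rendered as a diagram.

```rust
// A tiny logic tree: one enum of nodes, plus two "views" of the
// same structure (evaluate it, or print it as text).
enum Expr {
    Number(f64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(expr: &Expr) -> f64 {
    match expr {
        Expr::Number(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn render(expr: &Expr) -> String {
    match expr {
        Expr::Number(n) => n.to_string(),
        Expr::Add(a, b) => format!("({} + {})", render(a), render(b)),
        Expr::Mul(a, b) => format!("({} * {})", render(a), render(b)),
    }
}

fn main() {
    // (2 + 3) * 4 — the tree is the source of truth; text is just one view.
    let tree = Expr::Mul(
        Box::new(Expr::Add(
            Box::new(Expr::Number(2.0)),
            Box::new(Expr::Number(3.0)),
        )),
        Box::new(Expr::Number(4.0)),
    );
    println!("{} = {}", render(&tree), eval(&tree));
}
```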
Tools like Figma haven’t replaced coding, despite the conversion between design and code being time-consuming for relatively small amounts of logistical code. It is unlikely that LLMs will replace coding, and traditional programming languages are unlikely to vanish. However, there is a subset of code written today that does not require the performance or accuracy associated with programming languages.
I don’t think programmers need to jump on board with AI code assistants. They may be useful in some cases, to complement a programmer’s other tools. I will keep writing my code by hand; assistants do not save me the time I need to spend thinking about problems. And I’ll keep working on programming languages.
The future most likely involves LLMs specifically crafted for programming, which is fascinating from a programming language design perspective. But no, AI is not replacing programming languages yet.
Social-media preview credit: Jud McCranie, via Wikipedia
¹ I originally had “concisely” in the description of the dream programming language. Conciseness aids accuracy and consistency. However, in a world where human languages are used to program (i.e. via LLM-generated code), conciseness is less important. If the parser (the LLM) is able to understand the intended functionality, then conciseness matters less.