WeSearch

LLMs Are Not a Higher Level of Abstraction

· 3 min read
#ai #programming #abstraction #llms #software-development #lelanthran #alan-perlis
⚡ TL;DR · AI summary

The article argues that large language models (LLMs) do not represent a higher level of abstraction in programming like previous transitions from binary to high-level languages. Unlike deterministic systems where specific inputs produce specific outputs, LLMs generate probabilistic results, introducing unpredictability and potential unintended artifacts. The author emphasizes the need for programmer self-awareness and cautions against treating LLM-generated code as equivalent to traditionally abstracted programming languages.
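The summary's core distinction can be sketched in a few lines. This is a minimal, illustrative toy, not code from the article: `compile_expr` stands in for a deterministic abstraction layer (compiler/interpreter), and `llm_like` stands in for a probabilistic generator; both names are hypothetical.

```python
import random

# Deterministic abstraction: the same source always maps to the same result.
# (Toy stand-in for a compiler/interpreter; eval is used purely for illustration.)
def compile_expr(source: str) -> int:
    return eval(source)  # "2 + 3" lowers to 5, every single time

# LLM-like step: the same prompt can yield different outputs, because the
# result is sampled from a distribution rather than computed by fixed rules.
def llm_like(prompt: str, rng: random.Random) -> str:
    completions = ["return a + b", "return a - b", "return sum([a, b])"]
    return rng.choice(completions)

deterministic = {compile_expr("2 + 3") for _ in range(100)}
rng = random.Random(0)  # seeded only so the sketch is reproducible
probabilistic = {llm_like("add two numbers", rng) for _ in range(100)}

print(deterministic)       # always {5}
print(len(probabilistic))  # > 1: identical prompts, varying artifacts
```

The point of the sketch: a compiler's output set for one input has exactly one element, while a sampled generator's output set for one input generally does not, which is the "unintended artifacts" risk the summary describes.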

Original article via Hacker News: Newest
Opening excerpt (first ~120 words)

LLMs Are Not a Higher Level of Abstraction

"A picture is worth 10K words - but only those to describe the picture. Hardly any sets of 10K words can be adequately described with pictures." -- Alan Perlis

Posted by Lelanthran · 2026-04-27

The Myth

I am seeing the claim everywhere online that LLMs are a higher level of abstraction. If you claim that you haven't seen this claim, then you had better stop reading now - this blog post is not for you.1 Specifically, I am seeing the claim that LLMs are the next step in the abstractions we had, going from programming in binary to programming in assembly to programming in C to programming in Python. Now, I am told, programming with LLMs is the next abstraction.

Excerpt limited to ~120 words for fair-use compliance. The full article is at Hacker News: Newest.

