I know it sounds like clickbait, but it isn't. LLMs now discover real‑world zero‑days, and they do it with brute‑force patience, not superhuman IQ. Today, a swarm of lightweight LLM agents can out‑grind any human. If you think language models are nowhere near that yet, stick around.
LLMs became good at hacking by accident