Cyborg Scholars
Large Language Model (LLM)-enhanced authorship is accelerating at an extraordinary pace. Within academia, the share of papers crediting an LLM tool or model has grown exponentially since 2023. In software development, by some industry estimates, more than half of new code is now written with LLM assistance. Largely because of that assistance, the rate of knowledge production has never been higher. The intelligence explosion will not be constrained by the limits of LLM capability, but by our cultural norms around attribution and by linguistic gatekeeping. Although intended to safeguard the quality of academic work, traditional ideas of authorship in many disciplines may instead act as a brake, diminishing the potential growth of human knowledge.
The accelerating capability of LLM systems to generate scholarly text highlights a longstanding tension within academia: the dependence on clearly identifiable human authorship as a basis for credibility. Universities and journals currently restrict LLM co-authorship, citing questions of accountability, transparency, and research ethics. These concerns are grounded in the principle that scholarly claims must be traceable to a responsible agent who can defend the work.
Legacy Attitudes Towards Attribution
Recent public discussions surrounding citation and attribution practices across academia have underscored that authorship norms have always involved collaboration, borrowing, and iterative drafting to varying degrees. Committee-produced writing, multi-author workflows, and the contributions of research assistants and editorial staff have long shaped the final scholarly voice. The result is paradoxical: LLMs can make knowledge creation faster and clearer than ever, yet the very systems designed to ensure trust and credit are slowing its publication.
This conflict has played out very differently in software engineering. There, authorship is secondary to utility. Copying, pasting, and reusing existing code is not simply tolerated; it is the norm. Attribution norms are weaker not because developers lack ethics, but because their incentives are aligned around functionality. This makes software uniquely suited to rapid LLM integration: LLM code assistants are a continuation of a long-standing culture of reuse. GitHub Copilot, for instance, builds on decades of norms around forking, patching, and sharing code with minimal concern for original authorship. Because of these relaxed provenance norms, software R&D is likely to outpace other disciplines.
In 1997, Garry Kasparov became the first world chess champion to lose a match to a computer. The machine, Deep Blue, relied not on machine learning but on brute-force search combined with hand-tuned heuristic evaluation. No human has defeated a cutting-edge chess engine since. Yet even after humans lost their dominance in pure play, human-machine pairs found success against those same machines. Competing in a style known as cyborg chess, such pairs for years routinely outperformed both human grandmasters and standalone engines. This model offers a lesson for other domains of knowledge. The scholars of the future may become “cyborg scholars.” Their strength will not lie in generating ideas faster than machines, but in discerning which of those ideas are worth pursuing.
LLMs as a Lingua Franca
We should consider some of the advantages of LLM co-authorship. The most direct is the massive creative capability LLMs offer. They can facilitate brainstorming, analyze dispersed datasets, or conduct targeted literature reviews in seconds. They are not replacements for human thought, but enhancers of it.
A second advantage is that AI tools flatten linguistic barriers. With the aid of LLMs, non-native English speakers can contribute more effectively to academic publishing without years of immersion in academic English or dependence on English-speaking co-authors. Nature, for instance, recently noted a sharp increase in manuscript submissions from non-Anglophone regions correlated with the adoption of LLM-based writing tools. This does not replace subject expertise. Rather, it allows researchers to communicate their contributions more clearly across linguistic and cultural boundaries.
This benefit extends beyond non-native speakers. Even native English speakers who do not write according to the grammars or stylistic mores of elite institutions can now participate more easily in specialized discourse. An economist may use an AI assistant to adapt language for a history journal. A sociologist might adjust verbiage for a technical publication. Perhaps even a high school-educated plumber could contribute to an occupational safety journal. For better or worse, those without the cultural background can now spoof the linguistic shibboleths that once served as informal barriers to membership.
We should use this moment to ask how many of our norms around communication exist to ensure clarity, and how many simply reinforce hierarchies of access. A wider acceptance of AI co-authorship could lead to genuine epistemic democratization: access to creation no longer mediated by elite English-speaking institutions, and a reorientation of academic hierarchy away from aristocratic standards of legitimacy and toward meritocratic ones. The lingua franca for academics may no longer be academic English, but frontier LLMs used as a medium to exchange ideas freely across language, nation, and social class.
Traditions of Delegation
Professional knowledge work has long relied on structured delegation. Supreme Court justices have opinions drafted by clerks, generals have orders drafted by staffs, and academics have papers drafted by research assistants. Authorship delegation is nothing new. In each of these cases, the principal’s role is to provide final judgment and assume liability, not to micromanage the specific language of the document.
We should think of our new LLM assistants in the same way. We can now all be principals, and we may all now employ staff. As principals, our responsibility shifts from wordsmith to curator of ideas. The central question when publishing should be: do these words faithfully express what I intend them to? While it may bruise the ego, the best strategy for accelerating the collective pursuit of knowledge is to assume all writing is enhanced. Natural language should be treated as a neutral medium for transmitting ideas, not as an art form to be guarded. “Cyborg academics” should be welcomed as the next logical stage of scholarship.
Aesthetic Caveat
Within academia, writing is often treated as a transparent vehicle for ideas. But in many fields, the voice of the writer forms part of the intellectual contribution itself. Some scholars are recognizable not only for what they argue, but for how they argue it. Their habits, tone, and sense of emphasis are inseparable from the ideas they advance.
As LLM tools increasingly assist in drafting and refinement, these disciplines must ask to what extent individual voice is central to advancing knowledge. If clarity is all that matters, standardized and perhaps sterile LLM prose may be most practical. But if expression shapes interpretation, then writers have a responsibility to preserve the qualities that make their work distinctly their own. This might mean drafting certain sections unaided, maintaining stylistic consistency across works, or using LLMs under deliberate constraints. Recognition of beauty is essential to the human experience, but we should deliberately separate the aesthetic from the pragmatic.
The intelligence explosion will not be limited by LLM capability, but by our willingness to rethink what authorship means. In software, utility has long triumphed, and code is judged by whether it works, not by who wrote it. Academia may follow, if it can draw a sharper distinction between the medium used to communicate ideas and the ideas themselves. As machines master the craft of expression, the human role will evolve from mere authorship to intellectual design. The LLM can become the craftsman, while the human mind remains the architect of the idea. The future of writing will belong to those who can not only originate meaning, but direct the machine to portray it accurately.
