(Reading Notes) Arguments that Count

On weapons research and computing research during the Cold War, the works of Nathan Ensmenger (The Computer Boys Take Over), Fred Turner (From Counterculture to Cyberculture), and, of course, Rebecca Slayton’s Arguments that Count are indispensable contributions to the discussion.

This book essentially asks the following question: why, in high-stakes national decision-making involving advanced technology, are the opinions of certain experts treated as authoritative (“counting”), while other, equally dire warnings are ignored for decades? In other words, how is the authority of knowledge (or opinion) constructed? Why can one group’s line of argument decisively influence the trajectory of technological development, while that of other technical experts cannot?

Through the case study of missile defense, the book finds that for decades the arguments of physicists (e.g., prohibitive costs, violations of physical law) were treated as authoritative, whereas the arguments of computer specialists (e.g., that the software would be too complex, untestable, and bound to fail) were dismissed as “pessimism” and disregarded.

The core argumentative model is built upon a comparison of the “disciplinary repertoires” (知识库) constructed by the fields of physics and software.

Professor Slayton argues that the physics knowledge repertoire was pre-existing and exceptionally authoritative. Physics entered the Cold War decision-making circle with a long-established and highly credible repertoire based on “natural laws,” mathematical quantification, and centuries of scientific practice. Therefore, when a physicist stated, “This is not physically feasible,” the argument inherently carried immense authority and was seen by policymakers as “objective fact.”

In contrast, no computing knowledge repertoire existed in the early 1950s, or at least none that could rival physics. This is demonstrated by the SAGE failures being dismissed as “engineering management problems,” and by the opposition from computer experts during the ABM debate being rejected as “anecdotal” and “pessimistic.” This seems to stem from the software experts’ lack of an authoritative, recognized knowledge system through which to gain validation. Regardless, the computing field still played an incomparably vital role during the Cold War: its components were indispensable parts of various weapons systems, which together constituted a massive Cold War apparatus. This strongly echoes Paul Edwards’s argument about the political worldview of the Cold War, in which large-scale, real-time systems became an embodiment of the “Closed World.” Both the physics debates and the computer systems became part of the Cold War mentality, and this mentality, in turn, spurred the development of both fields.

The Struggle for Discursive Power

The struggle for discursive power (话语权) over weapons systems is also reflected in Professor Slayton’s book; it is, in fact, one of its central themes. Dominance over R&D and the “interpretive power” (解释权力) over these systems shifted: physicists ceased to be the sole source of explanatory authority, and software engineers became another critical source.

Initially, physicists’ perception of software’s “infinite flexibility” (无限灵活性) led to the devaluation of computing: programming was relegated to routine computational work performed largely by women, and even warnings of software unreliability were dismissed. All of this signified that discursive power rested firmly with the physicists.

The SDI (Strategic Defense Initiative) debate of the 1980s brought the computing field back into the discursive arena, and the argument over software unreliability became central to this era; the Patriot missile’s later failures served as a key piece of evidence. Professor Slayton argues that the construction of “software engineering” was a key driver pushing software into the center of the discursive struggle. This process transformed software from a neglected, feminized, “soft” (malleable) craft into a rational, model-driven “scientific engineering.” Through this reframing, software engineering gained a seat at the table. The author thus presents the conclusion reached in the 1980s: software is rigid, complex, and unreliable.

However, in reality, the success of the Apollo program, as well as subsequent large-scale software projects (cloud platforms, and even China’s 12306 ticketing system), demonstrates that software systems can be made dependable. Here it is worth introducing an important concept from software development: system reliability (系统可靠性) and its companion metric, availability (可用性). The familiar “nines” (99.9%, 99.99%, 99.999%) are availability targets, referring to the fraction of the year a system is up. In any engineering project, reliability is treated as a key metric, and common methods for improving it include redundant multi-node deployment, geographic redundancy, DevOps practices, error detection and correction, and proactive maintenance.
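These “nines” can be made concrete with a little arithmetic: each additional nine cuts the permitted annual downtime by a factor of ten. A minimal sketch (the function name and the non-leap-year constant are illustrative choices, not from the book or any standard):

```python
# Convert an availability target (a "number of nines") into the
# maximum downtime permitted per (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 3600

def max_annual_downtime_hours(availability: float) -> float:
    """Hours of downtime allowed per year at the given availability."""
    return SECONDS_PER_YEAR * (1.0 - availability) / 3600

for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%} availability -> "
          f"{max_annual_downtime_hours(target):.4f} h downtime/year")
```

At “three nines” the downtime budget is roughly 8.76 hours per year; at “five nines” it shrinks to about five minutes, which is why the higher targets demand the redundancy and failover techniques listed above.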

In a sense, contemporary software engineering does extend the 1980s realization, treating the potential for failure as a fundamental premise of development. It does not pursue 100% reliability, either as an achievable goal or as a necessary one; instead, it treats “availability” (可用性) as the key metric.

The Post-1980s Negotiation

Through the SDI debate, the opinions of software engineering experts were finally respected. David Parnas’s resignation from the SDI computing panel became a landmark event. His argument—that a complex system with over ten million lines of code could not undergo realistic operational testing, as the only true test would be the outbreak of nuclear war—became an extremely potent piece of evidence. From this point on, the computing knowledge repertoire finally achieved an authoritative status at the highest levels, similar to that of the physics repertoire.

This leads to Professor Slayton’s key conclusion: the “arbitrary complexity” (任意复杂性) of contemporary weapons systems has increased dramatically, demanding that both reliability and adaptability be increased without limit. These two requirements, however, are fundamentally irreconcilable in practice: reliability implies a perfected response to specific, well-rehearsed scenarios, achieved through repeated testing, whereas adaptability demands flexible responses to varied and unforeseen operational scenarios. Contemporary defense systems, which face complex and fluid application scenarios yet are still required to respond perfectly, therefore embody a decision-making process doomed to failure.

The Performative Struggle for Discursive Power by Technocrats

Professor Slayton delineates two groups of technocrats in this book. First, the physicist-technocrats, including Jerome Wiesner, Hans Bethe, and Richard Garwin. They largely shared a background in the Manhattan Project and thus wielded immense influence; they were insiders on the President’s Science Advisory Committee (PSAC). (Zuoyue Wang, in In Sputnik’s Shadow, has discussed the influence of PSAC in detail.) Second, the computing-technocrats emerged later, including Jay Forrester, J.C.R. Licklider, Barry Boehm, and David Parnas. Their role transformed from that of subordinate “craftsmen” to technocrats.

What interests me is the way they competed for discursive power. Aside from closed-door bodies like PSAC, congressional hearings and the mass media became the most important avenues for winning this struggle and capturing public attention.

Performance (表演性) via the mass media became a key strategy. This performance was a carefully designed transmission of information to the public, and it also actualized the American media’s role as the “Fourth Estate.” However, the media’s choices were not purely objective or neutral. The 1968 Scientific American article opposing the ABM was authored by physicists. Such biased dissemination amplified the influence of one side, while the opposing side, to counter it, often resorted to more assertive or headline-grabbing conclusions. This can also be seen in the New York Times’s front-page reporting on David Parnas’s resignation letter.

CNN’s live broadcasts of Patriot missile interceptions during the 1991 Gulf War likewise became a performance for the public. Although it was later verified that the Patriot’s interception rate was near zero, the media constructed a myth of technological success. The media’s pursuit of numbers, which symbolize modernity, became a battle over interpretation. As William Deringer demonstrated in Calculated Values, numbers that appear rational, neutral, and objective are in fact meticulously packaged as weapons to validate one’s own viewpoint.

Additional Real-World Variables

This book uses the struggle between two disciplinary repertoires as its model to analyze weapons systems R&D since the Cold War. However, this model by itself perhaps cannot account for other real-world variables.

  1. The development of a discipline is inherently temporal. The software field, which emerged only in the 1960s, naturally lacked the academic accumulation to compete with physics and its centuries of tradition. This imbalance in disciplinary power is caused not necessarily by inherent differences between the disciplines, but by their different stages of development.
  2. The Military-Industrial Complex (MIC), as the most significant driver of weapons systems R&D during the Cold War, does not receive adequate discussion in this book. Works like Michael H. Armacost’s The Politics of Weapons Innovation have detailed the R&D control struggle between the Army and Air Force over intermediate-range ballistic missiles. From this, we can see the fierce struggle for discursive power between the MIC and the technocrats. Physicists were the objects of intense competition among different factions of the MIC, whereas computer scientists only became a group to be courted after the 1980s, once their discipline had matured. (Of course, this is also directly related to the rising power of Silicon Valley.)
  3. After introducing the construction of disciplinary repertoires, Professor Slayton grants the repertoires themselves considerable agency. Yet the actual subjects remain the physicists and computing experts: they are the agents of action, and their agency is the primary force in this field of negotiation. In such a massive arena of competing interests, the agency of a constructed “knowledge repertoire” alone seems insufficient to contend with high-energy political and economic variables. Should the “knowledge repertoire” not, then, be seen more as an influencing factor, one integrated with the strategies of the actors?
