The following is a repost of an edebate post from Gordon Mitchell, Director of UPitt’s William Pitt Debating Union. It is one of the most comprehensive and useful posts on ethics and evidence I have seen to date. I believe this will be a big issue in debate in the coming years, and getting ahead of the curve on evaluating evidence will be helpful.
What is a legitimate source to cite as evidence in a policy debate contest round? Should forensic specialists publish material that addresses the topic area on which they are currently coaching? How can members of the policy debate community relate their simulation-based research to “real world” decision-making and analysis of relevant policy issues?
These questions about publicity and publication have received extended treatment recently on debate lists and discussion boards, with conversation sparked by specific events. On the high school level, controversy swirled in the wake of revelations that a high school coach apparently published a topic-relevant article using a pseudonym with fictitious credentials (Marburry, 2009). Then two Center for Strategic and International Studies analysts (CSIS JY, 2009, 8) successfully persuaded college debaters and forensics specialists to select nuclear weapons policy as the 2009-2010 intercollegiate policy debate topic area, in part by claiming, “there will be a demand for your expertise in the policy analysis community.”
Roughly speaking, the act of publishing entails preparing material for public uptake, and then announcing the event to facilitate circulation. For many years, this process was structured largely as an economic transaction between authors and printing press owners, with editors often serving as gatekeepers who would vet and filter material. Readers relied on markers of professionalism (quality of print and ink, circulation, reputation of editors) to judge the relative credibility of publications. In the academy, referees employed similar metrics to assess a given writer’s degree of scholarly authority, metrics that were rooted in principles of publication scarcity and exclusivity – that a scholar’s caliber was demonstrated in part by the ability to persuade editors to publish his or her work.
Acceleration of Internet communication and the advent of digital online publication destabilized these arrangements fundamentally. Publication, previously a one-to-many transaction, has become a many-to-many enterprise unfolding across a complex latticework of internetworked digital nodes. Now weblogs, e-books, online journals, and print-on-demand book production and delivery systems make it possible for a whole new population of prospective authors to publish material in what Michael Jensen (2008), National Academy of Sciences Director of Strategic Web Communications, calls an “era of content democracy and abundance.”
In content abundance, the key challenge for readers and referees has less to do with finding scarce information, and more to do with sorting wheat from the proverbial chaff (the ever-burgeoning surplus of digital material available online). The pressing nature of this information-overload challenge has spurred invention of what Jensen (2007) calls “new metrics of scholarly authority” – essentially, new ways of measuring the credibility and gravitas of knowledge producers in a digital world of content abundance.
For Jensen, traditional “authority 1.0” metrics, such as book reviews, peer-reviewed journal publications, and journal “impact factors,” are gradually being supplanted in popular culture by “authority 2.0” metrics such as Google page ranks, blog post trackbacks, and diggs. Jensen’s point is not that these new metrics of scholarly authority are necessarily superior to the old measurement tools, or that they are especially reliable or appropriate for assessing any given author’s credibility (especially in an academic context). His point is that they are developing very fast, and becoming more widespread as markers of intellectual gravitas: “Scholarly authority, the nuanced, deep, perspective-laden authority we hold dear, is under threat by the easily-computable metrics of popularity, famousness, and binary votes, which are amplified by the nature of abundance-jaded audiences” (Jensen, 2008, 25).
While Jensen (2008, 25) sees this current trend from an era of content scarcity to an era of content abundance as a “revolutionary shift,” a “cultural U-turn so extreme it’s hard to comprehend,” he also eschews determinism by stipulating that this “is a transformation we can influence.” One key avenue of influence entails invention and refinement of what Jensen calls “authority 3.0” metrics – sophisticated instruments that track and measure knowledge creation and dissemination in ways that blend traditional “authority 1.0” principles such as peer review with newfangled digital tools like Reference Finder (a National Academies Press “fuzzy matching” search tool) and Microsoft’s Photosynth.
How does this relate to the world of policy debate? Certainly the new metrics present tools for debaters to measure the credibility of online publications, a task that is becoming increasingly salient as digital material increasingly finds its way into contest rounds (see e.g. Alderete, 2009; Phillips, 2009). But there are also other connections. Jensen’s brother was a successful high school debater under Randy McCutcheon at East High School in Lincoln, Nebraska, so Jensen knows all about inherency, index cards and spewdown delivery. And in the debate community’s early efforts at collaborative online knowledge production (such as DebateResults, Planet Debate, Cross-x.com and caselist wikis), Jensen sees seeds of new metrics of scholarly authority.
Consider what takes place in a debate tournament contest round held under today’s conditions of digitally networked transparency. Debaters present their research on both sides of a given topic, citing evidence to support their claims. Those claims (and increasingly, the precise citations or exact performative elements supporting them) are often transcribed and then uploaded to a publicly available digital archive. The yield is a remarkably intricate and detailed map of a whole set of interwoven policy controversies falling under the rubric of the yearlong national policy debate resolution. Who cares about this? Debaters and forensics specialists preparing for the next tournament certainly take interest, as the map provides a navigational tool that leverages preparation for future contests. But recall the CSIS JY (2009) pitch to college debaters and forensics specialists researching nuclear weapons policy: “There will be a demand for your expertise in the policy analysis community.” Let us reflect on how this demand could manifest, and how intercollegiate debate might meet it halfway.
* Professional training. On a most basic level, the CSIS JY “public merits” case for the nuclear weapons policy topic area is colored by the legacy of William Taylor, former vice president and now senior adviser at CSIS. Taylor created a fellowship program that brought recently graduated intercollegiate debaters to Washington, D.C. to work at his highly influential security think tank. Since 1997, a host of former debaters have applied their debate research skills to policy analysis for CSIS, often on nuclear issues. Meanwhile, other former debaters have ascended to prominent posts in academia, where they often mentor scholars on nuclear policy. In this respect, debate training on nuclear policy today might result in career advancement in a research field tomorrow, where there is “demand” for the unique type of skill-set honed in the crucible of debate competition. These opportunities could be cultivated further through informal recruitment channels, information exchange, and perhaps development of additional fellowship programs modeled on the CSIS Taylor initiative.
* Digital debate archive (DDA) as a public research resource. With refinement (perhaps through incorporation of Django, GeNIe and SMILE web tools), online caselist wikis could be transformed into publicly accessible databases designed to provide policy-makers, journalists, and others resources for interactive study of the nuclear weapons policy controversy. Let’s say a reporter for the Global Security Newswire is following the START arms control beat. She could visit the DDA and not only pull up hundreds of contest rounds where arms control was debated; she could click through to find out how certain teams deployed similar arguments, which citations were getting the most play, which sources were cited most frequently by winning teams, and which citations on arms control were new at the last tournament. Such post-mortem analysis of the debate process could enable non-debaters to “replay the chess match” that took place at unintelligible speed during a given contest round (Jensen, 2009; see also Woods et al., 2006).
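To make the reporter’s queries above concrete, here is a minimal sketch in Python of how such a DDA might be interrogated. The record schema (`topic`, `winner`, `citations`) and the placeholder source names are entirely hypothetical, invented for illustration; an actual caselist database would be far richer and would draw on real round transcripts.

```python
from collections import Counter

# Hypothetical DDA records: each round lists its topic tag,
# the winning side, and the sources cited by each side.
rounds = [
    {"topic": "arms control", "winner": "aff",
     "citations": {"aff": ["Source A", "Source B"], "neg": ["Source C"]}},
    {"topic": "arms control", "winner": "neg",
     "citations": {"aff": ["Source B"], "neg": ["Source C", "Source A"]}},
    {"topic": "deterrence", "winner": "aff",
     "citations": {"aff": ["Source D"], "neg": ["Source C"]}},
]

def rounds_on(topic):
    """Pull up all archived rounds tagged with a given topic."""
    return [r for r in rounds if r["topic"] == topic]

def top_sources_among_winners(topic):
    """Which sources were cited most frequently by winning teams?"""
    tally = Counter()
    for r in rounds_on(topic):
        tally.update(r["citations"][r["winner"]])
    return tally.most_common()

# "Source A" tops the arms control list: cited by both winning teams.
print(top_sources_among_winners("arms control"))
```

The sketch shows only two of the queries imagined above; "which citations were new at the last tournament" would additionally require a tournament-date field on each record.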
* Authority 3.0 metrics. The marriage of a DDA with Jon Bruschke’s ingenious DebateResults online resource could pave the way for a host of new statistical measures with great salience for a wide array of audiences. Internally, the debate community could benefit from development of a new set of measures and corresponding rewards associated with research outcomes. Who are the most productive individual researchers in the nation? The most original? Which debater or forensics specialist has the greatest “research impact factor” (a possible metric measuring whose arguments tend to be picked up and replicated most by others in contest round competition)? A system for tracking and publishing answers to these questions could open up a new symbolic reward economy, with potential to counter the drift toward sportification entailed in a strictly tournament-outcome-oriented reward structure. The same system could be used to track frequency and mode of source citations, yielding statistics that could answer such questions as: Which experts on nuclear weapons policy are cited most frequently in contest rounds? Which experts are cited most broadly (on a wide range of sub-topics)? When a given expert is cited by one side, which experts are most likely to be cited by the opposing side? Scholars are increasingly using similar data to document their research impact during professional reviews (see Meho, 2007). Since intercollegiate policy debate is driven by an intellectual community committed to rigorous standards of evidence analysis and argument testing, a strong case could be made that citation in that community is more meaningful than a website hit indicating that a scholar’s work product was viewed by an anonymous person browsing the Internet (this is a good example of the difference between a 3.0 and a 2.0 scholarly metric).
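The frequency and breadth questions above reduce to simple aggregate counts over a citation log. The sketch below, in Python, assumes a hypothetical log of `(expert, sub_topic, side)` tuples; the expert names and sub-topics are placeholders, and a real implementation would ingest DebateResults and caselist-wiki exports.

```python
from collections import Counter, defaultdict

# Hypothetical citation log: one (expert, sub_topic, side) entry
# per time an expert's evidence was read in a contest round.
citation_log = [
    ("Expert X", "START follow-on", "aff"),
    ("Expert X", "deterrence", "neg"),
    ("Expert X", "missile defense", "aff"),
    ("Expert Y", "deterrence", "neg"),
    ("Expert Y", "deterrence", "aff"),
]

# Which experts are cited most frequently?
frequency = Counter(expert for expert, _, _ in citation_log)

# Which experts are cited most broadly (distinct sub-topics)?
subtopics = defaultdict(set)
for expert, sub_topic, _ in citation_log:
    subtopics[expert].add(sub_topic)
breadth = {expert: len(topics) for expert, topics in subtopics.items()}

print(frequency.most_common(1))  # Expert X leads on raw frequency
print(breadth)                   # Expert X also spans more sub-topics
```

The co-citation question (which experts tend to answer a given expert) would follow the same pattern, pairing aff-side and neg-side entries from the same round before counting.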
* Publication of policy analysis. One exemplar of this mode of engagement comes from the 1992-1993 intercollegiate policy debate season, when the University of Texas extended its advocacy of a Flood Action Plan affirmative case beyond the contest round grid: “The skills honed during preparation for and participation in academic debate can be utilized as powerful tools in this regard. Using sophisticated research, critical thinking, and concise argument presentation, argumentation scholars can become formidable actors in the public realm, advocating on behalf of a particular issue, agenda, or viewpoint. For competitive academic debaters, this sort of advocacy can become an important extension of a long research project culminating in a strong personal judgment regarding a given policy issue and a concrete plan to intervene politically in pursuit of those beliefs. For example, on the 1992-93 intercollegiate policy debate topic dealing with U.S. development assistance policy, the University of Texas team ran an extraordinarily successful affirmative case that called for the United States to terminate its support for the Flood Action Plan, a disaster-management program proposed to equip the people of Bangladesh to deal with the consequences of flooding. During the course of their research, Texas debaters developed close working links with the International Rivers Network, a Berkeley-based social movement devoted to stopping the Flood Action Plan. These links not only created a fruitful research channel of primary information to the Texas team; they helped Texas debaters organize sympathetic members of the debate community to support efforts by the International Rivers Network to block the Flood Action Plan. The University of Texas team capped off an extraordinary year of contest round success arguing for a ban on the Flood Action Plan with an activist project in which team members supplemented contest round advocacy with other modes of political organizing. Specifically, Texas debaters circulated a petition calling for suspension of the Flood Action Plan, organized channels of debater input to ‘pressure points’ such as the World Bank and U.S. Congress, and solicited capital donations for the International Rivers Network. In a letter circulated publicly to multiple audiences inside and outside the debate community, Texas assistant coach Ryan Goodman linked the arguments of the debate community to wider public audiences by explaining the enormous competitive success of the ban Flood Action Plan affirmative on the intercollegiate tournament circuit. The debate activity, Goodman wrote, ‘brings a unique aspect to the marketplace of ideas. Ideas most often gain success not through politics, the persons who support them, or through forcing out other voices through sheer economic power, but rather on their own merit’ (1993). To emphasize the point that this competitive success should be treated as an important factor in public policy-making, Goodman compared the level of rigor and intensity of debate research and preparation over the course of a year to the work involved in completion of a master’s thesis” (Mitchell, 1998).
Regarding the latter engagement mode, publication of policy analysis, it is illuminating to compare the 1992-1993 Texas Flood Action Plan initiative with Justin Skarb’s recent publication of debate-related research on solar-powered satellites in The Space Review. While the work products stemming from both projects evince a level of polish and detail that is de rigueur for advocates trained in the art of policy debate, there are significant differences. One concerns representation of authorship status to external audiences: the Texas project was backed by the actual identities of the debaters and forensics specialists who worked on the development assistance topic, while the Skarb piece carried the pseudonym “John Marburry” (replete with fictitious qualifications). Although use of pen names by authors is uncommon, it is sometimes justified under special circumstances, and even celebrated in famous cases. In these exceptional instances (e.g. former CIA analyst Michael Scheuer’s publication of a book by Brassey’s as “Anonymous”), however, readers usually gain confidence that the editor knows the author’s real identity and sanctions use of the pen name for a justified reason. As Space Review editor Jeff Foust’s account attests, this did not appear to be the case in the Skarb affair:
“I added the note crediting Skarb the same day the article was originally published (April 27), after getting a request to do so from ‘Marburry’ (he said that the omission was an oversight because ‘neither of them’ were sure the article would even be published, and that if it was not possible to do so it was fine with him.) At the time I had no reason to believe that Marburry was not who he said he was, or that he was the same person as Skarb. I am waiting to hear back from Marburry/Skarb regarding this situation.” (Foust, 2009)
A second level of distinction is that the Texas project transparently links contest round research with public advocacy, drawing explicitly upon the academic debate experience to ground public claims regarding undesirability of the Flood Action Plan. In contrast, the Skarb piece is opaque with respect to its origin as a work product flowing from debate research on the 2008-2009 interscholastic alternative energy topic. The result of such opacity is a missed opportunity for Skarb to highlight the methodology of debate as constitutive of his work product, an aspect that CSIS JY suggests may be especially appealing for external audiences.
To more fully unpack this final point, it may be useful to revisit David Zarefsky’s (1972, 1979) theory of academic debate as hypothesis testing. During the heyday of policy debate’s “paradigm wars,” hypothesis testing had its share of adherents, some in the judging ranks who applied the paradigm as a tool for adjudication of individual contest rounds, and others in the debating ranks, who used the paradigm to justify certain argumentative strategies (e.g. multiple, conditional and contradictory negative counterplans).
Lost in this process of reduction was Zarefsky’s vision of academic debate as a vehicle to transport the theory and practice of argumentation to wider society (see e.g. Sillars & Zarefsky, 1975; Zarefsky, 1980). Hypothesis testing, in this wider frame, was a construct for establishing the gravitas and authority of forensics specialists in conversations about the nature of argumentation beyond the contest round setting. Here, the analogy linking debate to scientific hypothesis testing was not designed to show how debate itself was a scientific process, but rather to alert external audiences to the fact that academic debate, while deviating significantly from established patterns of scientific inquiry, features its own set of rigorous procedures for the testing of argumentative hypotheses. Skarb missed a chance to leverage his claims regarding solar power satellite policy by making a similar point, an oversight that future attempts of a similar sort might do well to bear in mind.
Alderete, T. (2009). Just musings and questions. Standards for Evidence thread. Cross-X.com website. May 13. Online at: http://www.cross-x.com/vb/showthread.php?t=992035&highlight=alderete+skarb&page=4.
CSIS JY. (2009). Nuclear policy topic paper — draft. April 23. Cross Examination Debate Association website. Online at http://topic.cedadebate.org/?q=node/11.
Foust, J. (2009). Personal correspondence with the author. May 14.
Jensen, M. (2007). The new metrics of scholarly authority. Chronicle of Higher Education, June 15. Online at: http://chronicle.com/free/v53/i41/41b00601.htm.
Jensen, M. (2008). Scholarly authority in the age of abundance: Retaining relevance within the new landscape. Keynote address at the JSTOR Annual Participating Publisher’s Conference. May 13. Online at: http://www.nap.edu/staff/mjensen/jstor.htm.
Jensen, M. (2009). Personal correspondence with the author. February 27.
Marburry, J. (2009). Space-based solar power: right here, right now? Space Review, April 27. Online at: http://www.thespacereview.com/article/1359/1.
Meho, L.I. (2007). The rise and rise of citation analysis. Physics World, January, 32-36.
Mitchell, G.R. (1998). Pedagogical possibilities for argumentative agency in academic debate. Argumentation & Advocacy, 35, 41-60.
Phillips, S. (2009). SPS article controversy. The 3NR: A Collaborative Blog about High School Policy Debate. May 11. Online at: http://www.the3nr.com/2009/05/11/sps-article-controversy/.
Sillars, M.O. & D. Zarefsky. (1975). Future goals and roles of forensics. In J.H. McBath (Ed.), Forensics as communication: The argumentative perspective (pp. 83-93). Skokie, Illinois: National Textbook Company.
Woods, C., Brigham, M., Konishi, T., Heavner, B., Rief, J., Saindon, B., & Mitchell, G.R. (2006). Deliberating debate’s digital futures. Contemporary Argumentation and Debate, 27, 81-105.
Zarefsky, D. (1972). A reformulation of the concept of presumption. Paper presented at the Central States Speech Association Convention. April 7. Chicago, Illinois.
Zarefsky, D. (1979). Argument as hypothesis-testing. In David A. Thomas (Ed.), Advanced debate: Readings in theory, practice and teaching (pp. 427-437). Skokie, Illinois: National Textbook Company.
Zarefsky, D. (1980). Argumentation and forensics. In J. Rhodes & S. Newell (Eds.), Proceedings of the summer conference on argumentation (pp. 20-25). Annandale, Virginia: Speech Communication Association.