Its debatable…Speak Up!

June 13, 2009

Gordon Mitchell on Ethics and Evidence – Repost from edebate 5-18-09

Filed under: Academics,Blogging,Debate,Pedagogy,Research,Technology — bk2nocal @ 1:23 pm

The following is a repost of an edebate post from Gordon Mitchell, Director of UPitt’s William Pitt Debating Union.  It is one of the most comprehensive and useful posts on ethics and evidence I have seen to date.  I believe this will be a big issue in debate in the coming years, and getting ahead of the curve on evaluating evidence will be helpful.

________________________________________________________________________________________

What is a legitimate source to cite as evidence in a policy debate contest round? Should forensic specialists publish material that addresses the topic area on which they are currently coaching? How can members of the policy debate community relate their simulation-based research to “real world” decision-making and analysis of relevant policy issues?

These questions about publicity and publication have received extended treatment recently on debate lists and discussion boards, with conversation sparked by specific events. On the high school level, controversy swirled in the wake of revelations that a high school coach apparently published a topic-relevant article using a pseudonym with fictitious credentials (Marburry, 2009). Then two Center for Strategic and International Studies analysts (CSIS JY, 2009, 8) successfully persuaded college debaters and forensics specialists to select nuclear weapons policy as the 2009-2010 intercollegiate policy debate topic area, in part by claiming, “there will be a demand for your expertise in the policy analysis community.”

Roughly speaking, the act of publishing entails preparing material for public uptake, and then announcing the event to facilitate circulation. For many years, this process was structured largely as an economic transaction between authors and printing press owners, with editors often serving as gatekeepers who would vet and filter material. Readers relied on markers of professionalism (quality of print and ink, circulation, reputation of editors) to judge the relative credibility of publications. In the academy, referees employed similar metrics to assess a given writer’s degree of scholarly authority, metrics that were rooted in principles of publication scarcity and exclusivity – that a scholar’s caliber was in part demonstrated by the ability to persuade editors to publish his or her work.

Acceleration of Internet communication and the advent of digital online publication destabilized these arrangements fundamentally. Publication, previously a one-to-many transaction, has become a many-to-many enterprise unfolding across a complex latticework of internetworked digital nodes. Now weblogs, e-books, online journals, and print-on-demand book production and delivery systems make it possible for a whole new population of prospective authors to publish material in what Michael Jensen (2008), National Academy of Sciences Director of Strategic Web Communications, calls an “era of content democracy and abundance.”

In content abundance, the key challenge for readers and referees has less to do with finding scarce information, and more to do with sorting wheat from the proverbial chaff (the ever-burgeoning surplus of digital material available online). The pressing nature of this information-overload challenge has spurred invention of what Jensen (2007) calls “new metrics of scholarly authority” – essentially, new ways of measuring the credibility and gravitas of knowledge producers in a digital world of content abundance.

For Jensen, traditional “authority 1.0” metrics, such as book reviews, peer-reviewed journal publications, and journal “impact factors,” are gradually being supplanted in popular culture by “authority 2.0” metrics such as Google page ranks, blog post trackbacks, and diggs. Jensen’s point is not that these new metrics of scholarly authority are necessarily superior to the old measurement tools, or that they are especially reliable or appropriate for assessing any given author’s credibility (especially in an academic context). His point is that they are developing very fast, and becoming more widespread as markers of intellectual gravitas: “Scholarly authority, the nuanced, deep, perspective-laden authority we hold dear, is under threat by the easily-computable metrics of popularity, famousness, and binary votes, which are amplified by the nature of abundance-jaded audiences” (Jensen, 2008, 25).

While Jensen (2008, 25) sees this current trend from an era of content scarcity to an era of content abundance as a “revolutionary shift,” a “cultural U-turn so extreme it’s hard to comprehend,” he also eschews determinism by stipulating that this “is a transformation we can influence.” One key avenue of influence entails invention and refinement of what Jensen calls “authority 3.0” metrics – sophisticated instruments that track and measure knowledge creation and dissemination in ways that blend traditional “authority 1.0” principles such as peer review with newfangled digital tools like Reference Finder (a National Academies Press “fuzzy matching” search tool) and Microsoft’s Photosynth.

How does this relate to the world of policy debate? Certainly the new metrics present tools for debaters to measure the credibility of online publications, a task that is becoming increasingly salient as digital material increasingly finds its way into contest rounds (see e.g. Alderete, 2009; Phillips, 2009). But there are also other connections. Jensen’s brother was a successful high school debater under Randy McCutcheon at East High School in Lincoln, Nebraska, so Jensen knows all about inherency, index cards and spewdown delivery. And in the debate community’s early efforts at collaborative online knowledge production (such as DebateResults, Planet Debate, Cross-x.com and caselist wikis), Jensen sees seeds of new metrics of scholarly authority.

Consider what takes place in a debate tournament contest round, one held under today’s conditions of digitally networked transparency. Debaters present their research on both sides of a given topic, citing evidence to support their claims. Those claims (and increasingly, the precise citations or exact performative elements supporting them) are often transcribed and then uploaded to a publicly available digital archive. The yield is a remarkably intricate and detailed map of a whole set of interwoven policy controversies falling under the rubric of the yearlong national policy debate resolution. Who cares about this? Of course debaters and forensics specialists preparing for the next tournament take interest, as the map provides a navigational tool that leverages preparation for future contests. But recall the CSIS JY (2009) pitch to college debaters and forensics specialists researching nuclear weapons policy: “There will be a demand for your expertise in the policy analysis community.” Let us reflect on how this demand could manifest, and how intercollegiate debate might meet it halfway.

* Professional training. On a most basic level, the CSIS JY “public merits” case for the nuclear weapons policy topic area is colored by the legacy of William Taylor, former vice president and now senior adviser at CSIS. Taylor created a fellowship program that brought recently graduated intercollegiate debaters to Washington, D.C. for work at his highly influential security think tank. Since 1997, a host of former debaters have utilized their debate research skills in applied policy analysis for CSIS, often on nuclear issues. Meanwhile, other former debaters have ascended to prominent posts in academia, where they often mentor scholars on nuclear policy. In this respect, debate training on nuclear policy today might result in career advancement in a research field tomorrow, where there is “demand” for the unique type of skill-set honed in the crucible of debate competition. These types of opportunities could be cultivated further through informal recruitment channels, information exchange, and perhaps development of additional fellowship programs modeled on the CSIS Taylor initiative.

* Digital debate archive (DDA) as a public research resource. With refinement (perhaps through incorporation of Django, GeNIe and SMILE web tools), online caselist wikis could be transformed into publicly accessible databases designed to provide policy-makers, journalists, and others with resources for interactive study of the nuclear weapons policy controversy. Let’s say a reporter for the Global Security Newswire is following the START arms control beat. She could visit the DDA and not only pull up hundreds of contest rounds where arms control was debated, but also click through to find out how certain teams deployed similar arguments, which citations were getting the most play, which sources were cited most frequently by winning teams, and which citations on arms control were new at the last tournament. Such post-mortem analysis of the debate process could enable non-debaters to “replay the chess match” that took place at unintelligible speed during a given contest round (Jensen, 2009; see also Woods et al., 2006).
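
To make the reporter’s scenario concrete, here is a minimal, hypothetical sketch (in Python) of the kinds of queries such an archive could support. The record layout, field names, tournaments, and sources below are invented for illustration and do not reflect any actual caselist wiki schema.

    from collections import Counter

    # Hypothetical archive records: one entry per archived contest round.
    rounds = [
        {"tournament": "Tournament 1", "winner": "Team A",
         "subtopic": "arms control", "citations": ["Source X (2009)", "Source Y (2008)"]},
        {"tournament": "Tournament 2", "winner": "Team B",
         "subtopic": "arms control", "citations": ["Source X (2009)", "Source Z (2009)"]},
    ]

    def rounds_on(subtopic):
        """Pull up every archived round in which a given sub-topic was debated."""
        return [r for r in rounds if r["subtopic"] == subtopic]

    def most_cited(subtopic, n=3):
        """Which citations on this sub-topic are getting the most play?"""
        counts = Counter()
        for r in rounds_on(subtopic):
            counts.update(r["citations"])
        return counts.most_common(n)

    def new_at(subtopic, latest_tournament):
        """Which citations on this sub-topic appeared for the first time at the most recent tournament?"""
        earlier = {c for r in rounds_on(subtopic)
                   if r["tournament"] != latest_tournament for c in r["citations"]}
        latest = {c for r in rounds_on(subtopic)
                  if r["tournament"] == latest_tournament for c in r["citations"]}
        return latest - earlier

    print(most_cited("arms control"))              # Source X (2009) leads with two citations
    print(new_at("arms control", "Tournament 2"))  # {'Source Z (2009)'}

A production version would of course sit on top of a real database and the caselist wikis themselves; the point is only that the questions the reporter would ask map onto very simple queries once round-level citation data is archived in a structured way.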

* Authority 3.0 metrics. The marriage of a DDA with Jon Bruschke’s ingenious DebateResults online resource could pave the way for a host of new statistical measures with great salience for a wide array of audiences. Internally, the debate community could benefit from development of a new set of measures and corresponding rewards associated with research outcomes. Who are the most productive individual researchers in the nation? The most original? Which debater or forensics specialist has the greatest “research impact factor” (a possible metric identifying the persons whose arguments tend to be picked up and replicated most by others in contest round competition)? A system for tracking and publishing answers to these questions could open up a new symbolic reward economy, with potential to counter the drift toward sportification entailed in a strictly tournament-outcome oriented reward structure. The same system could be used to track frequency and mode of source citations, yielding statistics that could answer such questions as: Which experts on nuclear weapons policy are cited most frequently in contest rounds? Which experts are cited most broadly (on a wide range of sub-topics)? When a given expert is cited by one side, who are the experts most likely to be cited by the opposing side? Scholars are increasingly using similar data to document their research impact during professional reviews (see Meho, 2007). Since intercollegiate policy debate is driven by an intellectual community committed to rigorous standards of evidence analysis and argument testing, a strong case could be made that citation in that community is more meaningful than a website hit indicating that a scholar’s work product was viewed by an anonymous person browsing the Internet (this is a good example of the difference between a 3.0 and a 2.0 scholarly metric).
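
To illustrate, here is a minimal, hypothetical sketch (in Python) of how a few of these citation statistics could be computed. The record format, experts, and sub-topics are invented for illustration and do not correspond to any actual DebateResults or caselist data.

    from collections import Counter, defaultdict

    # Hypothetical citation log: which side cited which expert, on which sub-topic, in which round.
    citations = [
        {"round": 1, "side": "AFF", "expert": "Expert A", "subtopic": "deterrence"},
        {"round": 1, "side": "NEG", "expert": "Expert B", "subtopic": "deterrence"},
        {"round": 2, "side": "AFF", "expert": "Expert A", "subtopic": "arms control"},
        {"round": 2, "side": "NEG", "expert": "Expert C", "subtopic": "arms control"},
    ]

    def most_cited(n=3):
        """Which experts are cited most frequently in contest rounds?"""
        return Counter(c["expert"] for c in citations).most_common(n)

    def most_broadly_cited():
        """Which experts are cited across the widest range of sub-topics?"""
        spread = defaultdict(set)
        for c in citations:
            spread[c["expert"]].add(c["subtopic"])
        return sorted(spread.items(), key=lambda kv: len(kv[1]), reverse=True)

    def answered_by(expert):
        """When a given expert is cited by one side, who does the opposing side tend to cite?"""
        answers = Counter()
        for c in (x for x in citations if x["expert"] == expert):
            for other in citations:
                if other["round"] == c["round"] and other["side"] != c["side"]:
                    answers[other["expert"]] += 1
        return answers.most_common()

    print(most_cited())             # Expert A leads with two citations
    print(most_broadly_cited())     # Expert A spans two sub-topics
    print(answered_by("Expert A"))  # Experts B and C appear on the opposing side

The interesting design questions (how to weight winning teams, how to normalize across tournaments, how to distinguish a card read once from a card extended through the rebuttals) are left open here; the sketch only shows that the raw counts are straightforward once the underlying data is collected.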

* Publication of policy analysis. One exemplar of this mode of engagement comes from the 1992-1993 intercollegiate policy debate season, when the University of Texas extended its advocacy of a Flood Action Plan affirmative case beyond the contest round grid: “The skills honed during preparation for and participation in academic debate can be utilized as powerful tools in this regard. Using sophisticated research, critical thinking, and concise argument presentation, argumentation scholars can become formidable actors in the public realm, advocating on behalf of a particular issue, agenda, or viewpoint. For competitive academic debaters, this sort of advocacy can become an important extension of a long research project culminating in a strong personal judgment regarding a given policy issue and a concrete plan to intervene politically in pursuit of those beliefs. For example, on the 1992-93 intercollegiate policy debate topic dealing with U.S. development assistance policy, the University of Texas team ran an extraordinarily successful affirmative case that called for the United States to terminate its support for the Flood Action Plan, a disaster-management program proposed to equip the people of Bangladesh to deal with the consequences of flooding. During the course of their research, Texas debaters developed close working links with the International Rivers Network, a Berkeley-based social movement devoted to stopping the Flood Action Plan. These links not only created a fruitful research channel of primary information to the Texas team; they helped Texas debaters organize sympathetic members of the debate community to support efforts by the International Rivers Network to block the Flood Action Plan. The University of Texas team capped off an extraordinary year of contest round success arguing for a ban on the Flood Action Plan with an activist project in which team members supplemented contest round advocacy with other modes of political organizing. Specifically, Texas debaters circulated a petition calling for suspension of the Flood Action Plan, organized channels of debater input to ‘pressure points’ such as the World Bank and U.S. Congress, and solicited capital donations for the International Rivers Network. In a letter circulated publicly to multiple audiences inside and outside the debate community, Texas assistant coach Ryan Goodman linked the arguments of the debate community to wider public audiences by explaining the enormous competitive success of the ban Flood Action Plan affirmative on the intercollegiate tournament circuit. The debate activity, Goodman wrote, ‘brings a unique aspect to the marketplace of ideas. Ideas most often gain success not through politics, the persons who support them, or through forcing out other voices through sheer economic power, but rather on their own merit’ (1993). To emphasize the point that this competitive success should be treated as an important factor in public policy-making, Goodman compared the level of rigor and intensity of debate research and preparation over the course of a year to the work involved in completion of a master’s thesis” (Mitchell, 1998).

Regarding the latter engagement mode, publication of policy analysis, it is illuminating to compare the 1992-1993 Texas Flood Action Plan initiative with Justin Skarb’s recent publication of debate-related research on solar-powered satellites with Space Review. While the work products stemming from both projects evince a level of polish and detail that is de rigueur for advocates trained in the art of policy debate, there are significant differences. One concerns representation of authorship status to external audiences, with the Texas project backed by the actual identities of the debaters and forensics specialists who worked on the development assistance topic, and the Skarb piece carrying the pseudonym “John Marburry” (replete with fictitious qualifications). Although use of pen names by authors is uncommon, it is sometimes justified under special circumstances, and even celebrated in fantastic cases. However, in these exceptional instances (e.g. former CIA analyst Michael Scheuer’s publication of a book by Brassey’s as “anonymous”), usually readers gain confidence that the editor knows the author’s real identity, and sanctions use of a pen name for a justified reason. As Space Review editor Jeff Foust’s account attests, this did not appear to be the case in the Skarb affair:

“I added the note crediting Skarb the same day the article was originally published (April 27), after getting a request to do so from ‘Marburry’ (he said that the omission was an oversight because ‘neither of them’ were sure the article would even be published, and that if it was not possible to do so it was fine with him.)  At the time I had no reason to believe that Marburry was not who he said he was, or that he was the same person as Skarb.  I am waiting to hear back from Marburry/Skarb regarding this situation.” (Foust, 2009)

A second level of distinction is that the Texas project transparently links contest round research with public advocacy, drawing explicitly upon the academic debate experience to ground public claims regarding the undesirability of the Flood Action Plan. In contrast, the Skarb piece is opaque with respect to its origin as a work product flowing from debate research on the 2008-2009 interscholastic alternative energy topic. The result of such opacity is a missed opportunity for Skarb to highlight the methodology of debate as constitutive of his work product, an aspect that CSIS JY suggests may be especially appealing for external audiences.

To more fully unpack this final point, it may be useful to revisit David Zarefsky’s (1972, 1979) theory of academic debate as hypothesis testing. During the heyday of policy debate’s “paradigm wars,” hypothesis testing had its share of adherents, some in the judging ranks who applied the paradigm as a tool for adjudication of individual contest rounds, and others in the debating ranks, who used the paradigm to justify certain argumentative strategies (e.g. multiple, conditional and contradictory negative counterplans).

Lost in this process of reduction was Zarefsky’s vision of academic debate as a vehicle to transport the theory and practice of argumentation to wider society (see e.g. Sillars & Zarefsky, 1975; Zarefsky, 1980). Hypothesis testing, in this wider frame, was a construct for establishing the gravitas and authority of forensics specialists in conversations about the nature of argumentation beyond the contest round setting. Here, the analogy linking debate to scientific hypothesis testing was not designed to show how debate itself was a scientific process, but rather to alert external audiences to the fact that academic debate, while deviating significantly from established patterns of scientific inquiry, features its own set of rigorous procedures for the testing of argumentative hypotheses. Skarb missed a chance to leverage his claims regarding solar power satellite policy by making a similar point, an oversight that future attempts of a similar sort might do well to bear in mind.

REFERENCES

Alderete, T. (2009). Just musings and questions. Standards for Evidence thread. Cross-X.com website. May 13. Online at: http://www.cross-x.com/vb/showthread.php?t=992035&highlight=alderete+skarb&page=4

CSIS JY. (2009). Nuclear policy topic paper — draft. April 23. Cross Examination Debate Association website. Online at: http://topic.cedadebate.org/?q=node/11.

Foust, J. (2009). Personal correspondence with the author. May 14.

Jensen, M. (2007). The new metrics of scholarly authority. Chronicle of Higher Education, June 15. Online at: http://chronicle.com/free/v53/i41/41b00601.htm.

Jensen, M. (2008). Scholarly authority in the age of abundance: Retaining relevance within the new landscape. Keynote address at the JSTOR Annual Participating Publisher’s Conference. May 13. Online at: http://www.nap.edu/staff/mjensen/jstor.htm.

Jensen, M. (2009). Personal correspondence with the author. February 27.

Marburry, J. (2009). Space-based solar power: right here, right now? Space Review, April 27. Online at: http://www.thespacereview.com/article/1359/1.

Meho, L.I. (2007). The rise and rise of citation analysis. Physics World, January, 32-36.

Mitchell, G.R. (1998). Pedagogical possibilities for argumentative agency in academic debate. Argumentation & Advocacy, 35, 41-60.

Phillips, S. (2009). SPS article controversy. The 3NR: A Collaborative Blog about High School Policy Debate. May 11. Online at: http://www.the3nr.com/2009/05/11/sps-article-controversy/

Sillars, M.O. & D. Zarefsky. (1975). Future goals and roles of forensics. In J.H. McBath (Ed.), Forensics as communication: The argumentative perspective (pp. 83-93). Skokie, Illinois: National Textbook Company.

Woods, C., Brigham, M., Konishi, T., Heavner, B., Rief, J., Saindon, B., & Mitchell, G.R. (2006). Deliberating debate’s digital futures. Contemporary Argumentation and Debate, 27, 81-105.

Zarefsky, D. (1972). A reformulation of the concept of presumption. Paper presented at the Central States Speech Association Convention. April 7. Chicago, Illinois.

Zarefsky, D. (1979). Argument as hypothesis-testing. In David A. Thomas (Ed.), Advanced debate: Readings in theory, practice and teaching (pp. 427-437). Skokie, Illinois: National Textbook Company.

Zarefsky, D. (1980). Argumentation and forensics. In J. Rhodes & S. Newell (Eds.), Proceedings of the summer conference on argumentation (pp. 20-25). Annandale, Virginia: Speech Communication Association.

March 16, 2009

Back from Birth and PDF to Word Converter

Filed under: Debate,NFA LD,Research,Technology — bk2nocal @ 5:08 pm

So, I was due to have a baby on April 12, but she decided to arrive about seven weeks early!  Mackenzie Claire was born on February 19 at 32 weeks and 4 days…I was in the hospital for about a week and she was in for three, but we are both at home and doing pretty well now.  Being more-or-less without tech for a number of weeks made me realize how much I appreciate it!  So, I thought I would share a tech idea with you all today!

For those of you who, when cutting evidence, find it totally frustrating when a PDF will not transfer to Word for your purposes, I have found this free PDF to Word converter.  I have not tried it yet, but it comes recommended by CSU Chico’s Technology & Learning Blog, so it should work pretty well.
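
For those of you comfortable with a little Python, here is a rough do-it-yourself alternative as well. It is only a sketch using the pypdf library (not the converter linked above, which I still have not tried), and it assumes the PDF contains selectable text rather than scanned page images.

    # Minimal sketch: dump a PDF's text into a plain-text file that Word can open.
    from pypdf import PdfReader

    def pdf_to_text(pdf_path, txt_path):
        """Extract all the text in a PDF and write it to a .txt file."""
        reader = PdfReader(pdf_path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        with open(txt_path, "w", encoding="utf-8") as out:
            out.write(text)

    # Example (hypothetical file names):
    # pdf_to_text("topic_article.pdf", "topic_article.txt")

You lose the formatting this way, but for cutting cards the raw text is usually what you want anyway.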

Enjoy!  And good luck to everyone at the various national tournaments coming up in the next few weeks!

June 27, 2008

Series: Web 2.0 for Forensics – Part I

I’ve been trying to incorporate a few more Web 2.0 programs into my academic life, and this has led me to consider the ways these same programs can be used for forensics.  So, I am going to start brainstorming ideas for using different tech to make our forensics lives easier and turn them into a series of blog posts.  I’m sure that many of these are already being used by those who are more advanced in the Web 2.0 experience than I am, but hopefully this may spark some ideas for expanding your technological helpers for forensics.  Please feel free to post any additional items in the comments section.  The series will continue on a weekly-or-so basis, and as other items strike my fancy!

This first blog in the series will include wikis, facebook and del.icio.us.

WIKIS

I began using a wiki in my Argumentation and Debate class last semester to collect the evidence that students turned in.  I had them turn in the evidence on a wiki page with their name on it.  This allowed me to collect evidence without having to carry around a bunch of papers, to make corrections to the materials electronically, and to be sure that they were completing the evidence assignments.  In addition, the students could search through all of the evidence from the class using the “search” function on the wiki.  So, when they were constructing affirmatives and negatives, they could easily do word searches on the topic they were working on and get all the different evidence found by their classmates.

I am also starting a wiki for our team.  This will be a clearinghouse of information, where I can post tournament invitations, articles for debate or speech topic ideas, results from tournaments, pictures from tournaments, etc.  Individuals on the team can have access to add things themselves.  It is so much easier than having a file cabinet in my office or an in-basket, since everyone has immediate access from wherever they are.  I think this will make things much easier on me and the students.

FACEBOOK

I was late coming to Facebook.  In all honesty, I avoided it like the plague for the past few years.  But, I am a convert.  I am convinced that this is the new email.  The listservs of the 90s changed the face of forensics, with national participants able to communicate with everyone else in the nation in one message and get quick responses.  Facebook allows that same level of communication, but adds so much more of a personalized exchange and a way to reach those who don’t even know you exist.  I am going to focus on using Facebook as a recruiting and PR tool, because that has been my experience with it so far.

Facebook is one of the most popular social networking programs in the world.  If someone isn’t on Facebook at this point, they probably will be in the next five years.  One of the first things I did when I got on Facebook was form a group for “Past and Present Members of CSU Chico Forensics” and invite everyone I knew who was on or had been on the team in the past.  From there, they informed their friends and others requested membership.  Now, I have a single location to post information and requests for alumni whenever I have something.  In addition, I have been contacted by incoming freshmen who found the group and are interested in joining the team when they get here in September.  It’s an easy way to get out information that used to require a ton of posters and flyers and visits to classrooms, etc.  I look forward to using Facebook as a PR tool next year as well.

DEL.ICIO.US

If you have not used del.icio.us, you have probably seen it at the bottom of an article or blog you have read.  It is a tool that appears across the web and gives you and your students an easy way to collect information.  It is a “social bookmarking” program that allows one person to bookmark articles and then make those bookmarked articles available to a group of people.  The program uses “tags” to identify the important information in the article (answering the “why did you bookmark this article?” question), so you can search by tags and find all the pertinent articles on that subject.  Using del.icio.us, you and your students can create a “webliography” of speech topics or debate topic articles that is then easily accessible to everyone on the team.
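
If it helps to see the idea stripped down, here is a tiny, purely hypothetical sketch in Python of what a tag-based “webliography” amounts to.  It does not talk to del.icio.us itself (or use its API); it just illustrates the tagging concept, and the sample entry is made up.

    # Sketch of the webliography idea: articles saved under tags, pulled back up by tag.
    from collections import defaultdict

    webliography = defaultdict(list)  # tag -> list of (title, url) pairs

    def save(title, url, tags):
        """Bookmark an article under one or more tags (the 'why did I save this?')."""
        for tag in tags:
            webliography[tag].append((title, url))

    def by_tag(tag):
        """Everything the team has collected under a given tag."""
        return webliography[tag]

    save("Sample alternative energy article", "http://example.com/article",
         ["debate-topic", "alternative-energy"])
    print(by_tag("alternative-energy"))

Del.icio.us handles all of this for you, of course, plus sharing across the whole team; the sketch is just the concept in miniature.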

I have to admit I have not used del.icio.us much, but I just read a blog on using it as a learning tool and it inspired me to consider using it for the team this semester.

There are a TON of different tools out there for incorporating web 2.0 into education and therefore forensics.  I think the key is to consider a few things before starting to use any of these tools:

(1)  What is this going to SAVE me having to do in the future?  If the answer is nothing, then it may not be worth it.  After all, we all have way too much to do to be adding things on to that list.  But, if it’s going to save you some time and effort in the future (e.g. using the wiki to post invitations saves printing, copying, etc. of schedules for the students – they can just log on and get it themselves whenever they want – all I have to do is post a link), then it’s worthwhile to learn a new skill or introduce a new routine.

(2)  How difficult is this going to be to use?  Is this something you or your students are already using for other purposes?  So, Facebook makes sense to me versus finding another social networking program because most of my students are already there, most of my recruits will be on there, and many of my colleagues are/will be on there.  So, why use a different program that requires an additional logon, an additional post, and learning new methods of posting, groups, etc.?

(3)  Is this really adding value?  Sometimes I tend to use tech for tech’s sake.  I’m just fascinated by new things, and since I can remember a time when most people didn’t own a computer, I am amazed at the access to information and different gadgets/programs we now have.  But, I often have to ask myself whether what I’m doing is really adding value to my life/academic experience or whether it is just something that is catching my eye.  I guess this is kind of the same as #1, but I think of it more as asking if it adds something of value to my life.  So, even if it doesn’t save me having to do something, if it’s something I find enjoyable or attractive or fun, I am more likely to continue doing it in the future.  If it doesn’t do any of that for me, then I’m probably going to spend a bunch of time learning how to use it and then not come back to it often enough to make it worth my while.

Look for Part II, where I’ll go googly over Google – Docs, Reader and Calendar!

January 22, 2008

Flight Tracking Resource

Filed under: Forensics - General,Technology,Travel — bk2nocal @ 9:10 pm

Sorry for the slow start in the new year.  Lots of stuff going on and that whole vacation thing is happening.  But, I’m going back to class and back to a (hopefully) more regular posting schedule next week.

I came across a blog entry on Ian’s Messy Desk about this Google resource for tracking flights.  It could come in handy for those trips when your whole team is not on the same flight and you have to make multiple trips to the airport.  And you can do the search on your phone if you have internet!

Tech is good!  Sometimes…

December 14, 2007

Tech Tool for Flowing?

Filed under: Debate,Technology — bk2nocal @ 6:49 pm

I came across this high-tech pen over at think:lab and immediately thought of uses for it in debate.  Imagine being able to tap your flow and not only re-hear what the speaker had said, but even play it back at a lower speed.  As a judge, I see this coming in mighty handy on those high speed theory duels.  I don’t know how the recording device would pick up the high speed speaking in a debate round, but it’s fun to dream about!
