A. J. Sherman
_____
Schools for Scandal
A recent, almost casual trawl through the Net yielded between fourteen
and fifteen million sites containing the words “academic integrity,”
some 53,000 for “academic faculty wrongdoing,” and 45,500
for a special subset of wickedness, “historians’ misconduct.”
This almost inconceivable outpouring includes books, articles, speeches,
reviews, blogs ranging from the learned to the lunatic, every kind of
utterance on what is clearly a vast subject. But even allowing for inevitable
duplication, for simple nonsense, this enormous flood of words compels
astonishment, and some attempt at preliminary understanding if not comprehensive
explanation. Why all this attention to “academic integrity”
and “faculty wrongdoing,” and what do we mean by those phrases?
Why all that interest precisely now? What might conceivably be going on
behind those ivied walls? Is problematic or frankly unethical behavior
altogether rampant in the academic community?
The proliferation of disciplinary and investigative
bureaucracies within the academic world suggests that misconduct among
both faculty and students may in fact be quite widespread. There is by
now hardly an academic institution, however small or insular, that has
not established a more or less elaborate machinery for examining wrongdoing
on campus, ranging from plagiarism or falsification of data and other
violations of research or teaching norms, to sexual or other harassment,
and on to further criminal or otherwise disreputable breaches of campus
or government regulations. As Vice Presidents and sub-Deans for Academic
Integrity clone themselves, multiplying according to the iron logic of
bureaucratic expansion, the sheer number of these individuals and the
committees charged with examining questionable behavior suggests that
the fundamental integrity of research and learning, for which universities
and colleges were after all established, is very much under siege.
In confidence, some within the academy will
admit as much. Even those faculty members without formal responsibility
for such issues struggle mightily with questions of misconduct: how it
is to be defined, how pervasive it is, what its consequences should be
for both the perpetrators and the institution. There is scarcely a batch
of routine student papers, to take but one example, in which there may
not arise questions about false claims of research, deliberate or negligent
misuse of sources, fabrication of evidence, or unabashed purchase or downloading
of whole texts created by others, complete with bibliographies and footnotes.
Each grade handed out, each paper assigned, may bring with it the burden
of further investigation, argumentative appeals to this dean or that,
the unsubtle evidence that certain student consumers are chronically aggrieved,
if not potentially litigious. Even faculty spared the more punishing experience
of calling students to account for misbehavior are often compelled
to observe among their own peers questionable research practices, inappropriate
or downright larcenous use of university resources, varieties of exploitation
or discrimination, or other, occasionally florid, forms of mischief. At
least some of the strain academics chronically complain of arises from
the frequent pressure to adjudicate, or at least investigate, suspected
malfeasance and other questionable actions or utterances. The
sheer amount of time and effort devoted to these quasi-judicial proceedings
sometimes gives the university a peculiar atmosphere more appropriate
to a courthouse or attorney general’s office than a place of learning.
Moreover, in addition to university bodies, numerous off-campus institutes,
professional associations, foundations, and task forces proliferate to
examine academic misconduct, all attempting to formulate a code of acceptable
standards, regulations for investigating lapses, a uniform and fair procedure
for the regular investigation and disposition of individual cases. The
entire subject is indeed so complex, so intertwined with larger societal
concerns, that it obviously eludes examination within the compass of a
few pages; my attempt here is merely to focus attention on one aspect
of it, the serious trouble in which the historical profession now appears
to find itself. How extensive, in fact, is professional misconduct in
historical research and writing, and why are historians in particular
so much in the public eye these days? We have seen some well-publicized
cases of historians’ wrongdoing, ethical lapses ranging from frank
plagiarism to subtle or crude falsification of data; but as a recent discussion
in American Scientist ominously puts it, “we would likely
be misled to think that the rate at which problems are publicly reported
represents their actual frequency.” Anyone who has followed the
media in the past several years is aware of a series of scandals, significant
episodes of impropriety, involving in some cases very well known historians.
There is an almost ritual quality to the
scenario, as if all the actors were reading from a wearily familiar script.
A writer, often an academic, perhaps one of those “hybrid historians”
or biographers unabashedly on the fringes of show business, is caught
red-handed by some diligent researcher in the brazen, often extensive
theft of work created by others. This illicit taking, usually of specific
language but often too of major ideas or approaches to a subject, varies
from clumsy or adroit paraphrase to large-scale verbatim reproduction—but
it invariably amounts to plundering, without any form of acknowledgment,
texts published by other authors. The subsequent exposure, which can be
spectacular, is more likely to be featured in public media than in professional
or academic publications, and frequently follows some publicity-generating
event such as the award of a prize or the publication of one or more lavishly
favorable reviews. The more media-savvy and prominent the author, the
greater the brouhaha; indeed, some publishers cynically relish the free
publicity, the fuss and outrage that stimulate sales, perhaps to segments
of the public that might ordinarily be indifferent to the offending book
but are attracted by scandal spun as “controversy.” Those
writers stolen from, most of them in tandem with their publishers, will
in their turn play predictable roles, expressing in familiar terms their
shock, incredulity, and anger. On occasion, these public splutterings
are muted by express or implied threats of litigation by the perpetrator,
acting in the classic manner of a shameless wrongdoer who, fired after
being caught at the till, sues his employer for unjust dismissal and
loss of reputation. The scenario then requires that when presented with
irrefutable evidence, the writer guilty of poaching others’ work
at first asserts that the unacknowledged copying was minor, reflecting
haste or work overload, a mere irrelevance in composing a large book invariably
characterized as “major” or “important.” A further
wheeze is that any “mistakes” made were entirely the fault
of underlings, that army of researchers, document handlers, assistants
of all kinds who often are required to bring those “major”
and “important” books into the world. It can be taken for
granted that the more prominent an author, the more dismissive and condescending
he or she will be—as if too grand to be troubled, even briefly,
by some tiresome minor hack of the sort who might claim to have discovered
petty larceny in, say, a work on the order of War and Peace.
Careful research, acknowledgment of intellectual debts, fact checking,
accurate citations—these are things for little people.
When the filcher’s airy dismissiveness
and self-exculpation prove inadequate, and it becomes evident that serious
wrongdoing is being alleged, often very well-grounded charges of wrongful
appropriation, of—dread word—plagiarism, are without
exception dismissed as unfair, exaggerated, biased, or inaccurate, and
in any event inspired by envious, far lesser minds. As the clamor mounts,
and the publication of parallel passages demonstrates that actual theft
is indeed involved, the cribbing author then adopts the standard excuse
that a few errors may possibly have been made, that these were merely
inadvertent or unconscious, due to haste or the incompetence of others,
but that they were obviously blown out of all proportion by niggling critics
and in sum constituted perhaps half a dozen trivial oversights, nothing
to arouse excitement, let alone any comprehensive critical reappraisal
of reputation. These “oversights” are sometimes then deemed
adequately corrected by hastily inserted notes, or some form of citation
of sources, that may appear in paperback or subsequent editions. More
rarely, there may be some formula of explicit acknowledgment, or a public
or private apology to the writer whose work has been illicitly mined.
And then the controversy dies down, the journalists depart, the contemporary
attention span has reached its always brief limits. At this point, the
perpetrator may go on with lamentations and self-justifications, claiming
persecution, usually on the internet or in some other public forum. Indeed,
the congenial public role of suffering innocence, of offended virtue,
has occupied more than one writer accused of plagiarism for a period exceeding
the shelf life of the offending book. More often, the offending author
just subsides into a period of more or less embarrassed silence and then,
as we say, moves on. Seldom are there meaningful professional or even
social repercussions.
A quite recent case appears effortlessly
to follow the classic script. “E. E. Cummings Scholar Is Accused
of Plagiarism,” announces the New York Times, echoing an
article by a contributing editor of Harper’s Magazine,
who in his turn has declared that a “major biography” of the
poet, published just last year, “is jammed with instances of wholesale
borrowing—not only of research but of storytelling and language”
from an earlier biography written by another author some twenty-five years
ago. Christopher Sawyer-Lauçanno, the author of the current biography,
which of course received favorable notice in Kirkus Reviews and
Library Journal, counters in his turn that he “obviously
missed some places that should have been documented,” but asserts
that these lapses were not conscious, and consequently accusations of
plagiarism are “inaccurate and unfair.” His publisher thereupon
announces, on a rather defensive note, that “there is a gross disparity
between a few mistakes in citation and the allegation of plagiarism,”
and helpfully reminds the reading public that the new biography is to
be distinguished from its predecessor because it has drawn on important
original material previously sequestered by the Cummings estate. The accused
author, in concluding his apologia to the Times, states that
his frequent acknowledged citations from the work which he allegedly copied
prove that he did not intentionally plunder from the earlier Cummings
biography. With a familiarity that must have been altogether wearying
to the Times editorial staff, he then slavishly follows standard form
by repeating that his unacknowledged use of the earlier scholar’s
work was “an oversight” and “not plagiarism.”
Déjà vu, all over again. Or, as the curmudgeonly commentator Florence
King wrote in the April 2002 issue of National Review, “plagiarism,
again. It seems to happen in waves, like high-school shootings. Three
or four in a row, then a lull, then another batch, until they all run
together in the mind and it becomes hard to tell one from the other.”
But plagiarism—which the Oxford English Dictionary defines
in an old-fashioned way as “the wrongful appropriation or purloining,
and publication as one’s own, of the ideas, or the expression of
the ideas . . . of another,” and the Modern Language Association
more broadly describes as the failure “to give appropriate acknowledgment
when repeating another’s wording or particularly apt term, paraphrasing
another’s argument, or presenting another’s line of thinking”—is
only one of the ways in which the integrity of intellectual discourse
is routinely undermined at present.
In this regard, the egregious Ward Churchill
case is instructive in several respects. Its principal actor, formerly
Associate Chair of Ethnic Studies at the University of Colorado, was hired
and granted tenure apparently without having received the doctorate usually
required for such appointments. In advancing his academic career, Churchill
presented himself as both a member of the Cherokee Nation and a combat
veteran in Vietnam, but neither claim has been substantiated by public
records. Allegations of misconduct in research, plagiarism in both writing
and art, and an arrest record dogged him for years, but University of
Colorado authorities were compelled to undertake an investigation only
when, shortly after the events of 9/11, Churchill chose to characterize
the victims of the World Trade Center terrorist attack as “little
Eichmanns.” He later explained that he meant not the waiters and
janitors, but the corporate executives, the suits who were part of the
global capitalist machinery, and added “more 9/11’s are necessary.”
After predictable revulsion, and a firestorm in the media, the Chancellor
of the University concluded after an initial examination of Churchill’s
record that “allegations of research misconduct, related to plagiarism,
misuse of others’ work and fabrication, have sufficient merit to
warrant further inquiry.” Churchill made a vigorous counterattack,
raising the cry of “academic freedom” and professing his membership
in at least two authentic victim groups—Native Americans and Vietnam
combat veterans—and the Chancellor of the University subsequently
announced that Churchill’s public statements, no matter how “repugnant
and hurtful,” had been judged to be protected speech under the First
Amendment. Questions raised about “Professor Churchill’s possible
misrepresentation of his ethnicity in order to gain employment advantage”
had, according to the Chancellor, been reviewed in 1994, “resulting
in a finding of no action warranted.” As for a remaining cluster
of issues regarding “research misconduct and failure to meet standards
of professional integrity,” these were referred to “the Boulder
campus Standing Committee on Research Misconduct for further investigation.”
Additional charges in this case have been presented to that committee,
which has not yet reached a determination; nor is it clear what exactly
the consequences of a finding of serious impropriety might be. In the
meantime, though several of Churchill’s speaking appearances have
been cancelled and he has stepped down as head of the Ethnic Studies department,
he retains his tenured position at the University of Colorado and continues
to publish, teach, and speak to large and enthusiastic audiences willing
to pay his quite substantial fees.
With the case of Michael Bellesiles and
the tremendous uproar over his book Arming America, published
in 2000, we were confronted with a different phenomenon: under the
appearance of disinterested scholarship, the conscious alteration of the
past and the manipulation of evidence to advance a cause or theory with
political relevance in the present. The Bellesiles thesis was fundamentally
that America in revolutionary times had little interest in or affection
for guns; that wide ownership of guns was a post–Civil War development,
and that the tradition surrounding the Second Amendment and citizens’
rights to bear arms is exaggerated and of recent invention. Bellesiles
asserted, on the basis of what appeared to be a definitive study of probate
records from 1765 to 1790, that only about 14 percent of American households
possessed guns, and that a well-armed and trained civilian militia at
that historical moment was simply a myth. Bellesiles further argued that
his research had been without a prior agenda and purely historical, pursued
without relevance to present-day debates over gun control or the Second
Amendment—though he surely knew that an allegedly gun-free America
in the era of the founders would have tremendous implications for the
heated controversy in our time over the right to bear arms. Not surprisingly,
virtually all reviewers were lavish in their praise for Arming America,
agreeing that Bellesiles had, despite his disclaimer regarding contemporary
relevance, made a signal contribution to the argument over the myth and
reality of American gun culture, and that his statistical findings were
a valuable and skilled demonstration of how quantitative evidence should
be used in the writing of history.
In 2001 Bellesiles was awarded the prestigious
Bancroft Prize in American History, and his book was widely sold, acclaimed
for its apparently conclusive proof that “guns and violence were
neither inevitable nor immutable features of American culture.”
Almost immediately, the author became the target of often vituperative
ad hominem criticism from the gun lobby and its many supporters; at the
same time, a growing crowd of suspicious amateur sleuths began to pick
apart his thesis. Websites and press reports initially exposed his manifest
ignorance of eighteenth-century firearms technology, and
laid bare a disturbing pattern of sweeping and unfounded generalizations
throughout Arming America. And then both professional and amateur
historians discovered and publicized significant flaws in his evidence:
his vaunted statistics, the mainstay of his thesis, were mathematically
unlikely, and could not in any case be reconciled with the relevant archival
evidence he claimed to have consulted. Despite Bellesiles’s vigorous
self-defense and his claim that he had unfairly been singled out for criticism,
not least in the public sphere, he was ultimately compelled to resign
from his professorship of history at Emory University. In due course,
the Trustees of Columbia University stripped him of the Bancroft Prize,
and his publisher, Knopf, terminated his contract and ceased selling Arming
America—but not before the book had sold some sixteen thousand
copies in paperback and eight thousand in hardback, rare numbers for any
scholarly publication. In the end it appeared conclusively that Bellesiles had been
so eager to challenge the reigning myths of American gun culture that
he deliberately distorted evidence to present a counter-myth of his own.
When caught out, he resorted to increasingly feeble and tendentious excuses,
and ultimately tangled himself in a self-woven web of embarrassing inconsistencies
and apparent lies. Nor, once professional historians were mobilized to
scrutinize his work, could Bellesiles plausibly present himself as the
distinguished academic historian victimized by ignorant and partisan cyberspace
warriors.
But this is America, where even the most
infamous wrongdoing is often forgiven, especially among the prominent,
where second acts and comeback appearances are routine and there is an
insatiable appetite for tales of sin and provisional redemption. This
is, above all, a country and time in which memories and attention spans
are notoriously short. In this general cultural context, the perhaps inevitable
rehabilitation of a stoutly unrepentant Bellesiles has taken several forms:
he has again been invited to publish book reviews in scholarly journals,
and he has signed a contract with a well-known university press to produce
a book on violence in America. In addition, Arming America remains
in print and available on Amazon.com and elsewhere; Soft Skull Press has
published a revised paperback with an exculpatory preface by Bellesiles,
allowing him to defend himself against accusations which he sees as focused
primarily on “three paragraphs and one table” in the original
edition. (The web advertising for the Soft Skull revision continues to
feature blurbs—from Kirkus, Booklist, and Garry Wills—referring
to the earlier Knopf edition.) Moreover, Bellesiles has become a regular
contributor to History News Service, a syndicate of historians who promote
each other’s published comments on the news “to improve the
public’s understanding of current events by setting these events
in their historical contexts.” Ranging far from American colonial
history, Bellesiles has now offered his reflections on, among other subjects,
American military policy in the Middle East and Congressional regulation
of Middle East studies. According to the co-director of History News Service,
Bellesiles “remains a historian” no matter what he did “and
was held responsible for doing.” But what exactly does “held
responsible” actually mean in the present dispensation?
That dispensation, the expression of our
regnant culture, as David Callahan reminds us in his bleak The Cheating
Culture, is rife with cheating of all kinds, at all levels of a society
that Callahan perceives as dominated by a harsh social Darwinism
expressed in “a virulent new strain of consumerism.” Members
of what Callahan calls “The Winning Class” feel both individually
and collectively that they can with impunity do whatever it takes to become
a “star achiever,” whether that involves actual illegality
or merely bending the rules. Members of the inevitably much larger “Anxious
Class” look on with envy at the success of those who seem to have
grabbed it all: the graduates of the “right” college, who
go on to the “right” jobs and their commensurate large earnings
and correlative rewards. The resentful onlookers, crowding against the
glass wall or ceiling that bars them from the ostentatious consumption,
the gated communities, the many tempting perquisites of a winner-take-all
economy, are apt in their turn to cut corners, to jettison older notions
of community and integrity in the ceaseless race to get ahead of their
fellows. The academic world, with its happy few faculty superstars preening
in their mandarin status at the top and its army of underpaid adjuncts,
itinerant and sullen, at the bottom, mirrors the larger society with its
yawning chasm between winners and losers, its worship of “top producers”
and top earners, including some chief executives who appear to have insulated
themselves successfully from the ordinarily expected consequences of incompetence
or something resembling thuggery. Over time, the huge disparity in status
between the stars and the peons has worked to erode our notions of an
academic community, of the university as a place in which faculty and
students are animated by the same search for knowledge, bound together
by commonly accepted rules, obligations, and even a shared understanding
about appropriate forms of discourse and standards of evidence and reasoned
argument. It now seems that only power counts. Callahan ascribes the almost
universal problem of cheating primarily to the crushing influence of the
market on all aspects of our culture, a phenomenon that he dates to the
Reagan years, a time in which huge wealth quite suddenly accrued to individuals
and whole classes, who then successfully transmuted financial power into
significant political influence designed to consolidate their positions
at the top. A comparable set of developments appears to have taken place
in the academy, accompanied by a related—and contagious—insistence
on freedom from consequences.
Even the most zealous white-collar crime
prosecutor would concede that wrongdoing has quite different consequences
depending on the assets and social standing of the perpetrator. The wretched
defendant who steals a thousand dollars from a bank, most often to fund
a drug habit, when convicted is sentenced to significant penal time, sometimes
life if he’s a multiple offender, while the white-collar baddie
who steals millions, and can mobilize armies of high-paid defense lawyers,
may in the worst case be sentenced to a year or so at a Club Fed, often
in a convenient and salubrious location. In the academic world, however,
in which superior virtue is routinely assumed, it appears that cheating
is so endemic that it is ignored, or when exposed is treated with laxity,
though here, too, whatever punishments are meted out tend to be harsher
for more junior members of the hierarchy, and certainly more for students
than faculty. Tenure committees are inured to the cooked résumé,
the fudged attribution, the casual appropriation of the intellectual work
of others; among faculty only the most exorbitant lapses tend to attract
investigation or censure, and punishment seldom involves leaving the academy.
Most offenders, taking advantage of the ever-convenient threat of litigation,
manage to negotiate a whitewash of some variety that enables them to move
on unscathed to the next institution. The whole effort to detect and eliminate
faculty wrongdoing is indeed so embarrassing, time-consuming, and often
inconclusive that the American Historical Association decided, in 2003,
to cease accepting complaints of professional misconduct and to stop conducting
the formal adjudication of such cases. The Association justified its step
by announcing that its past efforts “have not had sufficient impact
either on the individuals involved in cases of misconduct, or on the profession
as a whole, or on the wider public.” The Association was in particular
prevented by its own confidentiality rules from dealing with precisely
the “highly visible cases that have proven so controversial in recent
years, thereby rendering the AHA almost irrelevant at precisely the time
when its intervention might have been most helpful.” Of course,
even while dropping its attempts to enforce a code of professional conduct,
the Association officially reassured its members that it would continue
in unspecified ways to pursue initiatives “to become a more aggressive
public advocate for high ethical standards in the practice of history.”
Within the larger culture of cheating, with
its credo that “everybody does it,” it is of course exceedingly
difficult to inculcate among students in high school and university level
courses the historical profession’s traditional detestation of plagiarism,
or of systematically falsifying evidence to justify a tendentious conclusion.
Though historians have always struggled over historical interpretations
of the “same” set of facts—think of the endless controversies
over, say, Germany’s responsibility for hostilities in 1914, or
the origins of the Cold War—there has until fairly recently been
some shared understanding within the profession of what is meant by persuasive
evidence and a concomitant agreement on how that evidence is to be marshaled
and cited to buttress historical judgments. But in recent years that shared
understanding has itself been under assault from several directions.
One is postmodern “critical”
theory, now the reigning orthodoxy among many university faculties. Spreading
from its European origins in fashionable, speculative literary criticism
and philosophical musing, postmodern theory, broadly conceived, posits
the notion that “truth,” or for that matter “virtue,”
has no objective meaning but serves merely to conceal and often justify
unequal power relations. Moreover, language itself is suspect: from the
convenient amnesia of Paul de Man and the playful jeux d’esprit
of Jacques Derrida and his school, an entire fantastically elaborate
superstructure of thought has been erected, which many lesser minds have
elevated into the status of a cult. If any text presents only shifting,
provisional meanings meant to “disguise and facilitate the use of
power by those who have it, or seek it,” as Miriam Schulman has
written, then we are truly in the world of Lewis Carroll’s Red Queen,
where anybody can decree what integrity means, or whether it matters,
and all behaviors are equally valid, morally equivalent. In such a looking-glass
world, why should a historian, a scholar from any discipline, go through
the tedium of weighing evidence, accurately citing sources, even writing
his or her own material? If all texts can be parsed to reveal their “transgressive”
or subversive subtexts, and nothing means much of anything, it is the
plodding non-cheating scholar who may properly feel at a disadvantage.
One might almost summon up real sympathy for the dim graduate student,
struggling with multiple commitments, or the junior faculty person straining
to fulfill the requirement for ever more “publications,” who
decides that intellectual dishonesty may be the easiest way out of a real
or perceived dilemma.
In an academic environment in which language
may be held to mean whatever a particular reader arbitrarily decides,
it is not surprising that the notion of a discrete, unique event in time,
of something that actually happened and can be described and analyzed,
has itself become somewhat shadowy. In 1991, Simon Schama published to
considerable acclaim Dead Certainties (Unwarranted Speculations),
a book dealing with two indisputably real historical events: the
death of General James Wolfe at the Battle of Quebec in 1759, and the
sensational trial in 1850, in Boston, of Professor John Webster of Harvard
University for the murder of his colleague Dr. George Parkman. Schama,
a well-known historian with a substantial body of historical publication
to his credit, decided to present his book as “a work of the imagination
that chronicles historical events.” Although Dead Certainties
is based on primary sources that have been accurately listed—some
indeed, such as trial transcripts, quoted verbatim—Schama has also
chosen to interpolate passages of invention, of “purely imagined
fiction.” These are conjectural narratives “constructed from
a number of contemporary documents” and fictitious dialogues “worked
up” from Schama’s understanding of his sources and from his
free invention of “how such a scene might have taken place.”
This blithe approach to historical writing, which teeters on the brink
of infotainment, lies in that border area more appropriate perhaps to
Norman Mailer or Truman Capote, and seems fraught with problems for serious
historians. Schama’s blending of fact and fiction, his toying with
real events to produce what is in his view a more readable, dare one say
marketable, product, paves the way for others to play at the same speculative
game. But in the hands of those not willing to do the hard archival slog,
those incapable of reading and interpreting, say, eighteenth- and nineteenth-century
letters and journals, what is to prevent the substantial manipulation
of evidence, its refashioning to fit some preexisting thesis, or perhaps
its mere exaggeration for the sake of drama? Simon Schama had by his own
admission no more sinister aim in view than creating an amalgam of fact
and fiction that would engage his readers; but by so doing he has helped
to erode the idea of objective truth and contributed, however unwittingly,
to our vast contemporary confusion—daily demonstrated in films by
Oliver Stone and others, and in television and radio programs—about
where the verifiable ends and show business begins.
In the hands of less sophisticated historians
than, say, Niall Ferguson (author of a 1999 book entitled Virtual
History: Alternatives and Counterfactuals), there are also dangers
in the deliberate playing with “alternatives and counterfactuals,”
the absorbing game of “what if” as applied to real past events.
Asking “what if Communism had not collapsed?” or “what
if there had been no American Revolution?” is certainly entertaining
and has its didactic uses, but what if it’s indulged in by those
who have little background in either the evolution of Communism in Europe
or the complex history of the American colonies and their relationship
to Great Britain? It is a truism that there are many varieties of history;
that we imagine and revisit the past freighted with our individual prejudices,
beliefs, emotions; that it often helps us understand the past if we ask
whether there might have been alternatives to what actually happened.
But if such speculative exercises are to be intellectually defensible,
it is essential that they be engaged in by historians who are truly expert
in their subjects, who have a thorough knowledge of sources and contexts.
Ferguson indeed would have those embarking on the counterfactual exercise
consider “only those alternatives which we can show on the basis
of contemporary evidence that contemporaries actually considered.”
That requirement presupposes that contemporary evidence—what can
be gleaned from a wide range of original sources—is known to those
historians who may then wish to ask “what if?” The underpinnings,
the assumptions of the speculation have to be “more than merely
imaginary or fanciful,” Ferguson concedes, lest the entire effort
fly off into the wild blue yonder of baseless fiction. But that sustained
intellectual exercise requires the historical discipline of both knowledge
and—above all—careful method in the evaluation of evidence.
The acquisition of such discipline is not fostered by invitations at the
apprentice stage to idle speculation and the rush to uninformed and unsubstantiated
opinion.
Academic integrity cannot be coerced, nor
can it be taught except by example and the rigorous inculcation of a code
of honor that is then internalized. The very formulation sounds irredeemably
quaint to us now—but in a time when we hear endless bleating about
“values” it is perhaps salutary to be reminded of one that
is seldom mentioned: elementary decency. Or, for those who prefer to recall
one of the Ten Commandments: “Thou shalt not steal.” There
is, it seems to me, another vital factor affecting the prospects for academic
integrity, and it was, characteristically, Jacques Barzun who brought
it into focus with his usual rigor, in The House of Intellect,
published in 1959. He wrote: “The schools are thus as fully the
product of our politics, business, and public opinion as these are the
products of our schools. It is because the link is so close that the schools
are so hard to change.” There were no Deans for Academic Integrity
in 1959; and though of course there were disreputable scholars and students
then as now, they were not quite as brazen as their present-day descendants,
nor did they have at their disposal a comparable array of advanced technological
tools to facilitate scholarly wrongdoing. Our schools and universities
mirror the larger society, and even the best of them, again in Barzun’s
words, “can be no more than half-innocent.” Misprision, a
wrong action or omission, has about it an appropriate criminal taint;
but the term is derived in the end from the Old French mesprendre,
to mistake, or act wrongly. Error, mistake, the failure to anticipate
consequences, may still in theory be prevented. All it will take is an
appropriate education.