
Web of lies? Historical knowledge on the Internet

by Daniel J. Cohen and Roy Rosenzweig

December 2005


This essay originally appeared in First Monday, December 2005.

Abstract: Scholars in history (as well as other fields in the humanities) have generally taken a dim view of the state of knowledge on the Web, pointing to the many inaccuracies on Web pages written by amateurs. A new software agent called H-Bot scans the Web for historical facts, and shows how the Web may indeed include many such inaccuracies–while at the same time being extremely accurate when assessed as a whole through statistical means that are alien to the discipline of history. These mathematical methods and other algorithms drawn from the computational sciences also suggest new techniques for historical research and new approaches to teaching history in an age in which an increasingly significant portion of the past has been digitized.

 


Introduction

In the spring of 2004, when the New York Times decided to offer an assessment of the social and cultural significance of Google (then heading toward its highly successful IPO), it provided the usual sampling of enthusiasts and skeptics. The enthusiasts gushed about the remarkable and serendipitous discoveries made possible by Google’s efficient search of billions of Web pages. Robert McLaughlin described how he tracked down five left-handed guitars stolen from his apartment complex. Orey Steinmann talked about locating the father from whom he had been taken (by his mother). And a New York City woman unearthed an outstanding arrest warrant for a man who was courting her.

Perhaps inevitably the dour skeptics hailed from the academic and scholarly world–and particularly from one of its most traditional disciplines, history. Bard College President Leon Botstein (PhD, Harvard, 1985) told the Times that a Google search of the Web “overwhelms you with too much information, much of which is hopelessly unreliable or beside the point. It’s like looking for a lost ring in a vacuum bag. What you end up with mostly are bagel crumbs and dirt.” He cautioned that finding “it on Google doesn’t make it right.” Fellow historian and Librarian of Congress James Billington (PhD, Oxford, 1953) nodded in sober agreement: “far too often, it is a gateway to illiterate chatter, propaganda and blasts of unintelligible material” (Hochman, 2004).

Botstein and Billington were just the latest in a long line of historians who have viewed the Internet with substantial skepticism. In November 1996, for example, the American historian Gertrude Himmelfarb offered what she called a “neo-Luddite” critique of the then relatively young Web. She told readers of the Chronicle of Higher Education that she was “disturbed by some aspects of . . . the new technology’s impact on learning and scholarship.” “Like postmodernism,” she complained, “the Internet does not distinguish between the true and the false, the important and the trivial, the enduring and the ephemeral.” Internet search engines “will produce a comic strip or advertising slogan as readily as a quotation from the Bible or Shakespeare. Every source appearing on the screen has the same weight and credibility as every other; no authority is ‘privileged’ over any other” (Himmelfarb, 1996).

Five years later, in a Journal of American History round table on teaching American history survey courses, several participants expressed similar reservations. “Luddite that I am, I do not use Web sites or other on-line sources, in part because I’m not up on them, but also because I’m old-fashioned enough to believe that there is no substitute for a thick book and an overstuffed chair,” Le Moyne College professor Douglas Egerton confessed. He harbored serious concerns about the effect of the new medium on his students’ historical literacy: “Many of my sophomores cannot distinguish between a legitimate Web site that has legitimate primary documents or reprinted (refereed) articles and pop history sites or chat rooms where the wildest conspiracies are transformed into reality.” Elizabeth Perry, a professor at St. Louis University and non-Luddite, said that she thought that “the Internet can be a wonderful resource” for historical materials. But like Egerton she had major doubts about using it in the classroom: “I find that students do not use [the Web] wisely. They accept a great deal of what they see uncritically . . . [and] when they can’t find something on the Web, they often decide that it doesn’t exist.” Sensing that the Web is filled with inaccurate “pop” history that pulls students away from the rigorous historical truth found in “thick books,” only two of the 11 participants in the Journal of American History round table admitted to using Web resources regularly in their history surveys, and one of those professors stuck with the textbook publisher’s Web site rather than venturing out to the wilds of the broader Web (Kornblith and Lasser, 2001). This sample of professional historians is not atypical; in a recent study, a scant 6 percent of instructors of American history survey courses put links to the Web on their online syllabi (beyond perfunctory links to official textbook Web sites) (Cohen, 2005, p. 1408).

Is this skepticism merited? What is the quality and accuracy of historical information on the Web? With Google now indexing more than eight billion pages, a full qualitative assessment of historical information and writing on the Web is well beyond the ability of any person or even team of people. It is, in fact, akin to proposing to assess all the historical works in Billington’s own Library of Congress. Faced with that fool’s errand, we take a very different approach to assessing the quality of historical information on the Web, one that relies on two of its most distinctive qualities–its massive scale and the way that its contents can be rapidly scanned and sorted. These are the qualities that are central to Google’s extraordinary success as a swift locator of people and information. And, in fact, we employ Google as our indispensable assistant for assessing the veracity of the Web’s historical information. We also argue, somewhat counterintuitively, that Botstein may be right about the Web containing a lot of dust and bagel crumbs, while at the same time being wrong in his overall claim about the Web’s unreliability.

Equally important, we seek to show that our approach suggests in some relatively primitive ways the possibilities for the automatic discovery of historical knowledge, possibilities that historians have tended to discount after their brief flirtation with quantitative history in the 1970s. We conclude with some speculations on the larger implications for historical research and teaching of these claims about the historical reliability of the Web and these methods for demonstrating its reliability. The rise of the Web and the emergence of automated methods for mining historical knowledge digitally are more important for how they may change teaching and research tomorrow than for the gems they may allow us to find among the bagel crumbs today.

 

How Computer Scientists Differ from Humanists in their View of the Web

 

The enormous scale and linked nature of the Web–an unprecedented development–makes it possible for the Web to be “right” in the aggregate while sometimes very wrong on specific pages. This is actually a pragmatic understanding of the Web that underlies much recent work by computer scientists (including those at Google) who are trying to forge a trustworthy resource for information out of the immense chaos of billions of heterogeneous electronic documents. “The rise of the world-wide-web has enticed millions of users,” observe Paul Vitanyi and Rudi Cilibrasi of the Centrum voor Wiskunde en Informatica in the Netherlands, “to type in trillions of characters to create billions of web pages of on average low quality contents.” Yet, they continue, “the sheer mass of the information available about almost every conceivable topic makes it likely that extremes will cancel and the majority or average is meaningful in a low-quality approximate sense” (Vitanyi and Cilibrasi, 2005). In other words, while the Web includes many poorly written passages, often uploaded by unreliable or fringe characters, taken as a whole the medium actually does quite a good job encoding accurate, meaningful data. Critics like Himmelfarb and Billington point to specific trees (Web pages) that seem to be ailing or growing in bizarre directions; here we would like to join computer scientists like Vitanyi and Cilibrasi in emphasizing the overall health of the vast forest (the World Wide Web in general). Moreover, we agree with a second principle of information theory that underlies this work: As the Web grows, it will become (again, taken as a whole) an increasingly accurate transcription of human knowledge.

Perhaps we should not be surprised at this divide between the humanist skeptics and the sanguine information scientists. The former are used to carefully analyzing individual pieces of evidence (like the accuracy of a single folio in an archive) while the latter are used to divining the ties, relationships, and overlaps among huge sets of documents. Computer scientists specialize in areas such as “data mining” (finding statistical trends), “information retrieval” (extracting specific bits of text or data), and “reputational systems” (determining reliable documents or actors), all of which presuppose large corpora on which to run their algorithms. Despite a few significant, though short-lived, collective flirtations with databases and quantitative social science methods, most contemporary scholars in the humanities generally believe that meaning is best derived by an individual reader (or viewer in the case of visual evidence), and expressed in prose rather than the numbers algorithms produce. Computer scientists use digital technologies to find meaningful patterns rapidly and often without human intervention; humanists believe that their disciplines require a human mind to discern and describe such meaning.

Indeed, Vitanyi and Cilibrasi largely make their case about the “meaningful” nature of the Web using some abstruse mathematical analyses. But they also present a few examples that laypeople can understand, and might find surprising. For instance, by feeding just the titles of fifteen paintings into Google, Vitanyi and Cilibrasi’s computer program was able to sort these works very accurately into groups that corresponded with their different painters–in this case, one of three seventeenth-century Dutch artists: Rembrandt van Rijn, Jan Steen, or Ferdinand Bol. The program, and the search engine it used for its data, Google, obviously did not have eyes for the fine distinctions between the brush styles of these masters; instead, through some swift mathematical calculations about how frequently these titles showed up on the same pages together (common “hits” on Google) the program calculated that certain titles appeared “closer” than others in the universe of the Web. Thus, Vitanyi and Cilibrasi theorized (based on some statistical principles of information that predate the Internet) that the same artist painted these “close” titles. Even if some Web pages had titles from more than one painter (a very common occurrence), or, more troublingly (and perhaps also common on the Web), some Web pages erroneously claimed that Rembrandt painted “Venus and Adonis” when in fact it was Bol, the overall average in an enormous corpus of Web pages on seventeenth-century Dutch painting is more than good enough to provide the correct answer.
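The mechanics of this grouping can be suggested with a short sketch. The code below is purely illustrative rather than Vitanyi and Cilibrasi’s actual program: the hit counts are invented stand-ins for live search-engine queries, only two painters and four titles are used, and the distance function is the “normalized Google distance” described in their work.

```python
import math

# Hypothetical hit counts standing in for live search-engine queries.
# hits[t] = pages containing title t; joint[frozenset({a, b})] = pages containing both.
N = 8_000_000_000  # rough size of the index, in pages

hits = {
    "The Night Watch": 40_000, "The Jewish Bride": 25_000,            # Rembrandt
    "The Merry Family": 18_000, "The Feast of St Nicholas": 12_000,   # Jan Steen
}
joint = {
    frozenset(pair): count for pair, count in [
        (("The Night Watch", "The Jewish Bride"), 6_000),
        (("The Merry Family", "The Feast of St Nicholas"), 3_000),
        (("The Night Watch", "The Merry Family"), 150),
        (("The Night Watch", "The Feast of St Nicholas"), 90),
        (("The Jewish Bride", "The Merry Family"), 120),
        (("The Jewish Bride", "The Feast of St Nicholas"), 70),
    ]
}

def ngd(a, b):
    """Normalized Google distance: small when two terms appear together
    far more often than their separate frequencies would predict."""
    fa, fb = math.log(hits[a]), math.log(hits[b])
    fab = math.log(joint[frozenset((a, b))])
    return (max(fa, fb) - fab) / (math.log(N) - min(fa, fb))

# Each title's nearest neighbor should be another work by the same painter.
for title in hits:
    nearest = min((o for o in hits if o != title), key=lambda o: ngd(title, o))
    print(f"{title!r} is closest to {nearest!r}")
```

Even with made-up numbers, the point of the exercise is visible: titles that share a painter co-occur on far more pages than their individual frequencies would predict, and so end up “closest” to one another.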

Does the same hold true for history? Is the average of all historical Web pages “meaningful” and accurate? To answer this question, one of us (Dan) developed an automated historical fact finder called “H-Bot” beginning in spring 2004, which is available in an early release on the Center for History and New Media (CHNM) Web site at http://chnm.gmu.edu/tools/h-bot. (Computer scientists call software agents that scan the Internet “bots”; the “H,” of course, is for “history.”) H-Bot is a more specialized version of the information retrieval software that database and Internet companies as well as the United States government have been feverishly working on to automatically answer questions that would have required enormous manual sifting of documents in the pre-digital era.1 (Yes, many of these applications have import for intelligence units like the CIA and NSA.) Although by default the publicly available version looks at “trusted sources” such as encyclopedias first (a method now used by Google, MSN, and other search engines in their recent attempts to reliably answer questions directly rather than referring users to relevant Web pages), we also have a version that relies purely on a statistical analysis of the Google index to answer historical queries. In other words, we have been able to use H-Bot to directly assess whether we can extract accurate historical information from the dust and bagel crumbs of the Web. And understanding how H-Bot works, as well as the strengths and weaknesses of its methodology, provides a number of important insights into the nature of online knowledge.

 

H-Bot, a History Software Agent

 

Suppose you were curious to know when the French impressionist Claude Monet moved to Giverny, the small village about forty-five miles west of Paris where he would eventually paint many of his most important works, including his famous images of water lilies. Asking H-Bot “When did Monet move to Giverny?” (in its “pure” mode where it does not simply try to find a reliable encyclopedia entry on the matter) would prompt the software to query Google for Web pages from its vast index that include the words “Monet,” “moved,” and “Giverny”–approximately 6,200 pages in April 2005. H-Bot would then scan the highest ranking of these pages–that is, the same ones that an uninformed student is likely to look at when completing an assignment on Monet–as a single mass of raw text about Monet. Breaking these Web pages apart into individual words, it would look in particular for words that look like years (i.e., positive three- and four-digit numbers), and indeed it would find many instances of “1840” and “1926” (Monet’s birth and death years, which appear on most biographical pages about the artist). But most of all it would find a statistically indicative spike around “1883.” As it turns out, 1883 is precisely the year that Monet moved to Giverny.
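A rough approximation of this date-spotting step looks something like the following sketch. It is not H-Bot’s actual source code; the sample snippets stand in for the text of the top-ranked pages, and the restriction to years between 1000 and 2100 is our own simplifying assumption.

```python
import re
from collections import Counter

def likely_year(pages, lo=1000, hi=2100):
    """Pool the text of the retrieved pages, pull out every three- or
    four-digit token that could plausibly be a year, and return the
    most common one along with the full tally."""
    tally = Counter()
    for text in pages:
        for token in re.findall(r"\b\d{3,4}\b", text):
            year = int(token)
            if lo <= year <= hi:
                tally[year] += 1
    return tally.most_common(1)[0][0], tally

# Stand-ins for the text of the top-ranked pages about Monet and Giverny.
sample_pages = [
    "Claude Monet (1840-1926) settled at Giverny in 1883.",
    "In 1883 Monet moved his family to the village of Giverny.",
    "Monet moved to Giverny in 1883, a decade after leaving Argenteuil in 1874.",
]

answer, tally = likely_year(sample_pages)
print(answer, dict(tally))  # 1883 wins; 1840, 1926, and 1874 are weaker signals
```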

Out of the pages it looked at, H-Bot found some winners and some losers. In fact, a few Web pages at the very top of the results on Google–ostensibly the most “reputable” pages according to the search engine’s ranking scheme–get the Giverny information wrong, just as the Web’s detractors fear. In this case, however, some of the incorrect historical information does not come from shady, anonymous Internet authors posing as reliable art historians. A page on the Web site of the highly respectable Art Institute of Chicago–the twelfth page H-Bot scanned to answer “When did Monet move to Giverny?”–erroneously claims that “Monet moved to Giverny almost twenty years” after he left Argenteuil in 1874 (Art Institute of Chicago, 2005). To be fair to the Art Institute, another of their Web pages that ranked higher than this page gets the date right. But so do the vast majority (well over 95 percent) of the pages H-Bot looked at for this query. In the top thirty Google results, the official Web site of the village of Giverny, the French Academie des Beaux-Arts, and a page from the University of North Carolina at Pembroke correctly specify 1883; so does the democratic (and some would say preposterously anarchical) Web reference site Wikitravel, the Australian travel portal The Great Outdoors, and CenNet’s “New Horizons for the Over 50s” lifestyle and dating site (Giverny, 2005; Academie Des Beaux-Arts, 2005; University of North Carolina at Pembroke, 2005; Wikitravel, 2005; The Great Outdoors, 2005; CenNet, 2005). Through a combination of this motley collection of sites (some might say comically so), H-Bot accurately answers the user’s historical question. The Web’s correct historical information overwhelms its infirmities. Moreover, the combinatorial method of the software enables it to neutralize the problems that arise in a regular Google search that focuses on the first few entries in the hope that a randomly selected highly-ranked link will provide the correct answer (otherwise known as the lazy student method).

Right now H-Bot can only answer questions for which the responses are dates or simple definitions of the sort you would find in the glossary of a history textbook. For example, H-Bot is fairly good at responding to queries such as “What was the gold standard?”, “Who was Lao-Tse?”, “When did Charles Lindbergh fly to Paris?”, and “When was Nelson Mandela born?” The software can also answer, with a lower degree of success, more difficult “who” questions such as “Who discovered nitrogen?” It cannot currently answer questions that begin with “how” or “where,” or (unsurprisingly) the most interpretive of all historical queries, “why.” In the future, however, H-Bot should be able to answer more difficult types of questions as well as address the more complicated problem of disambiguation–that is, telling apart a question about Charles V the Holy Roman Emperor (1500-1558) from one about Charles V the French king (1338-1380). To be sure, H-Bot is a work in progress, a young student eager to learn. But given that its main programming has been done without an extensive commitment of time or resources by a history professor and a (very talented) high-school student, Simon Kornblith, rather than a team of engineers at Google or MIT, and given that a greater investment would undoubtedly increase H-Bot’s accuracy, one suspects that the software’s underlying principles are indicative of the promise of the Web as a storehouse of information.

Looking not merely at the answers H-Bot provides but at its “deliberations” (if we can anthropomorphically call them that) provides further insights into the nature of historical knowledge of the Web. Given its enormity compared to even a large printed reference work like the Encyclopedia Britannica, the writing about the past on the Web essentially functions as a giant thesaurus, with a wide variety of slightly different ways of saying the same thing about a historical event. For instance, H-Bot currently doesn’t get “When did War Admiral lose to Seabiscuit?” right, but it does get “When did Seabiscuit defeat War Admiral?” correct. It turns out that Web authors have thus far chosen the latter construction–”Seabiscuit defeated War Admiral”–but not the former construction to discuss the famous horseracing event of 1938. Of course, given the exponential growth of the Web and the recent feverish interest in Seabiscuit, H-Bot will likely get “When did War Admiral lose to Seabiscuit?” right in the near future, perhaps as soon as a high school student doing a report on the Great Depression writes the phrase “War Admiral lost to Seabiscuit in 1938” and posts it to his class’s modest Web site.

 

Testing H-Bot and the Web

 

H-Bot is still an unfunded research project in beta release rather than a fully developed tool. Yet even in its infancy, it is remarkably good at answering historical questions, and its accuracy is directly tied to the accuracy of historical information on the Web. Furthermore, as Vitanyi and Cilibrasi speculate based on the mathematics of information theory, as the Web grows it will become increasingly accurate (as a whole) about more and more topics. But how good is H-Bot right now in comparison to an edited historical reference work?

We first conducted a basic test of H-Bot against the information in The Reader’s Companion to American History, a well-respected encyclopedic guide edited by the prominent historians Eric Foner and John A. Garraty (Foner and Garraty, 1991). Taking the first and second biographies for each letter of the alphabet (except “X,” for which there were no names), we asked H-Bot for the birth year of the first figure and the death year of the second figure–perhaps some of the simplest and most straightforward questions one can ask of the software. For 48 of these 50 questions H-Bot gave the same answer as The Reader’s Companion, for an extremely competent score of 96 percent. And a closer look at the two divergent results indicates that H-Bot is actually even more accurate than that.

H-Bot’s first problem came in answering the question “When did David Walker die?” Rather than offer the answer provided by The Reader’s Companion (1830), it replies politely “I’m sorry. I cannot provide any information on that. Please check your spelling or rephrase your query and try again.” But no rewording helps since H-Bot has no way of distinguishing the abolitionist David Walker (the one listed in The Reader’s Companion) from various other “David Walkers” who show up in the top-ranking hits in a Google search–the Web designer, the astronaut, the computer scientist at Cardiff University, the Web development librarian, and the Princeton computer scientist. This disambiguation problem is one of the thorniest issues of information retrieval and data mining in computer science. Perhaps because he is familiar with the problem, the Cardiff David Walker even offers his own disambiguation Web page in which he explains “There are many people called David Walker in the world” and notes that he is not the Princeton David Walker nor yet a third computer scientist named “David Walker” (this one at Oxford) (Walker, 2005). Despite the importance of the disambiguation problem, it is not a problem of the quality of historical information on the Web but rather of the sophistication of the tools–like H-Bot–for mining that information.

The other answer H-Bot gave that differed from The Reader’s Companion is more revealing in assessing the quality of historical information online. The first name in “H” was Alexander Hamilton, so we asked the software “When was Alexander Hamilton born?” It answered 1757, two years later than the date given in The Reader’s Companion. But behind the scenes H-Bot wrung its hands over that other year, 1755. Statistically put, the software saw a number of possible years on Web pages about Hamilton, but on pages that discuss his birth there were two particularly tall spikes around the numbers 1755 and 1757, with the latter being slightly taller among the highest-ranked pages on the Web (and thus H-Bot’s given answer).

Although one might think that the birth year of one of America’s Founders would be a simple and unchanging fact, recent historical research has challenged the commonly mentioned year of 1755 used in the 1991 Reader’s Companion by the well-known historian Edward Countryman, the author of the Hamilton profile. Indeed, writing just under a decade later in American National Biography the equally well-known historian Forrest McDonald explains “the year of birth is often given as 1755, but the evidence more strongly supports 1757.” A recent exhibit at the New-York Historical Society (N-YHS) decided on the later year as well, as have many historians–and also thousands of Web articles on Hamilton (McDonald, 2000; New-York Historical Society, 2005). Indeed, newer Web pages being posted online more often use 1757 than 1755 (including the Web site for N-YHS’s exhibit), providing H-Bot with an up-to-the-minute “feeling” of the historical consensus on the matter (see Figure 1). This sense based on statistics also means that H-Bot is more up-to-date than The Reader’s Companion, and with the Web’s constant updating the software will always be so compared to any such printed work. (The same constant updating as well as continual changes in Google’s index means that some of the queries we discuss here may produce different results at different times.)

 

 


 

Figure 1: Years Mentioned on the Top Thirty Pages in Google’s Index on Alexander Hamilton’s Birth

 

Another, perhaps more accurate way of understanding H-Bot is to say that the software is concerned not with what most people would call “facts” but rather with consensus. The Web functions for this software as a vast chamber of discussion about the past. This allows H-Bot to be in one way less authoritative but in another way–as in the case of Hamilton’s birth year–more flexible and current. Such “floating” facts are more common in history (and indeed the humanities in general) than nonprofessionals might suspect, especially as one goes further back into the dark recesses of the past. Professional historians have revised the commonly accepted birth year of Genghis Khan several times in the past century; even more recent subjects, such as Louis Armstrong and Billie Holiday, have “facts” such as their birth years under dispute.

The same algorithms, however, that allow H-Bot to swiftly and accurately answer questions about birth and death years reveal a weakness in the software. Take, for instance, what happens if you ask H-Bot when aliens landed in Roswell or when Stalin was poisoned, two common historical rumors. H-Bot “correctly” answers these questions as 1947 and 1953, respectively. It statistically analyzes the many Web pages that discuss these topics (a remarkable quarter-million pages in the case of the Roswell aliens) to get the agreed-upon years it returns as answers. In these cases, the “wisdom of crowds” turns into the “madness of crowds” (Surowiecki, 2004).

Conceivably this infirmity in H-Bot’s algorithms–we might call it historical gullibility or naiveté–could be remedied with more programming resources. For instance, it may be that when a topic is discussed on many Web pages that end in .com but not on many pages that end in .edu (compared to the relative frequency of those top-level domains on the Web in general), the program could raise a flag of suspicion. Or H-Bot could quickly analyze the grade level of the writing on each Web page it scanned and factor that into its “reliability” mathematics. Both of these methods have the potential to re-inject a measure of anti-democratic elitism that the Web’s critics see as missing in the new medium. Further research into this question of rumor versus fact would not only benefit H-Bot but also our understanding of the popular expression of history on the Web.
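A sketch of the first of these suggestions might look like the following; the baseline share of .edu pages, the threshold, and the example addresses are all invented for illustration rather than drawn from any real measurement.

```python
from urllib.parse import urlparse

# Assumed share of .edu pages in the index as a whole; a placeholder figure.
EDU_BASELINE = 0.03

def rumor_flag(result_urls, factor=0.5):
    """Raise a flag of suspicion when .edu pages are markedly
    under-represented among the top results for a topic."""
    hosts = [urlparse(url).netloc for url in result_urls]
    edu_share = sum(host.endswith(".edu") for host in hosts) / max(len(hosts), 1)
    return edu_share < EDU_BASELINE * factor

# Invented addresses standing in for the top results on a rumor-heavy topic.
results = [
    "http://roswell-fans.example.com/landing.html",
    "http://saucer-watch.example.com/1947.html",
    "http://conspiracy-digest.example.com/stalin.html",
]
print(rumor_flag(results))  # True: no .edu pages among these results
```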

More broadly, however, the problem reflects the reality that even the “facts” in history can be a matter of contention. After all, lots of people genuinely and passionately believe that aliens did land at Roswell in 1947. And, a very recent scholarly book argues that Stalin was poisoned in 1953 (Brent and Naumov, 2003). If that becomes the accepted view, the Web is well positioned to pick up the shifting consensus more quickly than established sources. This also reminds us of the ways that the Web challenges our notions of historical “consensus”–broadening debates from the credentialed precincts of the American Historical Association’s Annual Meeting (and its publications) to the rough and tumble popular debates that occur online.

Following its duel with The Reader’s Companion to American History, we then fed H-Bot a larger and more varied list of historical questions, though still with the goal of seeing if it could correctly identify the year in which the named event occurred. For this test we used an edition of The Timetables of History, a popular book that features a foreword by Daniel J. Boorstin, the former Librarian of Congress (Grun, 1982). The first event listed in every third year from 1670 to 1970 was rephrased as a question, and then fed into H-Bot. The questions began with the now somewhat obscure Treaty of Dover (1670) and the Holy Roman Emperor Leopold I’s declaration of war on France (1673) and ended with the dates for the Six-Day War (1967) and the shooting of students by the National Guard at Kent State University (1970). Although the list skewed toward American and European history, many of the 100 questions covered other parts of the globe.

H-Bot performed respectably on this test, but did less well than on the test of major American figures, correctly providing the year listed in the reference book for 74 of the 100 events. The software could not come up with an answer for three of the 100 questions, and was wildly wrong on another five. But it came within one year of the Timetables entry for another eight answers, within two years on one entry, and was within fifty years for another nine (mostly, in those cases, choosing a birth or death year rather than the year of an event that a historical actor participated in). A more generous assessment might therefore give H-Bot 83 percent, a respectable B or B-, on this test, if one gave it credit for the nine near misses while still counting the nine distant misses as wrong.

H-Bot’s difficulties with these nine distant misses and the five completely wrong answers were again due, in this case almost entirely, to disambiguation issues–for example, the ability to understand when a question about Frederick III refers to the Holy Roman Emperor (1415-93), the Elector of Saxony (1463-1525), or the King of Denmark and Norway (1609-1670). Many of the questions we gave to H-Bot unsurprisingly dealt with royalty whose names were far from distinctive, especially in European history. Through an added feedback mechanism–responding to a query with “Do you mean Frederick III the Holy Roman Emperor, the Elector of Saxony, or the King of Denmark and Norway?”–H-Bot could tackle the problem of disambiguation, which seems to have flummoxed the software in those eight cases ranging from Leopold I’s declaration of war on France in 1673 to Tewfik Pasha’s death in 1892. (The expansive, populist online encyclopedia Wikipedia helpfully has “disambiguation pages,” often beginning with a variant of “There are more than one of these. . .,” to direct confused researchers to the entry they are looking for; the reference site Answers.com uses these pages to send researchers in the correct direction rather than rashly providing an incorrect answer as H-Bot currently does.) A second, lesser problem stems from the scarcity of Web pages on some historical topics, and as we noted earlier, this problem may solve itself. As the Web continues its exponential growth and the topics written on it by professional and amateur historians proliferate, the software will find more relevant pages on, say, late seventeenth-century politics, than it is currently able to do.

Even with these caveats, one might sensibly turn off the computer and head for the reference bookshelf to find hard facts if the best H-Bot can do is provide three precisely correct answers for every four questions. Yet this test of H-Bot also revealed significant infirmities with reference works like The Timetables of History–infirmities that are less obvious to the reader than to the online researcher using H-Bot, which is able to instantly “show its work” by identifying the relevant Web pages it looked at to answer a query. Most people consider reference books perfect, but we should remember what our fourth-grade teachers told us: Don’t believe everything you read. Timetables suffers from disambiguation problems as well, albeit in a less noticeable way than H-Bot. For instance, in its 1730 entry, the book says that “Ashraf” was killed. But which Ashraf? Wikipedia lists no fewer than twenty-five Ashrafs, mostly Egyptian sultans but also more recent figures such as Yasser Arafat’s doctor. This 1730 entry, like many others in Timetables, doesn’t really help the uninformed user.

More seriously, although currently better than H-Bot, Timetables got at least four “facts” wrong in the test, albeit in minor ways. The reference work claims that the state of Delaware separated from Pennsylvania in 1703; it actually gained its own independent legislature in 1704. Great Britain acquired Hong Kong as a colony in 1842 (the Treaty of Nanking following the First Opium War), not 1841. Kara Mustafa, the Ottoman military leader, died in 1683, not 1691 as the book claims. Timetables also claims that Frederick III of Prussia was crowned as king of that German state in 1688, but he actually became Elector of Brandenburg in that year, only later becoming king of Prussia in 1701–the year correctly given by H-Bot. These are lesser errors, perhaps, than the gross errors H-Bot sometimes makes. Still, they remind us that getting all the facts right is more difficult than it looks. “People don’t realize how hard it is to nail the simplest things,” Lars Mahinske, a senior researcher for the authoritative Encyclopedia Britannica, confessed to a reporter in 1990 (McCarthy, 1990). But, unlike Britannica (or at least its traditional print form), H-Bot, as we have noted, can be self-correcting whereas a print book is forever fixed, errors and all.

 

Factualist History: H-Bot takes the National Assessment History Exam and Acquires Some New Talents

 

Some might object that H-Bot is not a true “historian” and that it can only answer narrowly factual questions that serious historians regard as trivial. But before we dismiss H-Bot as an idiot savant, we should observe that these are precisely the kinds of questions that policy makers and educators use as benchmarks for assessing the state of student (and public) “ignorance” about the past–the kinds of questions that fill “Standards of Learning” (SOL) and National Assessment of Educational Progress (NAEP) tests.

We decided to have a special multiple-choice test-taking version of H-Bot (currently unavailable to the public) take the NAEP United States History examination to see if it could pass. (The NAEP exam actually consists of both multiple-choice questions and short answer questions; since H-Bot has not yet become as responsive and loquacious as 2001: A Space Odyssey‘s HAL 9000 we only had it take the multiple-choice questions.) This test-taking version of the software works by many of the same principles as the historical question-answering version, but it adds another key technique (“normalized information distance”) from the computer scientists’ toolkit that, as will be explained below, opens up some further possibilities for the automatic extraction of historical information. It is the use of “normalized information distance,” in fact, that enables Vitanyi and Cilibrasi to automatically associate seventeenth-century Dutch paintings with their painters without actually viewing the works.

The test-taking H-Bot gains a significant advantage that human beings also have when they take a multiple-choice exam–and which makes these sorts of tests less than ideal assessments of real historical knowledge. In short, unlike the open-ended question “When did Monet move to Giverny?”, multiple-choice questions obviously specify just three, four, or five possible responses. By restricting the answers to these possibilities, multiple-choice questions provide a circumscribed realm of information where subtle clues in both the question and the few answers allow shrewd test-takers to make helpful associations and rule out certain answers (test preparation companies like Stanley Kaplan and Princeton Review have known this for decades and use it to full advantage). This “gaming” of a question can occur even when the test-taker doesn’t know the correct answer and is not entirely familiar with the historical subject matter involved. But even with this inherent flaw, the questions on the NAEP can be less than straightforward and reflect a much harder assignment for H-Bot, and one that challenges even the enormous database of historical writing on the Web. To be sure, some of the questions are of a simplistic type that the regular H-Bot excels at: a one word answer, for instance, to a straightforward historical question. For example, one sample question from the NAEP fourth-grade examination in American history asks how “Most people in the southern colonies made their living,” with the answer obviously being “farming” (rather than “fishing,” “shipbuilding,” or “iron mining”). H-Bot quickly derives this answer from the plethora of Web pages about the antebellum South by noting that this word appears far more often than the other choices on pages that mention “southern” and “colonies.”

More generally, and especially in the case of more complicated questions and multiword answers, the H-Bot exam-taking software tries to figure out how closely related the significant words in the question are to the significant words in each possible answer. H-Bot first breaks down the question and possible responses into their most significant words, where significance is calculated (fairly crudely, but effectively, in a manner similar to Amazon.com’s “statistically improbable phrases” for books) based on how infrequently a word appears on the Web. Given the question “Who helped to start the boycott of the Montgomery bus system by refusing to give up her seat on a segregated bus?” H-Bot quickly determines that the word “bus” is fairly common on the Web (appearing about 73 million times in Google’s index) while the word “montgomery” is less so (about 20 million instances of the many different meanings of the word). Similarly, in possible answer (c), “Rosa Parks,” “parks” is found many places on the Web while “rosa” appears on only a third as many pages. Having two uncommon words such as “montgomery” and “rosa” show up on a number of Web pages together seems even more unusual–and thus of some significance. Indeed, that is how H-Bot figures out that the woman who refused to go to the back of the bus, sparking the Montgomery, Alabama boycott and a new phase of the civil rights movement, was none other than Rosa Parks (rather than one of the three other possible answers, “Phyllis Wheatley,” “Mary McLeod Bethune,” or “Shirley Chisholm”).
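This “significance” filter can be approximated in a few lines. In the sketch below, the figures for “bus” and “montgomery” are the rough counts quoted above, the remaining counts and the cutoff are invented, and a real implementation would of course query a search engine rather than a hard-coded table.

```python
# Rough page counts for each word on the Web: the figures for "bus" and
# "montgomery" follow the discussion above; the rest are invented.
web_hits = {
    "boycott": 4_000_000, "montgomery": 20_000_000, "bus": 73_000_000,
    "rosa": 24_000_000, "parks": 70_000_000,
}

def significant_words(words, cutoff=30_000_000):
    """Keep only the comparatively rare words, a cheap proxy for which
    terms in a question or answer carry the most information."""
    return [w for w in words if web_hits.get(w, 0) < cutoff]

print(significant_words(["boycott", "montgomery", "bus"]))  # drops "bus"
print(significant_words(["rosa", "parks"]))                 # drops "parks"
```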

As computer scientists Vitanyi and Cilibrasi would say, this high coincidence of “rosa” and “montgomery” in the gigantic corpus of the Web means that they have a small “normalized information distance,” an algorithmic measure of closeness of meaning (or perhaps more accurately, a measure of the lack of randomness in these words’ coincidence on the Web).2 In more humanistic terms, assuming that Google’s index of billions of Web pages encodes relatively well the totality of human knowledge, this particular set of significant words in the right answer seems more closely related to those in the question than the significant words in the other, incorrect answers.
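For readers curious about the formula behind the footnote, the computable measure Vitanyi and Cilibrasi work with (the normalized Google distance) can be written as follows, where f(x) is the number of pages containing the term x, f(x, y) the number containing both terms, and N the number of pages in the index:

```latex
\mathrm{NGD}(x, y) =
  \frac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}
       {\log N - \min\{\log f(x), \log f(y)\}}
```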

H-Bot can actually assess the normalized information distance between sets of words, not just individual words, in the question and possible answers, increasing its ability to guess correctly using Vitanyi and Cilibrasi’s “web pages of on average low quality contents.” These algorithms combined with the massive corpus of the Web allow the software to swiftly answer questions on the NAEP that supposedly invoke the higher-order processes of historical thinking, and that should be answerable only if you truly understand the subject matter and are able to reason about the past. For example, a NAEP question asks “What is the main reason the Pilgrims and Puritans came to America?” and provides the following options:

(a) To practice their religion freely
(b) To make more money and live a better life
(c) To build a democratic government
(d) To expand the lands controlled by the king of England

H-Bot cannot understand the principles of religious freedom, personal striving, political systems, or imperialism. But it need not comprehend these concepts to respond correctly. Instead, to answer the seemingly abstract question about the Puritans and Pilgrims and why they came to America, H-Bot found that Web pages on which words like “Puritan” and “Pilgrim” appear contain the words “religion” and (religious) “practice” more often than words like “money,” “democratic,” or “expand” and “lands.” (To be more precise, in this case H-Bot’s algorithms actually compare the normal frequency of these words on the Web with the frequency of these words on relevant pages, therefore discounting the appearance of “money” on many pages with “Puritan” and “Pilgrim” because “money” appears on over 280 million Web pages, or nearly one out of every 28 Web pages.) H-Bot thus correctly surmises that the answer is (a). Again, using the mathematics of normalized information distance the software need not find pages that specifically discuss the seventeenth-century exodus from England or that contain an obvious sentence such as “The Puritans came to America to practice their religion more freely.” Using its algorithms on various sets of words it can divine that certain combinations of rare words are more likely than others. It senses that both religion and freedom had a lot to do with the history of the Pilgrims and Puritans.
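A toy version of this normalization can make the logic concrete. In the sketch below, the count for “money” echoes the 280 million figure cited above, while the other counts, the number of pages assumed to mention both “Pilgrims” and “Puritans,” and the one-keyword-per-answer simplification are all invented for illustration.

```python
N = 8_000_000_000  # rough size of the index, in pages

# "money" echoes the 280 million pages cited above; the other counts, and the
# 2 million pages assumed to mention both "Pilgrims" and "Puritans," are invented.
web_hits = {"religion": 200_000_000, "money": 280_000_000,
            "democratic": 150_000_000, "lands": 90_000_000}
hits_with_question = {"religion": 900_000, "money": 400_000,
                      "democratic": 150_000, "lands": 100_000}
pages_with_question = 2_000_000

def lift(word):
    """How much more often a word appears on question-relevant pages
    than its ordinary frequency on the Web would predict."""
    on_relevant = hits_with_question[word] / pages_with_question
    overall = web_hits[word] / N
    return on_relevant / overall

# One representative keyword per answer choice, for simplicity.
answers = {"a": "religion", "b": "money", "c": "democratic", "d": "lands"}
best = max(answers, key=lambda choice: lift(answers[choice]))
print(best, {c: round(lift(w), 1) for c, w in answers.items()})
# 'a' wins: "religion" is the most over-represented word on the relevant pages
```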

Or perhaps we should again be a little more careful (or some might say more cynical) and say that this assessment of the centrality of religious liberty is the reigning historical interpretation of the Pilgrims and Puritans, rather than a hard and uncontroversial fact supported by thousands of Web pages. While most amateur and professional historians would certainly agree with this assessment, we could find others who would disagree, advancing economic, political, or imperialistic rationales for the legitimacy of answers (b), (c), or (d). Yet these voices are overwhelmed on the Web by those who hew closely to the textbook account of the seventeenth-century British emigration to the American colonies. Undoubtedly Gertrude Himmelfarb and like-minded conservatives would be pleased with this online triumph of consensus over interpreters of the past who dare to use Marxist lenses to envision the founding of the United States.3 But challenging conventional and textbook accounts often forms an important part of understanding the past more fully. For instance, Charles Mann’s recent book on pre-Columbian America shows that virtually all of the information on the age, sophistication, and extent of American Indian culture in United States history survey textbooks is out of date (Mann, 2005). Thus, the factualist H-Bot offers an impoverished view of the past–just like high school textbooks and standardized tests.

Interestingly, the use of normalized information distance to identify strong consensus also allows H-Bot to uncannily answer some questions (which we did not include in the main testing) that seem to require visual interpretation. For example, although it cannot see the famous picture of Neil Armstrong next to an American flag on the surface of the moon, H-Bot had no trouble correctly answering the question below this photograph:

What is the astronaut in this picture exploring?

(a) The Sun
(b) The Arctic
(c) The Moon
(d) Pluto

There are far fewer pages on which the word “astronaut” appears along with either “arctic” or “Pluto” than it does with “moon.” “Astronaut” shows up on roughly the same number of Web pages with “sun” as it does with “moon” (11,340 pages versus 11,876) but since overall the number of Web pages that mention the “moon” is about one-quarter the number of pages that mention the “sun” (about 20 million versus 75 million), H-Bot understands that there is a special relationship between the moon and astronauts. Put another way, when a Web page contains the word “astronaut,” the word “moon” is far more likely to appear on the page than its normal frequency on the Web. H-Bot therefore doesn’t need to “understand” the Armstrong photograph or its history per se, or know that this event occurred in 1969–the documents in the massive corpus of the Web point to the overwhelming statistical proximity of answer (c) to the significant words in the question.
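Using only the counts quoted above, the comparison can be made concrete with a simple ratio, one crude stand-in for the normalized measure the software actually computes:

```latex
\frac{f(\text{astronaut}, \text{moon})}{f(\text{moon})}
  = \frac{11{,}876}{2 \times 10^{7}} \approx 5.9 \times 10^{-4}
\qquad
\frac{f(\text{astronaut}, \text{sun})}{f(\text{sun})}
  = \frac{11{,}340}{7.5 \times 10^{7}} \approx 1.5 \times 10^{-4}
```

A page that mentions the moon is thus roughly four times more likely to mention an astronaut as well than a page that mentions the sun, which is all the software needs in order to prefer answer (c).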

As with its other incarnation, H-Bot shows imperfections in taking the NAEP test. These infirmities differ from the problems related to rumors and falsehoods in the main edition of the software. Since, as the moon question shows, H-Bot is hard-wired to look for close rather than distant relationships, it has trouble with the occasional NAEP question that is phrased negatively, i.e., “which one of these is NOT true…” Also, when asked which event comes first or last in a chronological series the algorithm blindly finds the most commonly discussed event. Again, such problems could be largely remedied with further programming, and they reflect the imperfections of H-Bot as a tool rather than the imperfections of online historical information.

Moreover, even with these weaknesses, when H-Bot was done processing the 33 multiple-choice questions from the fourth-grade NAEP American history exam, it had gotten 27 answers right, a respectable 82 percent. And as with the other version of H-Bot, when a larger (and more updated) index of the Web is available, it should do even better on these exams. The average fourth grader does far worse. In 1994, the most recent year for this test data, 69 percent knew of the American South’s agrarian origins, 62 percent identified Rosa Parks correctly, and a mere 41 percent understood the motivation behind the emigration of the Puritans and Pilgrims.

 

Conclusion: Automatic Discovery and the Futures of the Past

 

Scientists have grown increasingly pessimistic about their ability to construct the kind of “artificial intelligence” that was once widely seen as lurking just around the corner (Mullins, 2005). In every decade since the Second World War, technologists have predicted the imminent arrival of a silicon brain equivalent to or better than a human’s, as the remarkable pace of software innovation and hardware performance accelerates. Most recently, the inventor and futurist Ray Kurzweil asserted in The Age of Spiritual Machines that by 2029 computer intelligence will not only have achieved parity with the human mind, it will have surpassed it (Kurzweil, 2000). Since the first attempts at machine-based reasoning by the Victorians, however, the mind repeatedly has proven itself far more elusive than technologists have expected. Kurzweil is surely correct that computers are becoming more and more powerful each year; yet we remain frustratingly distant from a complete understanding of what we’re trying to copy from the organic matter of the brain to the increasingly speedy pathways of silicon circuits.

But while the dawn of fully intelligent machines seems to constantly recede into the distance, the short-term future for more circumscribed tools like H-Bot looks very bright. One reason is that the Web is getting very big, very fast. In September 2002, Google claimed to have indexed two billion Web pages. Just fifteen months later, it was boasting about four billion. And it took only eleven more months to double again to eight billion.4 These numbers matter a great deal because, as we have noted, the power of automatic discovery rests as much on quantity as quality. (Indeed, H-Bot works off of Google’s public application programming interface, or API, which permits access to only about 1.5 billion pages in its index; the company seems wary of providing access to the full eight billion.)

But, as it happens, the quality of what’s out on the public Web is also going to improve very shortly because Google is busy digitizing millions of those “thick books” that we generally read in “overstuffed chairs.” If automatic discovery can do so well working from the Web pages of high schoolers, think what it can do when it can prowl through the entire University of Michigan library. Finally, the algorithms for extracting meaning and patterns out of those billions of pages are getting better and better. Here, too, Google (and its competitors at Yahoo and Microsoft) are responsible. After all, a humble History PhD, trained in the history of science rather than the science of algorithms, created H-Bot. But Google and its competitors are hiring squadrons of the most talented engineers and PhDs, from a variety of fields such as computer science, physics, and of course mathematics, to work on automatic discovery methods. In the second quarter of 2005 alone, Google added 230 new programmers to improve its vast search empire; one computer science professor noted that the top one-third of his students in a class on search technology went straight to Google’s awaiting offers. Yahoo is not far behind in this arms race, and Microsoft is devoting enormous resources to catching up (Elgin, 2005). We are at a unique moment in human history when literally millions of dollars are being thrown into the quest for efficient mathematical formulas and techniques to mine the billions of documents on the Internet.

So H-Bot may only be able to score 82 percent on the NAEP U.S. History examination today; but its successors are likely to reach 95 or even 99 percent. Should we care? Most historians already know about the Pilgrims or at least know how to find out quickly. Why should a software agent that can answer such questions impress them? Isn’t this simply a clever parlor trick? We think that the answer is no, because these developments have significant implications for us as teachers and researchers.

H-Bot may never become “intelligent,” but it has already proven itself very smart at answering multiple-choice history questions. Most historians would likely view H-Bot’s factual abilities with disdain or condescension. Yet, paradoxically, we (and our compatriots in the world of education) have often decided that precisely such abilities are a good measure of historical “knowledge.” As history educator and psychologist Sam Wineburg has shown, we have spent the past 90 years bemoaning the historical “ignorance” of students based on their “[in]ability to answer factual questions about historical personalities and events.” And the conclusions that have been drawn from that inability have been nothing short of apocalyptic. In 1943, the New York Times took time out from its wartime coverage to rebuke “appallingly ignorant” college freshmen for their scores on the national history exam it had administered the year before. Almost a half century later, Diane Ravitch and Chester Finn declared that low scores on the NAEP placed students “at risk of being gravely handicapped by . . . ignorance upon entry into adulthood, citizenship, and parenthood” (Wineburg, 2001, p. vii).

As Wineburg points out, it is, in part, technology that is responsible for our attachment to the factual, multiple-choice test. “We use these tests,” he writes, “not because they are historically sound or because they predict future engagement with historical study, but because they can be read by machines that produce easy-to-read graphs and bar charts” (Wineburg, 2004). Any instructor weighed down by a stack of unmarked history essays who sees a colleague walk away smiling from the Scantron machine with his or her grading completed knows this point all too well. But technology may also bring their demise or transformation. A year or two ago, our colleague and fellow historian Peter Stearns proposed the idea of a history analog to the math calculator, a handheld device that would provide students with names and dates to use on exams–a Cliolator, he called it, a play on the muse of history and the calculator. He observed that there would likely be resistance to the adoption of the Cliolator, as there had been by some educators to the calculator. But he also argued, rightly in our view, that it would improve history education by displacing the fetishizing of factual memorization.

When we discussed this idea with Peter, we at first focused on the ways that the Cliolator would be much harder to build than the calculator. After all, the calculator only needs to understand some basic principles of mathematics to answer a nearly infinite number of questions. The Cliolator would need to be loaded with very substantial quantities of historical information before it would be even modestly useful. Could you possibly anticipate every conceivable fact a student might need to know? But then we realized that we were already building a version of the Cliolator in H-Bot and that millions of people were uploading those millions of historical facts for us onto their Web pages.

The combination of the magnificent, if imperfect, collective creation of the Web with some relatively simple mathematical formulas has given us a free version of the Cliolator. And the handheld device for accessing it–the cell phone–is already in the pockets of most school kids. In a very short time, when cell phone access to Web-based materials becomes second nature and H-Bot (or its successors) gets a little bit better, students will start asking us why we are testing them on their ability to respond to questions that their cell phones can answer in seconds. It will seem as odd to them that they can’t use H-Bot to answer a question about the Pilgrims as it would seem today to a student told that he or she can’t use a calculator to do the routine arithmetic in an algebra equation. The genie will be out of the bottle and we will have to start thinking of more meaningful ways to assess historical knowledge or “ignorance.” And that goes for not just high school instructors and state education officials but also the very substantial numbers of college teachers who rely on multiple-choice questions and Scantron forms.

If the future of historical teaching looks different in the age of dumb (but fast) machines, what about historical research? For historical researchers, the question is not whether mathematical techniques help us ferret out historical facts from vast bodies of historical accounts. Professional historians have long known how to quickly locate (and more importantly assess the quality of) the facts they need. Rather, the issue is whether these same approaches can help us find things in the historical sources themselves. Can we mine digitized primary sources for new insights into the past? Here, a key weakness of H-Bot–its reliance on consensus views–turns out to be a virtue. After all, a key goal of the cultural historian is to ferret out the consensus–or mentalité–of past generations. When we do history, we are generally more interested in finding out what people believed than whether what they believed was “true.”

The potential for finding this out through automated methods has become much greater because of the vast quantities of those digitized primary sources that have suddenly become available in the past decade. Even a very partial list is astonishing in its breadth and depth: the Library of Congress’s American Memory project presents more than eight million historical documents. The Making of America site, organized by the University of Michigan and Cornell University, provides more than 1.5 million pages of texts from mid-nineteenth-century books and articles. ProQuest’s Historical Newspapers offers the full text of eight major newspapers including full runs of the New York Times and the Los Angeles Times. The Thomson Corporation’s 33-million-page Eighteenth Century Collections Online (ECCO) contains every significant English-language and foreign-language title printed in Great Britain in that period. Google’s digitization effort will dwarf these already massive projects that put a startlingly large proportion of our cultural heritage into digital form. And Google’s project has apparently sparked a digitization arms race. Yahoo has announced an “Open Content Alliance” that will, in partnership with libraries and the Internet Archive, digitize and make available public domain books (Open Content Alliance, 2005). The European Commission, worried that the EU will be left behind as these two mammoth American companies convert the analog past into the digital future, recently unveiled its own expansive effort to turn the paper in European libraries into electronic bits.

Our ability to extract historical nuggets from these digital gold mines is limited because many of these collections (unlike the public Web) can only be entered through a turnstile with a hefty tariff. (ECCO, for example, costs a half million dollars.) In addition, many of them have limited search capabilities. And the terms of access for the Google project remain murky. There is no Google that can search the collections of the “deep” and “private” Web. Even so, historians have already discovered enormous riches through the simplest of tools–word search.

Yet such efforts have not truly taken advantage of the potent tools available to the digital researcher. Many of the methods of digital humanities research are merely faster versions of what scholars have been doing for decades, if not longer. German philologists were counting instances of specific words in the Hebrew and Christian Bibles, albeit in a far more painstaking manner, before the advent of computerized word counts of digital versions of these ancient texts. And long before computer graphics and maps, the Victorian doctor John Snow was able to divine the origins of a London cholera outbreak by drawing marks signifying the addresses of infected people until he saw that they tended to cluster around a water pump.

Could we learn more using more sophisticated automatic discovery techniques such as the statistical tests used by H-Bot or the normalized information distance employed by its test-taking cousin? Historians–except perhaps during their brief flirtation with quantitative history in the 1970s–have tended to view automated or statistical aids to studying the past with suspicion. They prefer to view their discipline as an art or a craft rather than a science, and they believe that the best historians look at “everything” and then reflect on what they have read rather than, for example, systematically sampling sources in the manner of a sociologist. They lionize the heroic labors of those who spent twenty years working through all of the papers of a Great Man. But the combination of the digitization of enormous swaths of the past and the overwhelming documentation available for some aspects of twentieth-century history will likely force historians to reconsider whether, like sociologists, they need to include “sampling”–especially systematic sampling–as part of their routine toolkit.5
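To make the idea of systematic sampling concrete (and only as a hedged sketch: the directory name, the file format, and the sampling interval below are invented for illustration, not drawn from any existing tool), a researcher might read every fiftieth document in a digitized collection rather than attempting to read them all:

```python
# Systematic sampling: after a random start, take every k-th document from a
# (hypothetical) directory of digitized sources, so the historian examines a
# manageable, evenly spread subset instead of "everything."
import random
from pathlib import Path

def systematic_sample(corpus_dir: str, k: int) -> list[Path]:
    files = sorted(Path(corpus_dir).glob("*.txt"))  # fix a stable ordering
    start = random.randrange(k)                     # random starting offset
    return files[start::k]                          # every k-th file after that

# e.g., one document in every fifty from a hypothetical "letters" folder
for path in systematic_sample("letters", 50):
    text = path.read_text(encoding="utf-8")
    print(path.name, len(text.split()), "words")
```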

The digital era seems likely to confront historians–who have traditionally worried about the scarcity of surviving evidence from the past–with a new “problem” of abundance. A much deeper and denser historical record, especially one in digital form, seems like an incredible opportunity and gift. But its overwhelming size means that we will have to spend a lot of time looking this particular gift horse in the mouth–and we will probably need sophisticated statistical and data mining tools to do some of the looking (Rosenzweig, 2003).6

These more systematic approaches to mining historical documentation will also need to take advantage of some of the mathematical approaches that we outline here. Historians of the eighteenth century, for example, will surely want to count the number of references to “God” and “Jesus” in the writings of Enlightenment thinkers and Founding Fathers (and Mothers). But why not go beyond that to consider the proximity of different terms? Are some writers more likely to use religious language when writing about death than when writing about other subjects? Has that changed over time? Were women writers more likely to use words like “love” or “passion” when writing about marriage in the nineteenth century than in the eighteenth century? Such questions would be relatively easy to answer with the mathematical formulas described here–assuming, of course, proper access to the appropriate bodies of digitized sources.
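As one hedged illustration of how such a proximity question might be posed computationally (the word list, the ten-word window, and the sample file names below are assumptions made for the example, not methods taken from H-Bot or from any project cited here), one could count how often religious vocabulary appears near the word “death” in a given text:

```python
# A rough proximity measure: how often does religious vocabulary fall within
# a small window of words around "death"? The vocabulary list and window size
# are illustrative assumptions, not settings from the essay's own software.
import re

RELIGIOUS = {"god", "jesus", "providence", "heaven", "soul"}

def religious_near(text: str, target: str = "death", window: int = 10) -> int:
    words = re.findall(r"[a-z']+", text.lower())
    hits = 0
    for i, w in enumerate(words):
        if w == target:
            neighborhood = words[max(0, i - window): i + window + 1]
            if RELIGIOUS & set(neighborhood):   # any religious term nearby?
                hits += 1
    return hits

# Comparing such counts (normalized by how often "death" occurs at all) across
# authors or across decades begins to answer the questions posed above, e.g.:
# religious_near(open("sermon_1741.txt").read())
# religious_near(open("essay_1784.txt").read())
```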

It would be an illusion to see such approaches as providing us with a well-marked path to historical “truth” or “certainty,” as some of the most enthusiastic promoters of quantitative history promised in the 1960s and 1970s. History will never be a science, but that doesn’t mean that a more systematic use of evidence and more systematic techniques for mining large bodies of evidence would not assist us in our imperfect quest to interpret the past (Thomas, 2005). Arthur Schlesinger, Jr., may be right to argue skeptically that “almost all important questions are important precisely because they are not susceptible to quantitative answers” (Schlesinger in Thomas, 2005, p. 56). But that doesn’t mean that quantitative and systematic methods can’t help us to develop those qualitative answers. Historical data mining might be best thought of as a method of “prospecting,” of trying out interesting questions and looking for rich veins of historical evidence that we should examine more closely. Indeed, that kind of prospecting is precisely what John Snow was doing when he marked down cholera deaths on a London map.

Finally, although H-Bot currently answers only simple historical questions, the software ultimately suggests considerably more complex ways that one might analyze the past using digital tools. Might it be possible to use other theories from computer science to raise and answer new historical questions? H-Bot uses just a few of the many principles of information theory–normalized information distance, measures of statistical significance, and methods of automated text retrieval.7 But these are merely the tip of the iceberg. Are there other, perhaps even more revealing, theories that could be applied to historical research on a digital corpus?
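For readers who want the mathematics behind these names, the two central measures can be stated compactly. The first is the normalized information distance, defined in terms of Kolmogorov complexity K; the second is its practical Web-scale approximation, the normalized Google distance of Vitanyi and Cilibrasi (2005), in which f(x) is the number of pages containing the term x, f(x,y) the number containing both terms, and N the number of pages the search engine indexes:

```latex
% Normalized information distance (uncomputable in general, since K denotes
% Kolmogorov complexity):
\mathrm{NID}(x,y) = \frac{\max\{K(x \mid y),\, K(y \mid x)\}}{\max\{K(x),\, K(y)\}}

% Normalized Google distance, its approximation over a search engine's index:
\mathrm{NGD}(x,y) = \frac{\max\{\log f(x),\, \log f(y)\} - \log f(x,y)}
                         {\log N - \min\{\log f(x),\, \log f(y)\}}
```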

Author Bios

Daniel J. Cohen is Director of Research Projects at the Center for History and New Media (CHNM) and Assistant Professor of History at George Mason University.

Roy Rosenzweig is the founder and Director of CHNM and Mark and Barbara Fried Professor of History and New Media at George Mason University.

They are the co-authors of Digital History: A Guide to Gathering, Preserving, and Presenting the Past on the Web (University of Pennsylvania Press).

Footnotes:

1 The most well-known of these programs is the National Institute of Standards and Technology’s Text REtrieval Conference (TREC), also sponsored by the U.S. Department of Defense. See http://trec.nist.gov/.

2 Much of this theory of information distance grows out of the work of A. N. Kolmogorov (Kolmogorov, 1965).

3 Our thanks to James Sparrow for pointing this out to us and for his other helpful comments on this article.

4 These are reported numbers, and subject to some debate (Search Engine Watch, 2005; Caslon Analytics, 2005).

5 Some historical sociologists have pointed the way to doing such research using the coding of texts and content analysis. See Segal and Hansen, 1992; Burstein et al., 1995; Griswold, 1981; Gamson and Modigliani, 1987.

6 In a recent grant proposal to the National Endowment for the Humanities, Gregory Crane proposes to do just that: analyze and develop advanced linguistic and statistical tools for the humanities (Crane, 2005).

7 A good introduction to these theories for humanists is Widdows, 2005.

References

Academie Des Beaux-Arts, “La Fondation Claude Monet À Giverny,” at http://www.academie-des-beaux-arts.fr/uk/fondations/giverny.htm, accessed on 21 June 2005.

Art Institute of Chicago, “Claude Monet’s The Artist’s House at Argenteuil, 1873,” at http://www.artic.edu/artexplorer/search.php?tab=2&resource=406, accessed on 21 June 2005.

Jonathan Brent and Vladimir Naumov, 2003. Stalin’s Last Crime: The Plot Against the Jewish Doctors, 1948-1953. New York: HarperCollins.

Paul Burstein, Marie R. Bricher, and Rachel L. Einwohner, 1995. “Policy Alternatives and Political Change: Work, Family, and Gender on the Congressional Agenda, 1945-1990,” American Sociological Review, volume 60 (February), pp. 67-83.

Caslon Analytics, “Net Metrics & Statistics Guide,” at http://www.caslon.com.au/metricsguide2.htm, accessed on 12 September 2005.

CenNet, “Giverny,” at http://www.cennet.co.uk/gardens-giverny.html, accessed on 21 June 2005.

Daniel J. Cohen, 2005. “By the Book: Assessing the Place of Textbooks in U.S. Survey Courses,” Journal of American History, volume 91 (March), pp. 1405-1415.

Gregory Crane, 2005. “An Evaluation of Language Technologies for the Humanities,” grant proposal to the National Endowment for the Humanities, draft of 25 June 2005, in possession of the authors.

Ben Elgin, 2005. “Revenge of the Nerds–Again,” BusinessWeek Online, July 28, at http://www.businessweek.com/technology/content/jul2005/tc20050728_5127_tc024.htm, accessed on 12 September 2005.

William Gamson and Andre Modigliani, 1987. “The Changing Culture of Affirmative Action,” Research in Political Sociology, volume 3, pp. 137-177.

John A. Garraty and Eric Foner (editors), 1991. The Reader’s Companion to American History, New York: Houghton Mifflin.

Giverny, “The village of Giverny,” at http://giverny.org/giverny/, accessed on 21 June 2005.

The Great Outdoors, “Monet’s garden, France,” at http://thegreatoutdoors.com.au/display.php?location=europe&ID=7802, accessed on 21 June 2005.

Wendy Griswold, 1981. “American Character and the American Novel: An Expansion of Reflection Theory in the Sociology of Literature,” American Journal of Sociology, volume 86, pp. 740-765.

Bernard Grun, 1982. The Timetables of History: A Horizontal Linkage of People and Events, New York: Simon and Schuster.

Gertrude Himmelfarb, 1996. “A Neo-Luddite Reflects on the Internet,” Chronicle of Higher Education, November 1, p. A56.

David Hochman, 2004. “In Searching We Trust,” New York Times, March 14, section 9, page 1.

A.N. Kolmogorov, 1965. “Three approaches to the quantitative definition of information,” Problems in Information Transmission, volume 1, number 1, pp. 1-7.

Gary J. Kornblith and Carol Lasser (editors), 2001. “Teaching the American History Survey at the Opening of the Twenty-First Century: A Round Table Discussion,” Journal of American History, volume 87 (March), at http://www.indiana.edu/~jah/textbooks/2001/, accessed 12 September 2005.

Ray Kurzweil, 2005. The Age of Spiritual Machines: When Computers Exceed Human Intelligence, New York: Penguin.

Charles C. Mann, 2005. 1491: New Revelations of the Americas Before Columbus, New York: Knopf.

Michael J. McCarthy, 1990. “It’s Not True About Caligula’s Horse; Britannica Checked,” Wall Street Journal, April 22.

Forrest McDonald, 2000. “Hamilton, Alexander” In: American National Biography Online, New York: Oxford University Press.

Justin Mullins, 2005. “Whatever happened to machines that think?” New Scientist, 23 April.

New-York Historical Society, “Alexander Hamilton: The Man Who Made Modern America,” at http://www.alexanderhamiltonexhibition.org/timeline/timeline1.html, accessed on 12 September 2005.

Open Content Alliance, 2005. Open Content Alliance, http://www.opencontentalliance.org/

Roy Rosenzweig, 2003. “Scarcity or Abundance: Preserving the Past in a Digital Era,” American Historical Review, volume 108, number 3 (June), pp. 735-762.

Search Engine Watch, “Search Engine Sizes,” at http://searchenginewatch.com/reports/article.php/2156481, accessed on 12 September 2005.

Mady Wechsler Segal and Amanda Faith Hansen, 1992. “Value Rationales in Policy Debates on Women in the Military: A Content Analysis of Congressional Testimony, 1941-1985,” Social Science Quarterly, volume 73 (June), pp. 296-309.

James Surowiecki, 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, New York: Doubleday.

William G. Thomas, III, 2005. “Computing and the Historical Imagination” In: Susan Schreibman, Raymond George Siemens, and John Unsworth (editors). A Companion To Digital Humanities, Malden, MA: Blackwell Publishers, pp. 56-68.

Paul Vitanyi and Rudi Cilibrasi, 2005. “Automatic Meaning Discovery Using Google,” at http://arxiv.org/abs/cs/0412098, accessed on 30 April 2005.

David W. Walker, “Who am I?” at http://users.cs.cf.ac.uk/David.W.Walker/who.html, accessed on 12 September 2005.

Dominic Widdows, 2005. Geometry and Meaning, Stanford, CA: CSLI Publications.

Wikitravel, “Giverny,” at http://wikitravel.org/en/Giverny, accessed on 21 June 2005.

Sam Wineburg, 2001. Historical Thinking and Other Unnatural Acts: Charting the Future of Teaching the Past, Philadelphia: Temple University Press.

Sam Wineburg, 2004. “Crazy for History,” The Journal of American History, volume 90 (March), pp. 1401-1414.

University of North Carolina at Pembroke, “Claude Monet: Life and Work,” at http://www.uncp.edu/home/canada/work/markport/travel/philly/art.htm, accessed on 21 June 2005.
