"N + 1: A Plea for Cross-Domain Data in the Digital Humanities"

Citation: “N + 1: A Plea for Cross-Domain Data in the Digital Humanities.” Debates in the Digital Humanities 2016. Ed. Matthew K. Gold and Lauren F. Klein. University of Minnesota Press, 2016: 559-568.

 

Citation: “Drafts for Against the Cultural Singularity (book in progress).” Alan Liu, 2 May 2016. doi:10.21972/G2B663. https://liu.english.ucsb.edu/drafts-for-against-the-cultural-singularity

The following is draft work (notes and bibliography not included) from one of my books in progress tentatively titled Against the Cultural Singularity: Digital Humanities & Critical Infrastructure Studies. Excerpted are a few portions from the beginning of the manuscript that bear on the critical potential of the digital humanities and critique.

For a talk including this material as well as additional excerpts from my book in progress, see the video recording of my contribution to the Workshop on “Frontiers of DH: Humanities Systems Infrastructure,” University of Canterbury, 12 November 2015 (delivered as part of a series in New Zealand during my Fulbright Specialist residency at U. Canterbury, October-November, 2015).

2 May 2016

My aim in this book is to make a strategic intervention in the development of the digital humanities.  Following up on my 2012 essay, “Where is Cultural Criticism in the Digital Humanities?”, I call for digital humanities research and development informed by, and able to influence, the way scholarship, teaching, administration, support services, labor practices, and even development and investment strategies in higher education intersect with society, where a significant channel of the intersection between the academy and other social sectors, at once symbolic and instrumental, consists in shared but contested information-technology infrastructures.  I first lay out in the book a methodological framework for understanding how the digital humanities can develop a mode of critical infrastructure studies.  I then offer a prospectus for the kinds of infrastructure (not only research “cyberinfrastructures,” as they have been called) whose development the digital humanities might help create or guide.  And I close with thoughts on how the digital humanities can contribute to ameliorating the very idea of “development”–technological, socioeconomic, and cultural–today.

Method (1)

The first step–framing for the digital humanities a suitable methodological framework for critical digital infrastructure studies–is challenging, given that the digital humanities are maturing after the late twentieth-century bloom of humanities “theory” and “cultural criticism,” which I here group together (grosso modo) under the name “critique.”  The latecomer status of the digital humanities in this regard is epitomized in the field’s debate a few years ago about “hack versus yack.”  Should digital humanists primarily program, build, or make (hack)?  Or should they instead critically interpret and theorize information media, past and present, in a manner much like normative humanities research (yack)?  At core, the debate is not really about theorized critique versus something other than such critique.  Instead, the debate situates the digital humanities at a fork between two branches of late humanities critique.  One, a hack branch (sometimes referred to as “critical making”), affiliates with, but is often more concretely pragmatic than, “thing theory,” the new materialism, actor-network theory, assemblage theory, and similar late poststructuralist theories.  The other, a yack branch, descends from the not unrelated critical traditions of Frankfurt School “critical theory,” deconstruction, Foucauldian “archaeology,” cultural materialism, postcolonial theory, and gender and race theory–especially as all these have now been inflected by media studies.

In short, the question is not whether the digital humanities should include theorized critique.  At some level, and especially in some branches, the field already does by virtue simply of belonging to the family of the contemporary humanities.  Instead, the question is what sort of critique is uniquely appropriate and purposive for the digital humanities.  What critique, in other words, not only allows the field to assist mainstream humanities critique but could not be conducted except through digital humanities methods that use technology self-reflexively as part of the very condition, and not just facility, of critically knowing and acting on culture today?

The answer to this question, I suggest, is critique at the level of, and articulated through, infrastructure–where “infrastructure,” the social-cum-technological milieu that at once enables the fulfillment of human experience and enforces constraints on that experience, today has much of the same scale, complexity, and general cultural impact as the idea of “culture” itself.  Indeed, it may be that in late modernity when the bulk of life and work occurs in organizational institutions of one kind or another, the experience of infrastructure at institutional scales (undergirded by national or regional infrastructures such as electricity grids and global-scale infrastructures such as the Internet) is operationally the experience of “culture.”  Put another way, the word “infrastructure” can now give us the same kind of general purchase on social complexity that Stuart Hall, Raymond Williams, and others sought when they reached for their all-purpose word, “culture.”  Consider the way dystopian films produced at the onset of the digital information age such as Blade Runner (1982) and the Mad Max films (beginning in 1979) characterized whole cultures by foregrounding infrastructure–in the former: glistening, noir cityscapes defined by transportation and media technology; in the latter: desert landscapes defined by fuel and water supply systems. Those films gave a foretaste of the way late-modern infrastructure is increasingly the mise-en-scène of culture.  Daily life steeps us in pervasive encounters with transportation, media, and other infrastructures that do not just neutrally convey the experience of culture but are visibly parts of our cultural experience. Late modernity is thus car culture, cable TV culture, Internet culture, smartphone culture, and any other kind of “cool” culture where, as I studied in my Laws of Cool, “cool” is a cultural affect of both “smart” technologies and the knowledge workers who use them to be, or at least look, smart.

The consequence of such convergence between infrastructure and culture for critique may be predicted as follows: especially in the digital humanities, critique must now begin to focus on infrastructure in order to have any hope of creating tomorrow’s equivalents of the great cultural-critical statements of the past. Tomorrow’s E. P. Thompson writing about the making of the working class, C. Wright Mills about white collars, Raymond Williams about culture and society, Michel Foucault about discipline, Judith Butler about gender and performativity, Donna Haraway about cyborgs, or Homi Bhabha about hybridity–among many more who could be cited–will need to include in their critiques attention to infrastructure as that cyborg being whose making, working, disciplining, performance, gender formation, and hybridity are increasingly part of the core identity of late modern culture.

What would the method for such a digital humanities cultural criticism focused on infrastructure actually look like? [material elided here]  . . . [P]rosaically, the style of digital humanities infrastructural critique I imagine–one that takes advantage of modes of thinking already prevalent in the field–may be called lightly-antifoundationalist.  The question that I concoct this phrase to answer is how much antifoundationalism–or, perhaps “anti-groundwork” (to allude to Marx’s Grundrisse der Kritik der Politischen Ökonomie)–is actually useful for critical infrastructure studies.  Mainstream humanistic critique (the “hermeneutics of suspicion” that Rita Felski has recently taken to task in her critique of critique) has often been antifoundationalist all the way down according to a three-stage logic that might be outlined as follows.

In its first logical moment, critique recognizes that the “real,” “true,” or “lawful” groundwork (i.e., infrastructure) for anything, especially the things that matter most to people, such as the allocation of goods or the assignation of identity, is ungrounded.  For example, while there are material reasons for resource allocation and the social relations of force needed to do that dirty deed–i.e., for political economy and society–any particular political economy and society are arbitrary and, in the last analysis, unjust.  Political economy and society are thus not grounds but, to play on the word, precisely groundworks: particular ways of working the ground (i.e., a mode of production) supported by discursive, epistemic, psychic, and cultural institutions for ensuring that the work continues in the absence of rational or moral foundation.

In its second logical moment, critique then goes antifoundationalist to the second degree by criticizing its own standing in the political-economic system–a recursion effect attested in now familiar, post-May-1968 worries that critics themselves are complicit in elitism, “embourgeoisement,” “recuperation,” “containment,” and majoritarian identity, not to mention tenure.

Finally, in its third logical moment, critique seeks to turn its complicity to advantage–for example, by positioning critics as what Foucault called embedded or “specific intellectuals” acting on a particular institutional scene to steer social forces.  A related idea is to go “tactical” in the manner theorized by Michel de Certeau, who argued that people immured in any system can appropriate that system’s infrastructure through bottom-up agency for deviant purposes (as in his paradigm of jaywalking in the city).  Media critics, including new media critics, have generalized de Certeau’s notion in the name of “tactical media,” meaning media whose platforms, channels, interfaces, and representations can be appropriated by users for alternative ends.

In general, the digital humanities tend to do things according to methods that slice out from the above total arc of critique just the latter tactical moment.  Such slicing–hacking critique to sever its roots from purist antifoundationalism–brings digital humanities critique into the orbit of several late- or post-critical approaches with a similar style (style rather than full-blown theory precisely because they eschew foundational purity).  One approach that James Smithies has associated with the digital humanities is “postfoundationalism” in his “Digital Humanities, Postfoundationalism, Postindustrial Culture.”  Borrowing from the philosopher of science Dimitri Ginev, Smithies argues that postfoundationalism is “an intellectual position that balances a distrust of grand narrative with an acceptance that methods honed over centuries and supported by independently verified evidence can lead, if not to Truth itself, then closer to it than we were before” (¶ 26).  Postfoundationalism is thus well matched to the digital humanities, Smithies suggests, if we think of the digital humanities as “a process of continuous methodological and . . . theoretical refinement that produces research outputs as snapshots of an ongoing activity rather than the culmination of ‘completed’ research” (¶ 29).  A related idea is “critical technical practice,” which Michael Dieter (“The Virtues of Critical Digital Practice”)–building on Philip Agre’s writings on artificial intelligence research–makes a goal of the digital humanities.  Dieter quotes from Agre: “The word ‘critical’ here does not call for pessimism and destruction but rather for an expanded understanding of the conditions and goals of technical work. . . .  Instead of seeking foundations it would embrace the impossibility of foundations, guiding itself by a continually unfolding awareness of its own workings as a historically specific practice.”  Other ideas that are lightly-antifoundationalist in this way, though not to my knowledge yet applied to the digital humanities, include Bruno Latour’s “compositionism” (fixed on neither absolute foundations of knowledge nor absolutist refutations of such foundations but instead on mixed, impure, make-do, and can-do compositions of multiple positions; “An Attempt at a ‘Compositionist Manifesto’,” PDF) and Ackbar Abbas and David Theo Goldberg’s “poor theory” (which uses “tools at hand” and “limited resources” to engage “with heterogeneous probings, fragmentary thinking, and open-endedness” in resistance to “totalization, restriction, and closure”) (“Poor Theory: Notes Toward a Manifesto”, PDF).

All these lightly-antifoundationalist approaches are tactical rather than strategically pure because their very potential for critique arises from polluting proximity to, and sometimes even partnership with, their objects of critique.  Unlike distantiated critique, that is, tactical critique (as the root of the word “tactic” might indicate) makes contact.  Smithies thus notes postfoundationalism’s function as a “bridging concept” for the “interdependence” and “entanglement” of the digital humanities with postindustrialism (¶ 8, 3, 2).  Indeed, I add that all the approaches thus far mentioned as a “light foundation” for critical infrastructure studies are similarly contaminated by the double principle of efficiency and flexibility, which (as I articulated in The Laws of Cool) is the two-stroke engine of the postindustrial mode of production.  As it were, all the approaches I have mentioned are instances of “lean” and “just-in-time” critique and thus not dissimilar in spirit to the in-house critique that postindustrial corporations at the end of the twentieth century began to design into their own production lines by famously empowering workers to “stop the line” ad hoc (or, less catastrophically, to suggest incremental improvements) when they saw something wrong.  Such dirty contact with postindustrialism is both the weakness and strength of lightly-antifoundationalist approaches, where weakness means being swallowed up by the system and strength comes from getting close enough to the system to know its critical points of inflection, difference, and change.  If, as Smithies says, the digital humanities are “deeply entangled” in postindustrialism, in other words, entanglement need not be the same as equivalence.  It is also engagement.

The critical potential of this tendency in the digital humanities to be lightly-antifoundationalist can now be stated: it is precisely the ability to treat infrastructure not as a foundation, a given, but instead as a tactical medium that opens the possibility of critical infrastructure studies as a mode of cultural studies.  And it is such cultural studies that will allow the digital humanities to fulfill their final-cause critical function at the present time, which is to help adjudicate how academic infrastructure connects higher education to, but also differentiates it from, the workings of other institutions in advanced technological societies.  The critical function of the digital humanities going forward, in other words, is to assist in shaping smart, ethical academic infrastructures that not only further normative academic work (research, pedagogy, advising, administration, etc.) but also intelligently transfer some, but not all, values and practices in both directions between higher education and today’s other powerful institutions–business, law, medicine, government, the media, the creative industries, NGOs, and so on.

Method (2)

At present, some of the most influential general understandings of infrastructure cited by digital humanists such as Sheila Anderson and James Smithies studying humanities cyberinfrastructure in particular have been the Large Technical Systems (LTS) approach, stemming originally from the historian Thomas Hughes’s Networks of Power (1983), and the information-ethnography approach stemming from Susan Leigh Star, Geoffrey Bowker, and their circle. Good expositions of both are combined in one of the best conceptualizations of infrastructure I have so far found: a document of 2007 titled “Understanding Infrastructure: Dynamics, Tensions, and Design” (PDF) (whose authors include Bowker) representing the final report to the National Science Foundation of a workshop it sponsored.

Adding to these general approaches to infrastructure, I borrow in this book another portfolio of thought that to my knowledge has not yet been introduced directly to infrastructure studies. It is also a portfolio largely unknown in the digital humanities and, for that matter, in the humanities as a whole even though it is broadly compatible with humanities cultural criticism.  The portfolio consists of the “neoinstitutionalist” approach to organizations in sociology and, highly consonant with it, “social constructionist” (especially “adaptive structuration”) approaches to organizational infrastructure in sociology and information science.  Taken together, these approaches explore how organizations are structured as social institutions by so-called “carriers” of beliefs and practices (i.e., culture), among which information-technology infrastructure is increasingly crucial.  Importantly, these approaches are a social-science version of what I have called lightly-antifoundationalist.  Scholars in these areas “see through” the supposed rationality of organizations and their supporting infrastructures to the fact that they are indeed social institutions with all the irrationality that implies.  But they are less interested in exposing the ungrounded nature of organizational institutions and infrastructures (as if it were possible to avoid or get outside them) than in illuminating, and pragmatically guiding, the agencies and factors involved in their making and remaking.  Such approaches are thus inherently a good match for the epistemology of building, unbuilding, and rebuilding in the digital humanities.

More than a good match, neoinstitutionalism and the social science of organizational technologies offer exactly the right tactical opening for a digital humanities cultural criticism because they are all about the site on which the already existing critical force of the digital humanities is pent up: institutional forms of technologically-assisted knowledge work.  After all, the digital humanities stand in contrast to new media studies and network critique among cousin fields as the branch of digitally-focused humanities work that has been primarily focused on changing research, authorship, dissemination, and teaching inside (and across) academic institutions and related cultural or heritage institutions rather than on broader commentary directed externally at society and social justice.  The digital humanities are all about developing analytical, publishing, curatorial, and hybrid-pedagogical tools and practices at scales ranging from standalone projects to federated or regional frameworks; creating new university programs and centers; changing the accepted notion of academic careers (e.g., to include “alt-ac” alternative academic careers); and, ultimately, instilling a new scholarly digital ethos in the academy in the name of “collaboration” and “open access.”  As a consequence, the existing critical energy of the digital humanities–sometimes quite passionate and even militant–has been primarily devoted to such institutional issues.  Breaking down the paywalls of closed publication infrastructures, for instance, is the digital humanities version of storming a university administration building in the 1970s.

Can neoinstitutional and social-structuration-of-technology approaches to understanding the evolving relation between the academic institution and today’s more domineering institutions (most notably, business and government) help the digital humanities release their intramural critical energy?  Can that release help propel not just change in higher education but, through higher education and the technological infrastructures that mediate its relationship to other institutions, also extramural change in the larger society that higher ed contributes to?  (Besides its focus on culture, I note, one of the special strengths of neoinstitutionalism that makes it attractive to add to Large Technical Systems analyses of infrastructure is that it is especially attuned to studying change and divergence among dominant institutional systems.)  In short, can the considerable existing intelligence, idealism, and moral force of the digital humanities be redirected from being only an instrument of institution work to becoming through interventions in instrumental infrastructure also a way to act on institutions and their wider social impact?

But I do not wish to overreach, which is also why I think an approach focused on institutions and their infrastructures is particularly appropriate.  Ultimately, the digital humanities field must be critical in a way that does not ask it inauthentically to reach beyond its expertise and mandate to bear exaggerated responsibility for larger social phenomena.  Acting out through the digital humanities about larger social issues is necessary.  But such actions must be complemented by creating infrastructures and practices that make their social impact by being what Susan Leigh Star called “boundary objects”–in this case boundary objects situated between the academic institution and other major social institutions.  It is in this boundary zone–just as one example, “content management system” infrastructures whose use by scholars oscillates between corporate “managed” and “open community” philosophies–that higher education can most pertinently influence, and be influenced by, other institutions through what I earlier called “shared but contested information-technology infrastructures.”  It is in this boundary zone of hybrid scholarly, pedagogical, and administrative institutional infrastructure that we need the attention of skilled and thoughtful digital humanists, even if the interventions they make are not called anything as ambitious as “activism” but instead simply “building.”

[End of excerpt]

“Practice and Theory of ‘Distant Reading’ — An Introductory Workshop on Digital Humanities Methods.” University of San Francisco, 1 March 2016.

  • Abstract: In this beginner’s hands-on workshop and discussion, Alan Liu will introduce the idea of “distant reading” and some of the digital humanities methods and tools commonly used to pursue it in digital literary studies, digital history, sociology, and other humanities and social science disciplines. Methods covered include text analysis, topic modeling, and social network analysis. Workshop participants will try their hand at one or more tools used for these methods, aiming not for mastery or even competence but just to capture an interesting “souvenir,” e.g., a screenshot. (For the purposes of the workshop, even failed attempts can produce an interesting souvenir.)  Liu will then lead a broader discussion based on the souvenirs about the opportunities and limitations of digital humanities methods. (A Web site for the workshop with detailed agenda and resources will be made available in advance to enrolled workshop participants.)
    • Workshop Agenda
    • Workshop “Souvenirs” (Examples and screenshots produced by workshop participants)
    • Workshop Workstation Set-up (Software, data resources, and workspace for the workshop. This page is designed to aid U. San Francisco technical staff in setting up the machines in the lab. However, workshop participants can use the specs to set up a duplicate of the working environment for the workshop on their own computers if they wish.)

 

“The Future of the Humanities / The Future and the Humanities.” University of San Francisco. 29 February 2016.

  • Abstract: Drawing on research and advocacy conducted by the 4humanities.org initiative that he co-founded, Alan Liu discusses the contemporary public perception of the humanities, methods of using digital research and communications to develop effective humanities advocacy, and the broader question of the “future” of humanities disciplines, many of which consider history and the past to be their core. What is the relationship of the humanities to the future? And how can designing a stance on humanities and the future position the humanities disciplines to draw on, but also to help reform, today’s power discourses of “invention,” “innovation,” “disruption,” and “creativity”? The talk details in particular the 4Humanities “WhatEvery1Says” project, which uses digital methods to study a large corpus of media and other public speech about the humanities in order to assist the humanities in reframing the debate.

“How to Be a Humanist in the Year 2030: Digital Humanities and the New Norms of Scholarship (A Prophecy).” Critical Speaker Series. University of North Carolina, Chapel Hill. 10 February 2016.

“Key Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” Critical Speakers Series. University of North Carolina, Chapel Hill, 9 February 2016.

  • Abstract: How do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities, and vice versa? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”

Sessions at the 2016 MLA Convention related to digital humanities research, teaching, or the direction of the DH field, with some overlap with new media studies, writing studies, editing, and other topics. (Some sessions listed here are not centrally on DH but include at least one relevant paper.)

Online versions of list: .docx | .pdf

This list is compiled by Alan Liu, U. California, Santa Barbara (with kudos to Mark Sample for the idea, based on his listing of MLA DH sessions at previous MLA conventions). Please send Alan corrections and notices of sessions he has missed: ayliu@english.ucsb.edu

List last revised: 23 Dec 2015

“Key Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” Victoria University of Wellington. 1 December 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: How do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities, and vice versa? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”

 

“WhatEvery1Says About the Humanities.” University of Otago, Dunedin. 27 November 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: Drawing on research he directs for the 4humanities.org initiative, Alan Liu discusses the sociocultural context and digital-humanities methods involved in 4Humanities’s ongoing study of public discourse on the humanities. How do data mining and text analysis of large repositories of newspapers, magazines, etc. (e.g., through “topic modeling”) help put in perspective the themes–some might call them “memes”–declared in headlines about the decline of the humanities, the crisis of the humanities, etc.? What is the larger, ambient field of discourse about, and by, humanists behind those headlines? For example, what does it mean that obituaries, wedding announcements, and similar least particles of journalistic media mention the association of people with humanities education, fields, or institutions? What does the “heat map” of hot discourse about, but also cool background radiation from, the humanities look like?
  • Event announcement.

 

“Literature+.” University of Otago. 27 November 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: Starting with a talk by Alan Liu on his experience teaching the digital humanities in his Literature+ classes and other digital humanities classes (and on the larger issues of “hybrid pedagogy,” “MOOCs,” and “EdTech” in the background), this workshop is a chance for participants to think together about the future of teaching in the humanities. There are many past practices and formats of teaching that humanists have idealized—e.g., the tutorial on the “Oxbridge” model, the Socratic method or “dialectic” in a classical sense, the seminar, or, put in its most normalized modern mode, “class discussion”—even while humanists are caught up in such dreary antitheses to their ideal as large lecture courses titled “Introduction to …” regimented by teaching-assistant-led sections, patrolled by plagiarism-catching algorithms, and so on. How will humanist pedagogical ideals and practices adapt to the digital age? What is the relation, for example, between a Socratic seminar and either “hybrid pedagogy” or a MOOC with augmented peer-to-peer interactions?
  • Event announcement.

 

“Against the Cultural Singularity: Digital Humanities and Critical Infrastructure Studies.” Workshop on “Frontiers of DH: Humanities Systems Infrastructure,” University of Canterbury. 12 November 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: Following up on the question asked in the title of his 2012 essay “Where is Cultural Criticism in the Digital Humanities?”, Alan Liu will present drafts from a book that imagines modes of cultural criticism–in particular, critical infrastructure studies–appropriate and native to the digital humanities. His talk focuses on the role of technology infrastructure in (and between) neoliberalism’s major “knowledge work” institutions (including higher education). Can digital humanities research and development be redirected from being primarily instruments of institution work to becoming also ways to act on institutions and their wider social impact, in part through intelligent and ethical interventions in infrastructure? How do specifically digital humanities research and teaching infrastructures fit in that enterprise, which resembles but differs from “enterprise technology systems”?
  • Video of the lecture (55 min.)

 

“The Future of the Humanities / The Future and the Humanities.” University of Canterbury. 5 November 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: Drawing on research and advocacy conducted by the 4humanities.org initiative that he co-founded, Alan Liu discusses the contemporary public perception of the humanities, methods of using digital research and communications to develop effective humanities advocacy, and the broader question of the “future” of humanities disciplines, many of which consider history and the past to be their core. What is the relationship of the humanities to the future? And how can designing a stance on humanities and the future position the humanities disciplines to draw on, but also to help reform, today’s power discourses of “invention,” “innovation,” “disruption,” and “creativity”? The talk details in particular the 4Humanities “WhatEvery1Says” project, which uses digital methods to study a large corpus of media and other public speech about the humanities in order to assist the humanities in reframing the debate.

 

“Key Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” University of Canterbury. 28 October 2015. (Lecture delivered as part of a series in New Zealand during Fulbright Specialist residency at U. Canterbury, October-November, 2015.)

  • Abstract: How do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities, and vice versa? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”
  • Event announcement.

 

Citation: “Hello (again), world!” Alan Liu, 4 October 2015. https://liu.english.ucsb.edu/hello-again-world/

This is the inaugural message I posted on 4 October 2015 to the new “digital-humanities@lsmail.ucsb.edu” listserv at UC Santa Barbara, which I started that month.

Excerpt

Hello (again), world!

“Hello, world!” is the customary first output for a beginner trying out a programming language. At UC Santa Barbara, many of us were saying hello, world! to the digital humanities as early as the start of the 1990s, though the name for the field had not yet been invented….

So, the underlying question that motivates me to start this digital humanities listserv now in 2015—some 20 years after we all began the great digital adventure at UCSB—is: what next? How can we exploit our advantage as early movers in the field (and in the related social science, arts, and other digital fields whose collaboration with the humanities is part of the longtime DNA of digital studies on campus) in a way that builds the next generation of digital humanities at UCSB? For example, would it be possible to exploit our unique strengths by creating a unified intellectual agenda—supported by publications, conferences, curricula, etc.—for the “digital humanities” and “new media studies”? (That unified framework doesn’t really exist yet nationally or internationally. I am amazed at how many scholars, artists, social scientists, and engineers I know working on new media or network studies with whom I have no opportunity to collaborate in conferences, co-editions, journal venues, courses, or institutional programs because such apparatus now tends to be either for “digital humanities” in a narrow sense or for “new media studies.”)

 

“Key Trends in Digital Humanities – How the Digital Humanities Challenge the Idea of the Humanities.” Siberian Federal University, 25 September 2015.

  • Abstract: What are the digital humanities? And how do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”


“N + 1: A Plea for Cross-Domain Data in the Digital Humanities.” Siberian Federal University, 21 September 2015.

  • Abstract: In experimenting with text analysis, machine learning, visualization, and other methods, digital humanists often study materials collected from specific segments of the human documentary record–for example: a study corpus consisting just of one of the following at a time: novels, poems, letters, newspapers, historical maps, crime records, political speeches, etc. Such corpora also tend to be tuned to the specific domain of a scholar’s expertise (e.g., novels of a particular century and nation). In this short, speculative talk, Liu asks: what could be gained methodologically and theoretically by deliberately hybridizing domains–for example, pairing any two or three kinds, periods, or nationalities of materials in a controlled way? What would be involved, in other words, in giving digital humanities corpora some of the mixed quality of their uncanny doubles (alike yet dissimilar): “archives” in the strict sense and “corpora” in the corpus linguistics sense?
            The talk concludes with a presentation of aspects of the 4Humanities.org “WhatEvery1Says” research project (topic modeling public discourse about the humanities) that bear on the theme of cross-domain knowledge.


“N + 1: A Plea for Cross-Domain Data in the Digital Humanities.” Keynote Panel on “Data, Corpora, and Stewardship,” Digital Humanities at Berkeley Summer Institute, University of California, Berkeley, 17 August 2015.

  • Abstract: In experimenting with text analysis, machine learning, visualization, and other methods, digital humanists often study materials collected from specific segments of the human documentary record–for example: a study corpus consisting just of one of the following at a time: novels, poems, letters, newspapers, historical maps, crime records, political speeches, etc. Such corpora also tend to be tuned to the specific domain of a scholar’s expertise (e.g., novels of a particular century and nation). In this short, speculative talk, Liu asks: what could be gained methodologically and theoretically by deliberately hybridizing domains–for example, pairing any two or three kinds, periods, or nationalities of materials in a controlled way? What would be involved, in other words, in giving digital humanities corpora some of the mixed quality of their uncanny doubles (alike yet dissimilar): “archives” in the strict sense and “corpora” in the corpus linguistics sense?
            The talk concludes with a presentation of aspects of the 4Humanities.org “WhatEvery1Says” research project (topic modeling public discourse about the humanities) that bear on the theme of cross-domain knowledge.


Citation: Research Report: “How Public Media in the U.S. and U.K. Compare in Their Terminology For the Humanities.” WhatEvery1Says Project, 4Humanities.org (3 August 2015). DOI: 10.5072/FK2FN18G5G

Excerpt

While assembling a study corpus of public discourse in English about the humanities (since about 1990, when newspapers began fully digitizing articles), the 4Humanities “WhatEvery1Says” Project (WE1S) encountered the following questions of linguistic usage:

  • How are the humanities referred to in newspapers, magazines, and other media in the U.S. compared to the U.K. (and other Commonwealth nations)? Especially, what from a comparative perspective is the overlap/difference between the terms “humanities,” “liberal arts,” “arts,” and “the arts”?
  • Do the proportions of such terms change over time in each nation?
  • Most practically, which terms (“humanities,” “liberal arts,” “arts,” and “the arts”) should the WE1S project use for searches in newspaper APIs and other resources as it locates texts for its corpus? (Since public discourse in newspapers, magazines, and other media is too ample to be collected in toto, WE1S aims to collect just what might be called the “neighborhood” of discussion of the humanities. The project will then apply text analysis methodology to this neighborhood to refine its understanding of the way the humanities are discussed.)

The following is a preliminary study focused on comparing linguistic usage in the U.S. and U.K.  It is conducted by Alan Liu with assistance from other members of the WE1S research team and the co-leaders of 4Humanities.org. The study will be extended and revised as WE1S research continues.
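The kind of term comparison the report describes can be sketched as a simple tally of term shares per nation. The following minimal Python illustration uses an invented three-article mini-corpus; the articles, counts, and `term_counts` helper are hypothetical stand-ins, not WE1S code or data:

```python
import re
from collections import Counter

# Hypothetical mini-corpus: (nation, year, text) records standing in for
# newspaper articles; the real WE1S corpus is gathered via newspaper APIs.
articles = [
    ("US", 2013, "Funding for the humanities and liberal arts fell again."),
    ("UK", 2013, "The arts faced new cuts; arts funding bodies protested."),
    ("US", 2014, "A liberal arts degree still teaches the humanities."),
]

# Terms whose relative usage the study compares.
TERMS = ["humanities", "liberal arts", "the arts"]

def term_counts(text):
    """Count occurrences of each study term in one article (case-insensitive)."""
    lowered = text.lower()
    return Counter({t: len(re.findall(re.escape(t), lowered)) for t in TERMS})

# Aggregate raw counts per nation, then report each term's share.
totals = {}
for nation, year, text in articles:
    totals.setdefault(nation, Counter()).update(term_counts(text))

for nation, counts in sorted(totals.items()):
    n = sum(counts.values())
    shares = {t: counts[t] / n for t in TERMS if n}
    print(nation, shares)
```

A fuller version would group counts by year as well as nation, which is how the proportions-over-time question above would be addressed.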

 

“Digital Humanities and the Reorientation of the Humanities Knowledge Space.” Keynote talk for Expert Meeting on Spatial Discovery, UC Santa Barbara. 18 June 2015.

  • Abstract: I am participating in this “Spatial Discovery” event not as an expert in spatial research, linked data, or libraries but instead as a “digital humanist” offering a reflection on some of the themes of the event from a humanist perspective. With the advent of digital media and collections, the traditional “knowledge space” of the humanities disciplines has been eroding. At the interface, that space consisted of such spatially organized structures as the “page,” “book,” and associated finding aids. More foundationally, the knowledge space of the humanities depended on the tacit orientation provided by place-based collections and the spatial-juridical architecture of archives (with their hybrid physical-conceptual notions of the “archival threshold,” “respect des fonds,” “original arrangement,” etc.). This talk considers methods and practices in the digital humanities that at once further the erosion of the knowledge space of the humanities and attempt to reconstitute that space in new ways, including through map, network, and provenance structures serving as way-finding aids.


"The Humanities in the Digital Age"

First page

Citation: Liu, Alan, and William G. Thomas III. “Humanities in the Digital Age.” Between Humanities and the Digital. Ed. Patrik Svensson and David Theo Goldberg. Cambridge, MA: MIT Press, 2015: 45-40.

[Go to Course Site] This course works on two parallel tracks:
          On one track, students are introduced to methods and tools of the digital humanities–text encoding, data-mining and text analysis (including the cutting-edge approach known as “topic modeling”), social network analysis, mapping, and visualization. These provide extra leverage when reading individual texts or small collections of texts, and really come into their own when reading materials that literary interpretation previously had no way to handle–e.g., “big data” collections of texts or hybrid collections of texts (e.g., novels and newspapers of the nineteenth century). Each class in the early weeks of the course will introduce students to concepts, methods, and tools in the digital humanities, and require “practicums” in which students experiment with the tools in exploratory ways.
          On a second track, the course is an experiment in collaborative project-making. With the help and supervision of the instructor, we’re going to use the new digital methods to make a class project demonstrating the digital reading of literature…. [more]

“Key Research Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” Center for Information Technology lecture series, UC Santa Barbara. 30 April 2015.


“Key Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” Bucknell University. 27 April 2015.

  • Abstract: How do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities, and vice versa? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”


“The 4Humanities Initiative.” Bucknell University. 27 April 2015.

“Against the Cultural Singularity: Toward a Critical Digital Humanities.” Texas Digital Humanities Consortium conference, University of Texas at Arlington. 11 April 2015.

  • Abstract: Following up on the question he asked in the title of his 2012 essay “Where is Cultural Criticism in the Digital Humanities?”, Alan Liu will present early drafts from a book he is writing that imagines a mode of cultural criticism appropriate and native to the digital humanities. His talk focuses on the role of technology in, and between, neoliberalism’s major “knowledge work” institutions (including higher education) as the context in which digital-humanities research and development can be redirected from being primarily instruments of institution work to becoming also ways to act on institutions and their wider social impact. What methodological framework can assist the digital humanities in exploring that context? What kinds of scholarship, projects, and tool-building might constitute a critical digital humanities?
  • Storify of Twitter live-coverage of the talk by Adeline Koh (@adelinekoh).
  • Storify of Twitter posts from the #TXCHC conference by Jody Bailey (@reffervescent).

 

“Against the Cultural Singularity: Toward a Critical Digital Humanities.” History and Theory of New Media lecture series, Berkeley Center for New Media, University of California, Berkeley. 5 March 2015.

  • Abstract: Following up on the question asked in the title of his 2012 essay “Where is Cultural Criticism in the Digital Humanities?”, Alan Liu will present early drafts from a book that imagines modes of cultural criticism appropriate and native to the digital humanities. His talk focuses on the role of technology in (and between) neoliberalism’s major “knowledge work” institutions (including higher education). Can digital-humanities research and development be redirected from being primarily instruments of institution work to becoming also ways to act on institutions and their wider social impact? What methodological framework can assist the making of a revisionary “enterprise technology”? What kinds of scholarship, projects, and tool-building might constitute a critical digital humanities?


“Against the Cultural Singularity: Toward a Critical Digital Humanities.” Mellon Research Initiative in Digital Cultures, University of California, Davis. 3 March 2015.

  • Abstract: Following up on the question asked in the title of his 2012 essay “Where is Cultural Criticism in the Digital Humanities?”, Alan Liu will present early drafts from a book that imagines modes of cultural criticism appropriate and native to the digital humanities. His talk focuses on the role of technology in (and between) neoliberalism’s major “knowledge work” institutions (including higher education). Can digital-humanities research and development be redirected from being primarily instruments of institution work to becoming also ways to act on institutions and their wider social impact? What methodological framework can assist the making of a revisionary “enterprise technology”? What kinds of scholarship, projects, and tool-building might constitute a critical digital humanities?
  • Jenae Cohn, “A Recap of Alan Liu’s Talk, ‘Against the Cultural Singularity: Toward a Critical Digital Humanities.'” (HASTAC blog post reporting on the talk, 12 March 2015)

 

“Key Trends in Digital Humanities — How the Digital Humanities Challenge the Idea of the Humanities.” Digital Humanities at Claremont Colleges (DH@CC) Spring Symposium, Claremont Colleges. 18 February 2015.

  • Abstract: How do such key methods in the digital humanities as data mining, mapping, visualization, social network analysis, and topic modeling make an essential difference in the idea of the humanities, and vice versa? Using examples of digital humanities research, Alan Liu speculates on the large questions that confront the humanities in the face of computational media–most importantly, questions about the nature and function of interpretive “meaning.”
  • Storify of the talk and symposium.
  • Video of talk (videography by AJ Strout) (1 hr., 18 min.)

 

"The Big Bang of Online Reading"

First page

Citation: “The Big Bang of Online Reading.” Advancing Digital Humanities: Research, Methods, Theories. Ed. Paul Longley Arthur and Katherine Bode. Palgrave Macmillan, 2014: 274-90.

  • DOI of book: 10.1057/9781137337016
  • Full text (open-access author’s pre-copy-edited final version in institutional repository, PDF)


Citation: Peter de Bolla, “Digital Knowledge: Format, Scale, and the Information-knowledge Parallax at the Total Knowledge Horizon — A Reply to Alan Liu.” 15 November 2014. https://liu.english.ucsb.edu/peter-de-bolla-reply-to-alan-lius-theses-on-the-epistemology-of-the-digital/

The following was written by Peter de Bolla of Cambridge University in reply to Alan Liu’s “Theses on the Epistemology of the Digital,” a solicited follow-up to Liu’s participation in the second planning consultation session of the Cambridge University Centre for Digital Knowledge (CDK). Held on 7 May 2014 at the Cambridge Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), the consultation session focused on “digital epistemology,” one of the two intended thematic strands of the Centre for Digital Knowledge. A previous planning consultation at CRASSH focused on the other intended strand of “digital society.”

At the consultation session that prompted Liu’s “Theses,” de Bolla initiated proceedings by reading a not-yet-published paper on the epistemology of the digital that served as a provocation for the proceedings.

(November 2015; posted here 15 November 2015)

It is clear, as Alan Liu points out in his “Theses on the Epistemology of the Digital,” that knowledge is not the first thing that comes to mind when we turn our attention to the digital. As he notes, we more commonly think of the digital as having considerable impact on the social, economic, political and cultural. This, it seems to me, is because our primary engagement with the digital is at the level of its effects and affordances with respect to communication, information storage and retrieval, statistical inference and manipulation. And the overwhelming majority of us use the digital as if it were analogue: the interfaces we are comfortable with simulate analogue forms. This is why the question of knowledge is so far back, so buried in our encounter with the format of the digital.

It is true that programmers take a different route, but even in this case the question of knowledge is not particularly to the fore as programmers attend to system more than episteme. They are most concerned to get the ontology of their programmes to function efficiently. Perhaps this focus on “ontology” rather than “episteme” reveals a truth: the ontology of computation (where “ontology” is now being used in its philosophical sense) is order and sequence. In contrast to this sense, “ontology” is used in programming to mean “a hierarchically organized classification system associated with a controlled, structured vocabulary that is designed to serve the retrieval and integration of data” [Ceusters and Smith, Switching Codes].

The Cambridge University Centre for Digital Knowledge (CDK) proposes to put the question of knowledge to the fore in our attempts to understand the difference that is made by the move from the analogue to the digital. This project is, at least initially, focussed on the difference in format. Seen from this perspective the alterations in the social, political, economic and cultural that arise when digital technologies become ubiquitous are epiphenomenal: these changes, important as they surely are, tend to obscure the alteration in episteme that occurs when analogue materials are migrated into digital format. This alteration is, I think, indicated in Liu’s observation that the “unique quality, or quantum, that is digital knowledge” involves “rebalancing the values of quality and quantity.” Put another way, data is knowledge in computational environments. The problem for us humans is that at the quantum level of data we are unable to perceive that knowledge. It is only at higher orders of data configuration that we are able to transform information into larger bits that we identify as knowledge. But to the computer this transformation is not one of type or category–a changing epistemology–it is merely one of scale. So, in concert with Liu’s remark that the CDK will need to “let go of too fixed an adherence to established modern ideas of knowledge,” we mean to push hard at the gates which sort information from knowledge.

It is manifestly clear that, as Liu points out, the landscape within which knowledge is produced and disseminated has changed significantly in recent times. There are new “systems, forms, and standards of knowledge” which pit “algorithmic” against “philosophical” knowledge, or “multimedia instead of print-codex knowledge.” The question for the CDK, however, is once again: are these changes epiphenomenal? Or, to put that more carefully, has our attention to these alterations so far been less directed to how epistemic shifts are at play than to the effects of these shifts? Liu’s attention to the flatness of knowledge in the realm of big or crowd data (in contrast to my “vertical axiology”) is, I think, directed at one of these effects. The distribution of information, its access and pliability all fall within a flat terrain where the “flatness” is the price of entry to the information. That is to say there is no–or very little–cost. But is that the same thing as a “flat epistemology”? As Liu puts it: “the wisdom of the crowd challenges the very notion of an epistemology.” Once more this seems to be a question of scale: is it the case that at this level of quanta one cannot see a theory of knowledge? Or, perhaps as important, at this level one does not need a theory of knowledge since the mass circulation of and access to information works as if it were knowledge.

I am not sure that this account is fully satisfying or convincing. Perhaps we need a more complex topography in order to see what is going on. If we were to plot the terms “information,” “knowledge,” “opinion” within a multidimensional space–a vector space–it might be possible to begin to see how these slightly different epistemic identifications overlap and connect, disconnect and repel. Here are some thought experiments. If one way of seeing knowledge is this: “knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so” (David Deutsch), and one way of seeing the wisdom of the crowd is “mass circulation opinion,” then one of the vectors we would need in order to plot the topology would be time or persistence. Mass circulation opinion has, for example, a very distinctive temporality that might be represented as a wave form that corresponds to the volume of traffic at time t. Knowledge, in the Deutsch formulation, tends towards stability and might be represented as a flat line. The temporality of information might be represented as a set of discontinuities as older information becomes updated and replaced by more recent information. A vector space model would seek to plot these data points within a planar matrix one of whose axes would be temporality. Another might be quantity. This would seek to plot the topology within which expert knowledge, the knowledge of the few, is distributed against mass circulation opinion, the knowledge of many. What measures would be helpful here? Would this help one to identify the range of measure x within which opinion tends towards knowledge? This, I think, might be close to Liu’s observation that knowledge “may not be either truth or story but just a probability distribution.”

If one were now to return to the initiating question for the CDK–the issue of digital format–a similar observation might be made. The topology I have begun to sketch could equally be applied to the scalar issue. If one were to plot quanta of data in a vector space alongside persistence of accepted knowledge, it might be possible to see more clearly how information at the very basic level of computational format (zero-one) contains within it knowledge, but knowledge that can only be seen at very high levels of data compression. Put this way, knowledge decomposes into information at the quantum level. This seems to me to speak to Liu’s observation that “the humanistic and quantum universes of uncertainty are doppelgängers of each other.” It is a matter of scale. From the massive scale of knowledge in the humanities uncertainty appears as ambiguity; from the minuscule scale of data in the quantum world ambiguity appears as uncertainty.

This brings me to the question of the distinctiveness of the humanities. From whichever way I look at this issue it seems to me that we obscure too much when we maintain disciplinary coherence. The humanities are wedded to verification just as the non-humanities are. The humanities give considerable weight to accuracy just as the non-humanities do. But in saying this we should not seek to make all knowledge wear the same overcoat. There are distributions within one kind of knowledge (call it humanistic) that are distinct from other kinds. But I am less convinced than some that these distributions are fixed. Still less am I convinced that they ought to stay fixed. By and large what we identify as humanities scholarship contains a mix–a distribution–of opinion, information and knowledge. Up until now the accreditation for this distribution of epistemic entities has been underwritten by a set of practices that give considerable cultural capital to individuals (so-called experts) and institutions. What we might call single researcher accreditation protocols underwrite the opinions of literary critics. Outside the humanities the blend of opinion, information and knowledge is not quite the same. The hard sciences, for example, use the rhetoric of observer-independent protocols for verifying information. Accreditation for opinions here stems from the repeatability of observational verification protocols. Single person cultural capital also applies, but it is tempered by mass and repeated observation. As long as the information persists and supports the opinion, knowledge is said to be gained.

It seems to me that what Liu identifies as the explosion of knowledge in the digital domain–“from crowds, people outside expert institutions, people outside formal organisations entirely”–consists in a remix of the distributions in a vector space that plots opinion, information and knowledge. This remix is not distinctively humanistic. In some ways it is “trans-humanistic.” Indeed we may be witnessing the decomposition of the humanities as knowledge decomposes into data or information. Put the other way around, we may be witnessing the recomposition of information as knowledge in the digital domain. And that applies to all types or kinds of knowledge; it is not unique to what heretofore we have called the humanities.

So let’s try and fit this around Liu’s sketch of a future humanities scholarship–one that has shed the protective skin of the “discourse model.” In place of the “one” reading and writing a book–the lone humanist–we will find collective and collaborative enterprise distributed in expansive networks of communication and experimentation. This, to my mind, will not be humanities 2.0 (or some such) but digital knowledge work. In this domain interpretation and critique are no longer centre stage (though there will be no necessary reason to jettison them entirely); occupying the foreground will be information, its gathering and manipulation so as to reveal what knowledge lies within the quantum level of information as data. Pattern recognition as much as pattern building will be the primary task for the agent who seeks to reveal this knowledge, and the machine (computer) will certainly have equal agential responsibilities in these tasks. There will doubtless be those who see such a sketch as irredeemably anti-humanist, but the clear direction in which the contemporary is travelling seeks to make the division between the human and non-human far less monolithic. Things, animals and machines are more useful and friendly to humans when we begin to investigate the ways in which they may have or obtain agency or quasi-agency. The more we understand the quantum universe of biology, the clearer it becomes that at the lowest level of life the domain of operation is digital. Pattern recognition and pattern building are what we, humans, do and are made of. To that extent the most humanistic enquiry is, therefore, digital. Once we accept this it becomes possible to see how much we have to learn from opening out to inspection digital knowledge, that is, information held in a format that creates knowing from bits that are sub-opinion or only recognisable as knowledge at very high densities. It is not that we stand to gain an enormous revolution in what we know–that by definition will be impossible–but in how it comes to be known.

I agree with Liu that one of the ways this can be implemented is to build digital objects–hardware, software, algorithms and so forth–since this is a very testing methodology for exploring how the computer thinks. In this case there is a kind of “craft knowledge” that is only really accessible to those who make. But we should take care to insist that this is but one way. Others include embracing the techniques and technologies of advanced and advancing computer science–not the off-the-peg packages designed to fit problems already identified but the future horizon thinking that pushes at the boundaries of computation or the digital. What if we seek to produce a “total knowledge horizon” in which dynamic contextualisation operates at the largest scale of data repository and inspection? In a domain in which everything is potentially capable of finding relation with everything the task will be to sort out the noise from the significant signal. Although this may well involve building in the first sense above, it is as likely to involve building in the sense of conceptualising what digital knowledge might be at the “total knowledge horizon.” That’s the thing I hope we might aspire to build together.

Peter de Bolla has been Professor of Cultural History and Aesthetics at King’s College, Cambridge University, since 2009. He has been a visiting Professor at Siegen, Vanderbilt, and New York University. He is Director of the Cambridge Concept Lab, which is housed in the Cambridge Centre for Digital Knowledge at CRASSH (the Centre for Research in the Arts, Social Sciences, and Humanities at Cambridge University).

Works Cited

  • Ceusters, Werner, and Barry Smith. “Switching Partners: Dancing with the Ontological Engineers.” Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts. Ed. Thomas Bartscherer and Roderick Coover. Chicago: University of Chicago Press, 2011: 103. [Link to quotation in Google Books copy of book]
  • Deutsch, David. The Beginning of Infinity: Explanations That Transform the World. 2011; rpt. London: Penguin, 2012. E-book. Kindle Edition. Page 130. [Link to quotation in Google Books unpaginated copy of e-book]


“Practice and Theory of ‘Distant Reading’: An Introductory Workshop on Digital Humanities Methods.” Experimental Humanities program, Bard College. 7 November 2014.

  • Web site for workshop with detailed agenda and resources.
  • Abstract: In this hands-on workshop and discussion, Alan Liu will introduce some commonly used analytical tools in the digital humanities—e.g.,
    • Google Books Ngram viewer & the Bookworm tool for exploring Hathi Trust texts;
    • Voyeur Tools, AntConc, and similar text-analysis tools;
    • Topic modeling tools;
    • Social network analysis tools;
    • Visualization tools.

    Participants will then try their hand at one or more tools, aiming not for mastery or even competence but just to capture an interesting “souvenir,” e.g., a screenshot. (For the purposes of the workshop, even failed attempts can produce an interesting souvenir.)

    Alan Liu will then lead a broader discussion based on the souvenirs about the opportunities and limitations of digital humanities methods. The largest question that the workshop will open to view is: how do digital humanities methods signal today’s changing ideas about the human world?

“Rediscovering the Humanities: Humanities Advocacy in the Digital Age.” Experimental Humanities program, Bard College. 6 November 2014.

  • Abstract: How can liberal arts colleges, teachers, and students make the case for the value of the humanities to the public today? Starting with the example of the 4Humanities.org advocacy initiative that he co-founded, Alan Liu will discuss strategies of communicating the values of the humanities in today’s society. A special emphasis of the talk is the promise of new digital technologies for public engagement in the humanities.


[Go to Course Site] Digital technologies and methods have recently become important in the humanities as scholars use the new tools not only to help read and write about literary, historical, and artistic materials in traditional ways but in new ways influenced–not just communicated–by the new media forms. Literature+ is a course that draws on the new fields of “digital humanities” and “new media studies” to ask students to think about, and experiment with, how new digital methods enhance the study of literature.
Students choose a literary work and use digital methods to model, map, visualize, text-analyze, social-network-analyze, blog, or otherwise interpret it using new tools and media. How can such methods augment or change our understanding of literature by comparison with other methods of literary interpretation? What is the relation, for example, between “close reading” of literary texts and “distant reading” methods that identify trends in language or themes across thousands of texts?… [more]


“Advice for Chairs (Based on English Department Practice at UCSB).” New Chair Orientation Meeting, College of Letters & Science, UC Santa Barbara. 29 September 2014.


“Against the Cultural Singularity: Drafts For a Critical Digital Humanities.” Center for the Humanities Digital Humanities lecture series, University of Miami. 26 September 2014.


“Key Trends in Digital Humanities (and How the Digital Humanities Register Changes in the Humanities).” Center for the Humanities Digital Humanities lecture series, University of Miami. 25 September 2014.

 

Citation: “Theses on the Epistemology of the Digital: Advice For the Cambridge Centre for Digital Knowledge.” Alan Liu, 14 August 2014. https://liu.english.ucsb.edu/theses-on-the-epistemology-of-the-digital-page/

The following was written as a solicited follow-up to my participation in the second planning consultation session of the Cambridge University Centre for Digital Knowledge [later renamed The Concept Lab]. The session, held on 7 May 2014 at the Cambridge Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), focused on “digital epistemology,” one of the two intended thematic strands of the Centre for Digital Knowledge. A previous planning consultation at CRASSH that I did not attend focused on the other intended strand of “digital society.”

My theses below are meant more as provocation than as prescription; and they do not take account of plans that may have been put in place for the Centre for Digital Knowledge since the planning consultations.

Response to this post by Peter de Bolla, Director, Cambridge Centre for Digital Knowledge [later renamed The Concept Lab]: “Digital Knowledge: Format, Scale, and the Information-knowledge Parallax at the Total Knowledge Horizon — A Reply to Alan Liu” (15 Nov. 2014)

14 August 2014

Thesis 1: Enlightening the Digital

Establishing a Centre for Digital Knowledge oriented around “digital epistemology” will require a laser-sharp focus on making “knowledge” a productive framework for understanding the digital age. This framework must be robust enough to compete with such more common gestalts as “society,” “politics,” “culture,” and “economy” (represented in such phrases as “information society,” “surveillance society,” “social media,” “online culture,” “information economy,” etc.). The proposed Centre for Digital Knowledge can generate its agenda by deliberately harnessing the tension between knowledge (including ideals of academic knowledge shaped by the German research university model and the Enlightenment) and social, cultural, and economic understandings of the digital age.

After all, knowledge today is not intuitively the first thing that comes to mind when thinking about the digital, even in regard to such iconic artifacts of the Internet as Wikipedia that ought by rights to hew to the Enlightenment tradition of the Encyclopédie. Not only do Wikipedia’s “no original research” and “notability” principles abridge the idea of knowledge, but its most distinctive traits as a knowledge enterprise are characterized in social terms such as “open” and “community.” And this is before we even come to the identification of the digital with such knowledge-“lite” paradigms as entertainment.

For many, therefore, the digital is not primarily a mutation in knowledge. It is a social change. Social-science and other disciplines operating on this premise treat the digital as a phenomenon of “communication” (“ICT”: “information and communication technology”) impacting social practices, institutions, and organizations [example]. Or the digital facilitates political change. Political scientists or sociologists who study the Internet see it as a testing ground for new kinds of organizing, protest, voting, and other virtual realpolitik [example]. Or, again, the digital marks a cultural change. Disciplines such as “new media studies” and “network critique”–extending British, European, and American traditions of cultural criticism–treat the digital as a domain of contested identity, gender, ethnicity, ideology, affect, privacy, and so on [example]. And, yet again, the digital is an economic change. Economists and organization theorists (chorused by business journalists and business consultants) see the digital as a proxy for the postindustrial reorganization of capital [example].

Amid this clash of paradigms, the specific mission of a Centre for Digital Knowledge should be to illuminate–we may say, “reenlighten”–the knowledge overshadowed by other major views of the digital. Why is it, for instance, that business theorists discuss “knowledge work” in ways that say everything about work but almost nothing about knowledge [example]? What is the actual knowledge embedded in the society, politics, culture, and economy of the digital with their faux-knowledges of “information,” “wisdom of the crowd,” “knowledge bases,” “smart phones,” etc.?

The Centre for Digital Knowledge can design a sequence of events, activities, and outputs that foreground the specific force of digital knowledge amid digital society, politics, culture, and economy. For example, one cycle of Centre activities could focus on how the production and circulation of digital academic knowledge (or of specific “knowledge artifacts”; see provisional plan below) compares to crowdsourcing or social networking. A second could explore how new ideologies of scholarly open-access and open peer review compare to the politics of “open-source” and “open government.” A third could focus on the relation between traditional expert cultures (including but not limited to academic culture) and the new open-source knowledge cultures. And a fourth could focus on the uncanny convergence/divergence between the digitization of scholarly archives (e.g., of traditional restricted-access or closed-stack research libraries) and the economics of monetized proprietary databases (e.g., Google’s). All these cycles of activities would have in common the goal of sifting the sands of the digital for the unique quality, or quantum, that is digital knowledge (where rebalancing the values of quality and quantity is itself a problem of the epistemology of the digital comparable to similar recalculations of value in the social, political, cultural, and economic digital realms).

Thesis 2: Rethinking Enlightenment

But alluding to the Enlightenment forecloses as much as it discloses. An honest effort to grapple with digital knowledge will also require the Centre for Digital Knowledge to let go of too fixed an adherence to established modern ideas of knowledge (here simplistically branded “Enlightenment”). Those ideas are bound up with philosophical, media-specific (print, codex), institutional (academic and other expert-faculty), and “public sphere” configurations of knowledge that co-evolved as the modern system of knowledge. But today there are new systems, forms, and standards of knowledge, including some that refute or make unrecognizable each of the modern configurations mentioned above–e.g., algorithmic instead of philosophical knowledge, multimedia instead of print-codex knowledge, autodidactic or crowdsourced instead of institutional knowledge, and paradoxically “open”/”private” (even encrypted) instead of public-sphere knowledge.

In this light, Peter de Bolla’s incisive “provocation” paper on digital knowledge (presented 7 May 2014 at the start of the second planning consultation for the proposed Centre for Digital Knowledge held at Cambridge University’s CRASSH Center) is revealing for its frequent rhetorical reliance on two prepositions: “under” and “beneath” (used to query the foundations under or beneath the digital). Evidenced in this rhetoric is an inverted Platonic Divided Line that locates essential knowledge not high above but–in the modern tradition that runs from Kant’s “conditions of possibility” through Foucault’s “archaeology of knowledge”–deep below.

But it is unclear that the epistemology of the digital respects, or should respect, a vertical axiology of truth. Some of the most important dimensions of the digital extend laterally in networked, distributed, and other “inch-deep but mile-wide” formations. Big data or crowd data is bottom-up data, not high data (in the sense of “high church” or “high Latin”). In this regard, the Facebook-era cliché of “the social graph” is symptomatic. Used with the definite article in discussions of social networking, the social graph (commonly reified in visual graphs of nodes and links) has become the icon of a flat epistemology with just two secular dimensions (who knows whom) oblivious to any Platonic or Kantian higher dimension.

In the digital age, in other words, the “wisdom of the crowd” challenges the very notion of an epistemology, or philosophy, of knowledge. If we were to juxtapose the Enlightenment with the digital age, we might say that (a) the French Revolution put paid to philosophy (and philosophes) by advancing a mob mentality that later nineteenth-century “historicists” (and twentieth-century revisionary historians of the Revolution such as François Furet) could only “know” by displacing the Revolutionary “idea” into notions of “spirit [Geist],” “rumor,” “representation,” etc.; and (b) the “digital commons” and “open” movement now represents the resurgence of a similar crowd knowledge challenging scholars. Then and now, the difficulty is that the object of inquiry puts in question the knowledge-standards of scholarly inquiry itself. Circa 1790, for example, people in Paris “knew” who was an “aristocrat” to be denounced to the local Watch Committee because “everyone knows.” After 2000, with the onset of Web 2.0 and social media, people similarly know who the “celebs” are (not to mention more plebeian “friends” and “followers”) because Facebook, Twitter, etc. know. Pity scholars who want to know what such “knowing” means but are constrained to rigorous older standards of “critical” knowledge that are like being the only person on Facebook who doesn’t “like” anything.

A similar incommensurability between old and new epistemologies applies in temporal terms. Instead of valuing enduring or permanent truths (the temporal version of “high” knowledge), the digital age is preoccupied with information of much shorter durations–time spans plunging down to the diurnal rhythm of blog posts, the microseconds of a data packet’s “TTL” (defined “time to live”), and even the gigahertz clock rate of a computer’s CPU. Originally, after all, Facebook and Twitter both prompted their users for “status updates” with variants of the hyper-immediate question: “What are you doing now?” Nor is it just a matter of the short durée but also of different temporal rhythms. Digital knowledge moves through computers and networks in fitful, robotic ballets of inhumanly precise starts and stops that fatally deform the early-twentieth-century Bergsonian intuition of flow and even the late-twentieth-century McLuhan intuition of media flow or field. Today the time of knowledge belongs to the invisible order of “micro-temporality” theorized by such media archaeologists as Wolfgang Ernst.

So, too, the incommensurability of digital epistemology can be formulated in terms of “uncertainty.” After all, digital knowledge often verges into or draws on stochastic processes that are native to our current scientific epistemology of statistical, probabilistic knowledge. Probability theory and the world view it models (e.g., the quantum-mechanical view of the universe) undercut the foundation of any knowledge that, in order to count as knowledge, needs definite subjects and predicates linked in narrative syntax of the sort that Boris Tomashevsky instanced in his definition of a thematic “motif.” Tomashevsky’s example of a motif: “Raskolnikov kills the old woman.” To conform to today’s scientific world view, we would have to rewrite that sentence approximately as follows: “There is a 74% chance that in this document Raskolnikov kills (82%) / wounds (15%) / ignores (3%) the old woman (68%) / young woman (23%) / other (9%).” (Those familiar with “topic modeling” in the digital humanities and other digital research fields will recognize that such a recasting of “motif” makes it resemble the probabilistic “topics” generated by the MALLET topic modeling tool.) In other words, the humanities today have a hard time adjusting to the idea that knowledge may not be either truth or story but just a probability distribution. Even the “ambiguity,” “paradox,” and “irony” that were the highest evolutions of humanistic knowledge valued by the New Critics seem to exist in an alternate cosmos from the equivalent uncertainties of quantum mechanics. Not Cleanth Brooks’s well-wrought urn, in other words, but Schrödinger’s cat. The New Critics equated the paradox of “Beauty is truth, truth beauty” (the line from John Keats’s “Ode on a Grecian Urn” that so exercised Brooks in The Well Wrought Urn) with the full richness of human reality, which they also called “experience” in consonance with John Dewey’s contemporaneous philosophy of experience.
In today’s scientific epistemology, by contrast, reality is defined by the collapse of the quantum wave front, as it were, into either beauty or truth, a binary decision state (consonant with the digital epistemology of 1 vs. 0) that nevertheless does not negate wonder at the unknowability of the paradoxically more real (but also less real because created from “virtual particles”) reality of the “quantum foam” underlying it all. The humanistic and quantum universes of uncertainty are doppelgängers of each other, incommensurable in difference and similarity.
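The probabilistic recasting of the motif sketched above can be made concrete in a few lines of code. The following is a minimal illustrative sketch, not anything MALLET itself produces: it stores a motif as probability distributions over its slots (using the invented percentages from the passage above) and then “collapses” it into its single most likely sentence, under the simplifying assumption that the slots are independent.

```python
# A "motif" held as a probability distribution rather than as a sentence.
# Figures are the hypothetical ones from the passage above.
motif = {
    "verb": {"kills": 0.82, "wounds": 0.15, "ignores": 0.03},
    "object": {"old woman": 0.68, "young woman": 0.23, "other": 0.09},
}
doc_confidence = 0.74  # chance the motif occurs in this document at all


def most_likely_reading(motif, doc_confidence):
    """Collapse the distributions into the single most probable 'sentence'."""
    verb = max(motif["verb"], key=motif["verb"].get)
    obj = max(motif["object"], key=motif["object"].get)
    # Joint probability, naively assuming the slots are independent.
    p = doc_confidence * motif["verb"][verb] * motif["object"][obj]
    return f"Raskolnikov {verb} the {obj}", p


sentence, p = most_likely_reading(motif, doc_confidence)
print(sentence)  # "Raskolnikov kills the old woman"
print(round(p, 3))
```

Even the most probable reading here carries a joint probability well under one half (0.74 × 0.82 × 0.68 ≈ 0.41): the point of the recasting is precisely that no single “sentence” exhausts the distribution.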

In sum, there was knowledge; and today there are other kinds of knowledge that seem to come foaming up from the zero state of knowability not just in physics (and metaphysics) but in the epistemology of the digital–e.g., from crowds, people outside expert institutions, people outside formal organizations entirely, people from other parts of the world, and so on whose virtual knowledge seems as transient as virtual particles. That is one of the lessons of the digital.

Thesis 3: Decentering the Centre

A Centre for Digital Knowledge also needs to try out alternatives to the very form of an academic “centre,” since that form is vested in traditional ways of organizing knowledge production that the digital is currently reinvesting in a wider, differently articulated network of institutions, collectives, and media. “Neoinstitutional” theory combined with “adaptive structuration theory” (in the fields of sociology and organizational technology studies, respectively) helps us understand how the digital facilitates changes in organizational and institutional structures, especially those oriented toward knowledge work. For example, Wikipedia, open-source communities, etc., evidence how the once hallowed institutions of “expertise” (professional work in corporations, professorial work in universities, professional journalism, etc.) are being repositioned by the new technologies in unstable relation to networked “open” para-institutions of knowledge outside settled organizational fields.

It thus seems clear that a Centre for Digital Knowledge that relies solely on traditional institutional forms–even the now normative “interdisciplinary” form (e.g., a centre that creates weak-tie intersections among faculty in different fields)–will be cut off from some of the most robust conceptual and practical adventures of digital knowledge. A key test for the proposed Centre for Digital Knowledge, therefore, will be whether it is willing at least on occasion to accommodate non-standard forms of knowledge organization, production, presentation, exploration, and dissemination acclimated to the digital age or open to its networked ethos. Examples of such forms include “THATcamps” or “unconferences,” writing or coding “sprints,” design “charrettes,” online forums, events planned by non-academic invitees, cross-institutional collaboration (university to high school, university to newspaper, university to corporation, university to NGO, etc.), direct engagement with the public in online or face-to-face venues, and intellectual events planned not just by research faculty but also by teaching-first instructors, clerical staff, and students (to break down the divide between those tiers).

An additional desideratum is that the Centre should produce a replicable model for other academic (or hybrid academic/public-humanities) institutions, programs, and events that does not depend on the funding resources and “A-list” guest speakers of an elite university such as Cambridge. That is, the Centre should ensure that every event aspiring to be the academic equivalent of an Aspen Institute or TED Talks should be balanced by an event aspiring to be a THATcamp, beginner or early-career forum, project incubation workshop, regional all-institutions conference, or other forum that sows the seeds wide and far.

Thesis 4: Redesigning Discourse

In modern times, the academic production and dissemination of humanities knowledge have run in a well-known discourse pattern (OED: “discourse” from “discursus action of running off in different directions, dispersal, action of running about”). With some exceptions (e.g., co-editions), humanities scholarly discourse runs, mutatis mutandis, as follows:

Reading & Research → Syllabi & Teaching notes → Talks → Articles → Monographs.

Some traits associated with this program are dominant and others recessive. Solo agents of knowledge are dominant in the humanities. One reads and annotates a book; one designs a syllabus; one writes a paper; etc. By contrast, collective agency–the thick bunchings of academic life in meetings, reading groups, conferences, etc.–is recessive: either epiphenomenal (one would be writing that article anyway) or taken for granted as para-academic apparatus (e.g., the discourse between a scholar and editor that only occasionally comes to view in a book’s acknowledgements).

In terms of the acts rather than agents of humanities knowledge, interpretation and critique are dominant as the ends of knowledge, while observation and analysis are recessive as preliminaries to knowledge. Spanning in between are the acts of rhetoric and narrative that comprise the dispositio that William Germano (drawing on his experience as a former editor of humanities monographs) calls a book’s “throughline.”

Additionally, humanistic discourse has dominant and recessive styles. Through an act of introjection, many humanities scholars have come to believe that their dominant discourse should be of the same order of linguistic phenomena as their object of study. Since much of humanistic study concentrates on exceptional texts (e.g., literary works, pivotal historical speeches or documents), this means that higher value is ascribed to scholarly writings that at least to some degree are as resonantly crafted, nuanced, or elegant as complex literary language; as classically or biblically periodic as famous historical speeches; or otherwise as linguistically tour-de-force as some variant of the above. (Disclaimer: the present piece of humanistic writing is no exception, at least in its aims.) Even a humanities scholar’s spoken lectures are traditionally pre-scripted for high-pitch verbatim performance–an exercise that other disciplines such as the sciences and engineering view as bizarrely theatrical, not to mention fantastically inefficient for presenting data and conclusions.

Indeed, the issue of “data” in the humanities is increasingly acute in the digital age since it is a direct challenge to the privilege of high style. With some exceptions in fields like history, the humanities treat data as something to be embedded in discourse as part of the argument (or at least kept as close as a footnote or appendix at one remove). “Close reading” is an example of how the humanities fold data–the precise lines of poetry being interpreted, for instance–into argument. As a consequence, and by corollary with its stylistic ideal, the humanities create arguments that seem data-lite. After all, only so much concrete evidence can be folded into an argument without the prose taking on the poured concrete quality of many scientific or social-scientific articles with their masses of particulate citations–e.g., “Empirical studies adopting this social constructionist view of technology have been done by sociologists of technology (Bijker 1987; Bijker, Hughes and Pinch 1987; Collins 1987; Pinch and Bijker 1984, 1987; Woolgar 1985; Wynne 1988), and information technology researchers (Boland and Day 1982; Hirschheim, Klein and Newman 1987; Klein and Hirschheim 1983; Newman and Rosenberg 1985)” (source for this example [PDF]). Of course, the appearance of being data-lite belies the true heft and complexity of humanities data (where “data” here means low-level observational and descriptive information recorded in some structured pattern, as in the “images” or “paradoxes” Brooks accumulates in his Keats chapter in The Well Wrought Urn, whose title notably rejects the idea of explicit data: “Keats’s Sylvan Historian: History Without Footnotes”). First, there is a multiplier effect by which humanistic knowledge is attended by messy problems of missing, irregular, incommensurate, and ambiguous information that require much behind-the-scenes processing and adjudication (a post by Hugh Cayless on this issue).
Second, much underlying data in the humanities is implicit. Data inheres in entrained reading practices such that the “what is your data?” question typical in other disciplines is normatively answered in literary studies: “here’s the book; do a close reading yourself to see if my interpretation is persuasive.” And data also inheres silently in the stability of a massive infrastructure of book collections, curatorial staffs, bibliographies, metadata, and other apparatuses–i.e., the whole order of data to which even simple humanities citations (e.g., “see Cleanth Brooks”) really refer. Humanities data refers to “all that” (background editing, archiving, reading practices and apparatuses) even when, as in Brooks’s case, it seems to wear on its sleeve few, if any, footnotes. So long as libraries, books, or reading do not change, “all that” can be left unspoken as assumed knowledge.

By contrast, the sciences and social sciences (especially branches of the latter focused on quantitative or empirical research) cleave the orders of data and of argument so that they can be managed separately. Data is channeled through closed or open datasets, databases, repositories, etc.; while argument appears in pre-prints, conference proceedings, and journal publications. This separation allows for the creation, processing, maintenance, and presentation of data as a distinct workflow–one that can acquire independent value and even generate its own research problems (as in recent work on computationally assisted “data provenance” [example, PDF]). Scientific and social-scientific data can thus be presented or otherwise made available autonomously for critical inspection–a fact demonstrated, for example, in recent arguments for and against the data validity of Thomas Piketty’s Capital in the Twenty-First Century.

Humanities discourse has rarely needed to aspire to the same standards for making all its data explicit, shareable, and open to critical examination. “So long as the nature of libraries, books, or reading do not change,” as I put it above, there was no need. But today digital media are rapidly destabilizing the traditional evidentiary structure of the humanities and bringing it closer to that of the sciences. The digital humanities field is a leading example. There are no established humanities protocols for adequately citing even the moderately “big data” that advanced digital methods now tempt humanists to study–e.g., the 7,000 novels that Franco Moretti explores in “Style, Inc. Reflections on Seven Thousand Titles”; the 3,500 works of Irish American prose literature that Matthew L. Jockers mines in Macroanalysis; or the 21,000 articles from “seven generalist literary-studies journals” with up to a century of volumes each that Andrew Goldstone and Ted Underwood canvass in their “The Quiet Transformations of Literary Studies” [PDF]. Even outside the digital humanities, mainstream humanities scholars who work with any kind of digital material are now at sea when needing to quote or cite the increasingly important plenum of born-digital, dynamic, social-media, streaming, and other new kinds of resources. For example, how does one shoehorn into the MLA’s citation style for a Web resource–simply “Web,” void of URLs–any granular reference to a distinct structure or state of an online site, archive, or database?

The high style of humanities discourse, in sum, is increasingly under threat in a digital age that values information over style. Meanwhile, the more data-explicit “ordinary” humanities style of book prospectuses, grant proposals, personnel case reviews, research assessment reports, etc., remains recessive even as it becomes increasingly pervasive. Days and nights may be spent writing a grant proposal, for example, but the prose that emerges is never valued as the “real” voice of the humanist. This is a situation that is increasingly unstable as humanities scholars devote larger proportions of time to writing such works as reports for program reviews or research assessments. What the digital age seems to be telling the academy–an outcome that the humanities will need to adapt for its own purposes–is that the dominant/recessive relation between the language of a book and that of a report or proposal may need to be rebalanced. Nor is the rebalancing solely driven by intramural and administrative needs–part of the rise of “managerialism” in universities. “Public humanities” scholars and humanities advocates make a strong case for complementing humanities research with dissemination in “plain and simple” language [example].

What, then, should be the discourse of knowledge in a Centre for Digital Knowledge? One thesis is that such a Centre should embrace alternatives to normative humanities academic discourse as part of its very project of understanding the difference of digital knowledge. “Alternatives” does not necessarily mean abandoning the most distinguished features of humanities discourse–individually cultivated voices of eloquence feeling their way toward sustained, rigorous, and elegant or “edgy” interpretations of past and present phenomena. But it does mean diversifying and reordering humanities discourse so that its voice can join in a broader discursive cycle of digital knowledge.

What I mean may be elucidated through a hypothetical research scenario of a sort increasingly common among scholars collaborating with digital methods. Imagine that a major grant has been won to fund a cross-disciplinary, multi-year project entitled “Climate Change and Social Change.” The project’s mission is to correlate climate change with both historical and recent social, economic, political, and cultural impacts–e.g., impacts on the perception of climate (e.g., in the media), social demographics (e.g., mortality rates and migration patterns), monetary flows, political movements, and policy decisions. The promised deliverables are heavily digital: a dataset or corpus, digital tools and interfaces for researchers and the public, and digitally accessible conferences, papers, and articles. Members of the project team include scholars in computer science, biology, epidemiology, sociology, political science, communication, anthropology, film and media studies, environmental history or literary ecocriticism, history, and literary studies or comparative literature. The operational procedure is a series of plenary meetings branching off into working groups and development “sprints,” all coordinated around a series of defined project milestones and deliverables.

One of the distinctive features of such projects in the digital age is that the breadth of disciplines involved is homologous with a condition of the digital itself: the fact that the object of study can be mutated into a common digital dataset and transformed into countless permutational views for treatment from different disciplinary angles. Thus there is no one primary discourse of knowledge agents, acts, and styles. Monographic publications written in high style by humanities scholars are on a par with such discourses dominating other disciplines as collaborative conference papers, datasets, prototype demonstrations, etc. Or, more accurately, the dominant discourses of different disciplines each take command at different phases of the overall cycle of knowledge production before receding to let other kinds of discourse dominate–the whole alternating sequence driving the process forward iteratively. Thus for example, individuals may drive the work in some parts of the cycle, and teams in others. Observation and analysis come to the fore in some parts of the cycle, and interpretation and critique in others (e.g., critical discussion that occurs at the beginning of the project to shape the mission, or midway in the project as a correction of preliminary results). And style modulates through the cycle accordingly–full-throated at some points, but collapsed to bullet points, diagrams, mockups, and “demos” at others. In this regard, the “provocation” paper by de Bolla at the second planning consultation for the Centre for Digital Knowledge is a perfect exemplum of high-style humanistic critical argument used tactically to start rather than finish a project. Ideally, the sum of all the phase-cycles of this discourse–in which the discursive norms of each discipline take the lead at different points–creates a whole greater than the parts.

The humanities, in other words, need not think that the discursive flow of “Reading & Research → Syllabi & Teaching notes → Talks → Articles → Monographs” is a linear path. Different segments of that traditional agenda can be broken out separately and inserted tactically into other phases of the overall collaborative act of knowledge production where they will have the most value. From the point of view of the humanities themselves, this thesis assumes its most radical form in two propositions. One is that in the digital age humanities scholars should be encouraged to complement their dominant discourse with other kinds of discourse–including challenging collaborative work, difficult and innovative acts of data collection and analysis, and research outputs such as published conference proceedings or online projects that do not sum up in a critical/interpretive monograph. The other proposition is that in the digital age humanities scholars should not be engaging solely in discursive acts at all. Instead, it is already clear in the field of the digital humanities–a leading edge of the humanities’ encounter with digital knowledge–that a gestalt-shift is underway that recasts acts of discourse as acts of “making” and “building.” In the digital humanities, the “epistemology of building”–realized through the building of digital projects, hardware DIY projects, media archaeology labs, etc., and theorized with the aid of such broader intellectual movements as the “new materialism”–is, as they say, a thing.

Thesis 5: Program for the Centre for Digital Knowledge

There are many possible ways the above recommendations could be built into a Centre for Digital Knowledge. Here, for example, is one program of activities that interweaves many of the above theses:

  • Imagine that the Centre for Digital Knowledge would organize itself for the first four years of activity around the question, “What will be the important new digital artifacts of knowledge in the year 2050, and what will their relation be to older digital or material artifacts of knowledge?” The year 2050 is chosen to provide an aim point that provokes imagination, but not one so far in the future as to encourage pure fantasy. The notion of “artifacts” (rather than “media,” “society,” “culture,” etc.) is chosen to anchor the question in the concrete and in building.
  • To address this question, the Centre for Digital Knowledge would recruit and provide fellowships for one or more cross-disciplinary teams of researchers (both senior and early-career, intramural and extramural)–e.g., several humanists, social scientists, and engineers, with at least one ethnographer and one administrator.
  • The team(s) would be given the following mission: design a digital artifact of knowledge for the year 2050, supported by research, mockups or prototypes, exploration of the intellectual premises and theory, speculations on economic and social viability, etc. In doing so, conduct activities that engage other kinds of institutions (e.g., high schools, corporations, the government) and the public; and at least on occasion plan activities that do not conform to established academic forms such as a conference or colloquium.
  • The ethnographer on the team would be given the mission: document the workflow, discourse patterns, etc. of the team(s).
  • The administrator on the team would be given the mission: note the kinds of activities, discourses, and outputs in the project that currently have no place in a university’s reward or hiring procedures; and draft a revision of personnel policy that finds a viable way to recognize those activities while furthering the overall research and teaching strength of the university.
  • The final outputs of all the above would consist of traditional scholarly articles and research; an online site giving access to the project and its data as well as explanations addressed to the public; and publications on the project workflow itself.

As stated above, this is just one example program. Many other kinds of organization, activity, and output could be imagined that would allow the Centre to explore, and enact, the epistemology of the digital. Whatever the program, the goal is to engage the topic of what it means to “know” in the digital age in a spirit of serious play–at once disciplined and exploratory of new paradigms.

Errata & Revisions

19 August 2014: Corrected to 21,000 the mention of 13,000 articles in Andrew Goldstone and Ted Underwood’s “The Quiet Transformations of Literary Studies” [PDF].

10 November 2014: Corrected to 3,500 the mention of 758 works of Irish American prose literature that Matthew L. Jockers mines in Macroanalysis (the latter was correct only for Chapter 8 in Jockers’s book).

25 January 2025: Updated URLs to examples and other references in the post by finding Internet Archive copies of, or alternatives to, resources that have disappeared since 2014.

Citation: “Theses on the Epistemology of the Digital: Advice For the Cambridge Centre for Digital Knowledge.” Alan Liu, 14 August 2014. https://liu.english.ucsb.edu/theses-on-the-epistemology-of-the-digital-page

Excerpt

The following was written as a solicited follow-up to my participation in the second planning consultation session of the new Cambridge University Centre for Digital Knowledge. The session, held on 7 May 2014 at the Cambridge Centre for Research in the Social Sciences and Humanities (CRASSH), focused on “digital epistemology,” one of the two intended thematic strands of the Centre for Digital Knowledge. A previous planning consultation at CRASSH that I did not attend focused on the other intended strand of “digital society.”

My theses below are meant more as provocation than as prescription; and they do not take account of plans that may have been put in place for the Centre for Digital Knowledge since the planning consultations.

Thesis 1: Enlightening the Digital

Establishing a Centre for Digital Knowledge oriented around the “epistemology of digital knowledge” will require a laser-sharp focus on making “knowledge” a productive framework for understanding the digital age. This framework must be robust enough to compete with such more common gestalts as “society,” “politics,” “culture,” and “economy” (represented in such phrases as “information society,” “surveillance society,” “social media,” “online culture,” “information economy,” etc.). The proposed Centre for Digital Knowledge can generate its agenda by deliberately harnessing the tension between knowledge (including ideals of academic knowledge shaped by the German research university model and the Enlightenment) and social, cultural, and economic understandings of the digital age….
