Citation: Peter de Bolla, “Digital Knowledge: Format, Scale, and the Information-knowledge Parallax at the Total Knowledge Horizon — A Reply to Alan Liu.” 15 November 2014. https://liu.english.ucsb.edu/peter-de-bolla-reply-to-alan-lius-theses-on-the-epistemology-of-the-digital/

The following was written by Peter de Bolla of Cambridge University in reply to Alan Liu’s “Theses on the Epistemology of the Digital,” a solicited follow-up to Liu’s participation in the second planning consultation session of the Cambridge University Centre for Digital Knowledge (CDK). Held on 7 May 2014 at the Cambridge Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), the consultation session focused on “digital epistemology,” one of the two intended thematic strands of the Centre for Digital Knowledge. A previous planning consultation at CRASSH focused on the other intended strand of “digital society.”

At the consultation session that prompted Liu’s “Theses,” de Bolla opened the proceedings by reading a not-yet-published paper on the epistemology of the digital that served as a provocation for the discussion.

(November 2014; posted here 15 November 2014)

It is clear, as Alan Liu points out in his “Theses on the Epistemology of the Digital,” that knowledge is not the first thing that comes to mind when we turn our attention to the digital. As he notes, we more commonly think of the digital as having considerable impact on the social, economic, political and cultural. This, it seems to me, is because our primary engagement with the digital is at the level of its effects and affordances with respect to communication, information storage and retrieval, statistical inference and manipulation. And the overwhelming majority of us use the digital as if it were analogue: the interfaces we are comfortable with simulate analogue forms. This is why the question of knowledge is so far back, so buried in our encounter with the format of the digital.

It is true that programmers take a different route, but even in this case the question of knowledge is not particularly to the fore, as programmers attend to system more than episteme. They are most concerned to get the ontology of their programmes to function efficiently. Perhaps this focus on “ontology” rather than “episteme” reveals a truth: the ontology of computation (where “ontology” is now being used in its philosophical sense) is order and sequence. In contrast to this philosophical sense, “ontology” is used in programming to mean “a hierarchically organized classification system associated with a controlled, structured vocabulary that is designed to serve the retrieval and integration of data” [Ceusters and Smith, Switching Codes].
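By way of contrast with the philosophical sense, here is a minimal sketch of an ontology in the programmer’s sense just quoted: a hierarchical classification with a controlled vocabulary, used to retrieve and integrate data. The terms, the hierarchy, and the record format below are invented purely for illustration.

```python
# A toy "ontology" in the programmer's sense: a hierarchically organized
# classification with a controlled vocabulary, used here to retrieve and
# integrate records. All terms and records are invented for illustration.

ONTOLOGY = {
    "artefact": None,       # root of the hierarchy
    "document": "artefact",
    "book": "document",
    "e-book": "book",
    "codex": "book",
}

def ancestors(term):
    """Walk from a term up to the root, collecting every broader term."""
    chain = []
    while term is not None:
        chain.append(term)
        term = ONTOLOGY[term]
    return chain

def retrieve(records, query_term):
    """Return every record whose type falls under query_term in the hierarchy."""
    return [r for r in records if query_term in ancestors(r["type"])]

records = [
    {"id": 1, "type": "e-book"},
    {"id": 2, "type": "codex"},
]

# Querying for "book" integrates both records, because the hierarchy
# classifies e-books and codices alike as books.
print(retrieve(records, "book"))
```

The point of the sketch is that such a structure serves retrieval and integration, order and sequence, rather than any theory of what is known.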

The Cambridge University Centre for Digital Knowledge (CDK) proposes to put the question of knowledge to the fore in our attempts to understand the difference that is made by the move from the analogue to the digital. This project is, at least initially, focussed on the difference in format. Seen from this perspective the alterations in the social, political, economic and cultural that arise when digital technologies become ubiquitous are epiphenomenal: these changes, important as they surely are, tend to obscure the alteration in episteme that occurs when analogue materials are migrated into digital format. This alteration is, I think, indicated in Liu’s observation that the “unique quality, or quantum, that is digital knowledge” involves “rebalancing the values of quality and quantity.” Put another way, data is knowledge in computational environments. The problem for us humans is that at the quantum level of data we are unable to perceive that knowledge. It is only at higher orders of data configuration that we are able to transform information into larger bits that we identify as knowledge. But to the computer this transformation is not one of type or category–a changing epistemology–it is merely one of scale. So, in concert with Liu’s remark that the CDK will need to “let go of too fixed an adherence to established modern ideas of knowledge,” we mean to push hard at the gates which sort information from knowledge.
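A toy illustration of that scalar point, with an invented sentence standing in for “knowledge”: the same sequence of zeros and ones is illegible at the level of individual bits, and becomes legible again only when the identical bits are reconfigured at a higher order.

```python
# The same zeros and ones read at two scales. At the level of individual
# bits nothing legible appears; re-aggregated at byte scale, the sentence
# (invented for this example) reappears. To the computer the change is
# one of scale, not of type or category.

message = "knowledge tends to persist"
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))

print(bits[:64] + "...")  # the "quantum" level: undifferentiated zeros and ones

# Reconfigure the identical bits at a higher order and the text returns.
chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)]
print("".join(chars))
```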

It is manifestly clear that, as Liu points out, the landscape within which knowledge is produced and disseminated has changed significantly in recent times. There are new “systems, forms, and standards of knowledge” which pit “algorithmic” against “philosophical” knowledge, or “multimedia instead of print-codex knowledge.” The question for the CDK, however, is once again: are these changes epiphenomenal? Or, to put that more carefully, has our attention to these alterations so far been less directed to how epistemic shifts are at play than to the effects of these shifts? Liu’s attention to the flatness of knowledge in the realm of big or crowd data (in contrast to my “vertical axiology”) is, I think, directed at one of these effects. The distribution of information, its access and pliability all fall within a flat terrain where the “flatness” is the price of entry to the information. That is to say there is no–or very little–cost. But is that the same thing as a “flat epistemology”? As Liu puts it: “the wisdom of the crowd challenges the very notion of an epistemology.” Once more this seems to be a question of scale: is it the case that at this level of quanta one cannot see a theory of knowledge? Or, perhaps as important, at this level one does not need a theory of knowledge since the mass circulation of and access to information works as if it were knowledge.

I am not sure that this account is fully satisfying or convincing. Perhaps we need a more complex topography in order to see what is going on. If we were to plot the terms “information,” “knowledge,” “opinion” within a multidimensional space–a vector space–it might be possible to begin to see how these slightly different epistemic identifications overlap and connect, disconnect and repel. Here are some thought experiments. If one way of seeing knowledge is this: “knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so” (David Deutsch), and one way of seeing the wisdom of the crowd is “mass circulation opinion,” then one of the vectors we would need in order to plot the topology would be time or persistence. Mass circulation opinion has, for example, a very distinctive temporality that might be represented as a wave form that corresponds to the volume of traffic at time t. Knowledge, in the Deutsch formulation, tends towards stability and might be represented as a flat line. The temporality of information might be represented as a set of discontinuities as older information becomes updated and replaced by more recent information. A vector space model would seek to plot these data points within a planar matrix one of whose axes would be temporality. Another might be quantity. This would seek to plot the topology within which expert knowledge, the knowledge of the few, is distributed against mass circulation opinion, the knowledge of the many. What measures would be helpful here? Would this help one to identify the range of measure x within which opinion tends towards knowledge? This, I think, might be close to Liu’s observation that knowledge “may not be either truth or story but just a probability distribution.”
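As a minimal sketch of this thought experiment, with every signal shape and parameter invented for illustration, one might model the three temporal signatures and score each for persistence along the temporal axis.

```python
# Three invented temporal signatures: "opinion" as a passing wave of
# traffic, "knowledge" (after Deutsch) as a flat line that persists, and
# "information" as a step function that is updated and replaced. A crude
# persistence score is then computed along the temporal axis.

import numpy as np

t = np.linspace(0, 10, 1000)

opinion = np.exp(-0.5 * (t - 3) ** 2) * np.sin(8 * t) ** 2  # volume of traffic at time t
knowledge = np.full_like(t, 0.6)                            # tends to remain so
information = 0.4 + 0.1 * np.floor(t / 2.5)                 # discontinuous updates

def persistence(signal, eps=1e-3):
    """Fraction of time steps over which the signal is effectively unchanged."""
    return float(np.mean(np.abs(np.diff(signal)) < eps))

for name, signal in [("opinion", opinion),
                     ("knowledge", knowledge),
                     ("information", information)]:
    print(f"{name:12s} persistence = {persistence(signal):.2f}")
```

On any such toy measure, knowledge scores highest, information holds steady between discrete replacements, and opinion fluctuates with the wave of traffic; a second axis for quantity would turn these scores into points in the planar matrix described above.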

If one were now to return to the initiating question for the CDK–the issue of digital format–a similar observation might be made. The topology I have begun to sketch could equally be applied to the scalar issue. If one were to plot quanta of data in a vector space alongside persistence of accepted knowledge, it might be possible to see more clearly how information at the very basic level of computational format (zero-one) contains within it knowledge, but knowledge that can only be seen at very high levels of data compression. Put this way, knowledge decomposes into information at the quantum level. This seems to me to speak to Liu’s observation that “the humanistic and quantum universes of uncertainty are doppelgängers of each other.” It is a matter of scale. From the massive scale of knowledge in the humanities uncertainty appears as ambiguity; from the minuscule scale of data in the quantum world ambiguity appears as uncertainty.

This brings me to the question of the distinctiveness of the humanities. From whichever way I look at this issue it seems to me that we obscure too much when we maintain disciplinary coherence. The humanities are wedded to verification just as the non-humanities are. The humanities give considerable weight to accuracy just as the non-humanities do. But in saying this we should not seek to make all knowledge wear the same overcoat. There are distributions within one kind of knowledge (call it humanistic) that are distinct from other kinds. But I am less convinced than some that these distributions are fixed. Still less am I convinced that they ought to stay fixed. By and large what we identify as humanities scholarship contains a mix–a distribution–of opinion, information and knowledge. Up until now the accreditation for this distribution of epistemic entities has been underwritten by a set of practices that give considerable cultural capital to individuals (so-called experts) and institutions. What we might call single-researcher accreditation protocols underwrite the opinions of literary critics. Outside the humanities the blend of opinion, information and knowledge is not quite the same. The hard sciences, for example, use the rhetoric of observer-independent protocols for verifying information. Accreditation for opinions here stems from the repeatability of observational verification protocols. Single-person cultural capital also applies, but it is tempered by mass and repeated observation. As long as the information persists and supports the opinion, knowledge is said to be gained.

It seems to me that what Liu identifies as the explosion of knowledge in the digital domain–“from crowds, people outside expert institutions, people outside formal organisations entirely”–consists in a remix of the distributions in a vector space that plots opinion, information and knowledge. This remix is not distinctively humanistic. In some ways it is “trans-humanistic.” Indeed we may be witnessing the decomposition of the humanities as knowledge decomposes into data or information. Put the other way around, we may be witnessing the recomposition of information as knowledge in the digital domain. And that applies to all types or kinds of knowledge, it is not unique to what heretofore we have called the humanities.

So let’s try to fit this around Liu’s sketch of a future humanities scholarship–one that has shed the protective skin of the “discourse model.” In place of the “one” reading and writing a book–the lone humanist–we will find collective and collaborative enterprise distributed in expansive networks of communication and experimentation. This, to my mind, will not be humanities 2.0 (or some such) but digital knowledge work. In this domain interpretation and critique are no longer centre stage (though there will be no necessary reason to jettison them entirely); occupying the foreground will be information, its gathering and manipulation, so as to reveal what knowledge lies within the quantum level of information as data. Pattern recognition as much as pattern building will be the primary tasks for the agent who seeks to reveal this knowledge, and the machine (computer) will certainly have equal agential responsibilities in these tasks. There will doubtless be those who see such a sketch as irredeemably anti-humanist, but the clear direction in which the contemporary is travelling is to make the division between the human and non-human far less monolithic. Things, animals and machines are more useful and friendly to humans when we begin to investigate the ways in which they may have or obtain agency or quasi-agency. The more we understand the quantum universe of biology the clearer it becomes that at the lowest level of life the domain of operation is digital. Pattern recognition and pattern building are what we, humans, do and are made of. To that extent the most humanistic enquiry is, therefore, digital. Once we accept this it becomes possible to see how much we have to learn from opening out to inspection digital knowledge, that is, information held in a format that creates knowing from bits that are sub-opinion or only recognisable as knowledge at very high densities. It is not that we stand to gain an enormous revolution in what we know–that by definition will be impossible–but in how it comes to be known.

I agree with Liu that one of the ways this can be implemented is to build digital objects–hardware, software, algorithms and so forth–since this is a very testing methodology for exploring how the computer thinks. In this case there is a kind of “craft knowledge” that is only really accessible to those who make. But we should take care to insist that this is but one way. Others include embracing the techniques and technologies of advanced and advancing computer science–not the off-the-peg packages designed to fit problems already identified but the future-horizon thinking that pushes at the boundaries of computation or the digital. What if we seek to produce a “total knowledge horizon” in which dynamic contextualisation operates at the largest scale of data repository and inspection? In a domain in which everything is potentially capable of finding relation with everything, the task will be to sort out the noise from the significant signal. Although this may well involve building in the first sense above, it is as likely to involve building in the sense of conceptualising what digital knowledge might be at the “total knowledge horizon.” That’s the thing I hope we might aspire to build together.

Peter de Bolla has been Professor of Cultural History and Aesthetics at King’s College, Cambridge University, since 2009. He has been a visiting professor at Siegen, Vanderbilt, and New York University. He is Director of the Cambridge Concept Lab, which is housed in the Cambridge Centre for Digital Knowledge at CRASSH (the Centre for Research in the Arts, Social Sciences, and Humanities at Cambridge University).

 


Works Cited

  • Ceusters, Werner, and Barry Smith. “Switching Partners: Dancing with the Ontological Engineers.” Switching Codes: Thinking Through Digital Technology in the Humanities and the Arts. Ed. Thomas Bartscherer and Roderick Coover. Chicago: University of Chicago Press, 2011. 103. [Link to quotation in Google Books copy of book]
  • Deutsch, David. The Beginning of Infinity: Explanations That Transform the World. 2011; rpt. London: Penguin, 2012. Kindle ed. 130. [Link to quotation in Google Books unpaginated copy of e-book]