Just to be provocative in my opener here, I want to share with you a little news about our robot overlords: they test better at reading comprehension than we humans do.

So, the truth is a bit complicated: we’re only talking about one particular AI model and only about a 0.14-point difference between its score and the human participants’ scores on the Stanford Question Answering Dataset. But it’s news like this that often sends people into a tizzy and feeds into larger narratives about the decline of reading and comprehension skills. It also feeds concerns that machines will continue to usurp humans in the labor market, now moving into tasks that we’ve always thought required a conscious human agent. Those concerns are all valid.
Still, what if we stop making humans and machines binary opposites in perpetual conflict, revisit a little classic cyborg theory, and consider whether machines and humans are part and parcel of the same organism? Because aren’t they? We humans make machines – we are tool makers and users – and they are what Marshall McLuhan termed “extensions of man,” tools that allow us to do things that we can’t do (or can’t do as well, as fast, or as perfectly) without them. So why not think of machines as things that can help us read and help facilitate human comprehension?
I’ve mentioned this idea about tapping into the potential to read with machines (and not just on or through them) before, but the whole idea gained even more resonance for me after last week’s keynote lecture at the Willson Center Symposium on the Book by Dr. Jonathan Hsy of George Washington University. The lecture, titled Disability and Divergent Readers, brought Hsy’s medievalist expertise to bear on how material technologies made for readers with visual and auditory impairments in the late Middle Ages should make us reconsider what it means to read and how reading practices and literacies – and disability itself – are culturally and materially constructed.
And I got to thinking that Naomi Baron’s findings about our difficulties with reading onscreen suggest that we are constructing ourselves (our onscreen reading selves) as disabled – or at least not fully literate – when we recount our difficulty (or even inability) to read digital texts immersively and deeply. Of course, we are not literally disabled, but learning disabilities like dyslexia impair the ability to process written language, and our onscreen reading difficulties are often described in terms that evoke disability: visual difficulties, getting lost in the text, impaired comprehension.
But we also know from Baron, Nicole Howard, and many other researchers on reading and the history of reading technologies (and the writing technologies that foster them) that reading practices and our definition of literacy co-evolve with technology. What it means to read (and write and communicate generally) is also culturally and materially constructed, as the NCTE Definition of 21st Century Literacies demonstrates. This position statement from the major professional organization of literacy experts claims that “Active, successful participants in this 21st century global society must be able to” do a number of things that many people might not think of as literacy tasks (if we limit our conception of literacy to reading and writing ability), including, my favorite:
“Manage, analyze, and synthesize multiple streams of simultaneous information”
Which brings me back to AI and machine reading. I’m not sure it’s possible anymore to manage, analyze, and synthesize all the information we are assaulted with daily without technological assistance. Maybe we need machines to help us find what we need to pay attention to. Maybe we should let AI read all this stuff and report back to us (since it’s getting so damn good at reading comprehension) so that we can actually do the more complicated work: making connections, seeing what machines can’t see in data (the stories, the implications, the possibilities), and finding the new questions that need to be explored (or the old ones that still need fresh eyes).
So what does all this mean for us as writers? Because you know that’s what it all has to come back to in Writing for the Web. Well, after thinking about Hsy’s lecture and our class’s recent visits to the Special Collections Libraries – where we looked at more weird artists’ books (and some zines and children’s books) from the Hargrett Rare Book and Manuscript Collection – and to the Digital Arts Library, where we saw early examples of interactive computer games, old video games, and electronic literature that now runs only on legacy computer systems, I think that the issue for us as writers is how to put our readers to “work,” so to speak.
I think that writers have to think about how to engage readers more actively in the process of meaning making. We know that complexity and difficulty are productive – nobody wants to play a simplistic, easy game, because it’s boring. So why not create texts that ask readers to put the pieces together, to co-create the text with us? A 2013 Scientific American article (yeah, yeah, it’s five years old and everything’s changed since then – but not really), The Reading Brain in the Digital Age, not only does a nice job of summarizing a range of research on the problems of reading onscreen but also, in its last section, argues that we’re not doing enough to tap into the unique affordances of the digital when we create digital texts. Author Ferris Jabr asks,
“why…are we working so hard to make reading with new technologies like tablets and e-readers so similar to the experience of reading on the very ancient technology that is paper? Why not keep paper and evolve screen-based reading into something else entirely?”
Great questions. And five years later I wonder why we’re still not doing enough to create something else entirely. That is why we’re doing this design project in Writing for the Web, and why I’m having you look at these ridiculously complex artists’ books and legacy hypertexts. They are texts that force “readers” to interact with them and to work cognitively to understand them. They are difficult in the most productive way because they make our brains work. Metacognitive processing. Higher order thinking. Let’s think, as writers, about how to create texts that generate those kinds of reading responses.