Around 6,000 tweets are sent every second. In the time it has taken you to read this sentence, 42,000 tweets will have been sent. At an average of 34 characters per tweet, that’s 1,428,000 characters.
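To check the arithmetic, here is a minimal sketch in Python (the seven-second reading time is an assumption implied by the figures above):

```python
# Back-of-the-envelope check of the opening figures.
tweets_per_second = 6_000   # stated rate
reading_time_s = 7          # assumed time to read the opening sentence
chars_per_tweet = 34        # stated average tweet length

tweets_sent = tweets_per_second * reading_time_s
characters = tweets_sent * chars_per_tweet

print(tweets_sent)   # 42000
print(characters)    # 1428000
```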
Worldwidewebsize publishes daily estimates of the size of the internet. On the day of writing, it amounted to 4.59 billion pages and a billion websites. This is the “indexed” internet, and doesn’t include the “dark web” or private databases.
The size of the web is measured in two ways. The first is “content” – storage capacity was estimated in 2014 at 10²⁴ bytes, or a million exabytes. The second is “traffic”, measured in zettabytes. Global traffic recently passed one zettabyte, the contents of around 250 billion DVDs.
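These units get confusing quickly, so here is a rough sketch of the conversions (the 4.7 GB single-layer DVD capacity is an assumption; the “250 billion DVDs” figure above implies a disc of roughly 4 GB):

```python
# Rough unit checks for the storage and traffic figures above.
EXABYTE = 10**18    # bytes
ZETTABYTE = 10**21  # bytes

content_bytes = 10**24              # the 2014 "content" estimate
print(content_bytes / EXABYTE)      # 1000000.0 -> a million exabytes

dvd_bytes = 4.7e9                   # assumed single-layer DVD capacity
print(ZETTABYTE / dvd_bytes / 1e9)  # ~213 (billion DVDs per zettabyte)
```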
More conventionally, the UK published 184,000 books in 2013 – globally, the largest number per inhabitant. Add to this the growing number of ways a human being can be measured in data – DNA sequencing, online family trees, genetic coding, bank accounts, online information of all kinds – or the volume of scientific data being produced and read around the world, and the amount of information in existence is staggering. Even the storage most people need for photos and documents has grown hugely in the past few years.
As a species, we are producing information at a massive rate. The “reading” of the mass of data has led to new predictive models for social interaction. Businesses and governments are scrambling to make use of this data as human beings seem ever more readable, manageable and – possibly – controllable through the comprehension and manipulation of information.
But just how might all this information be stored? At present, we have physical libraries, physical archives and bookshelves. The internet itself is “stored” on hard-disk servers around the world, using enormous amounts of power to keep them cool. Online infrastructure is expensive, energy-hungry and vulnerable; its longevity is also limited – see Die Hard 4.0 for a dramatisation of this.
Libraries of the future
The future of information storage may sound dull, but it is a crucial issue for anyone interested in the way that societies remember. A good example is family history, where public archives, such as census records and tax information, are increasingly accessed online. Millions of users around the world use subscription sites such as Ancestry or Findmypast to access this public information and to create their family trees using online software. This proliferation of information raises ethical issues about access (public records being used by private companies to make a profit) and about how this data is stored, managed and used.
We all have a stake in the way that libraries and archives might work in the future, how they might be configured, and what might be stored – and why. Do we really need to store every tweet ever sent? Making any kind of choice over what to store – what to collect, commemorate, archive – provokes a complex discussion. Technologies for accessing – “reading” – information need to be somehow futureproofed, or we will end up with huge amounts of information that cannot be used.
So: what to do? There are wide-ranging discussions at present, from what information to store (including various biobanks full of biological specimens), to how to store it, to where to store it (the Arctic, various locations in space, under water). Most of these discussions are occurring within scientific communities; some technological companies are involved. Those who have spent years thinking about memory, commemoration and archiving – historians and librarians – are often on the fringes of the discussion.
Nanocrystals and DNA
Various organisations are exploring physical ways of storing humanity’s information. Physical storage on nickel disks (read by microscope) or as laser-written barcodes in silica glass has been suggested. Highly experimental – and at present energy-hungry – nanotechnology looks to write information at the near-molecular level (although “write” is stretched well beyond its usual sense here). Nanotechnological storage would be “read” through sophisticated microscopy and is sometimes the “effect” of chemical change or of quite complicated processes, such as nanocrystals converting infra-red radiation into something “visible”. Some of the more baroque storage models range from a flash-memory data vault on the moon, to private companies sending digital content to Mars, to satellites orbiting the Earth.
But most of the activity at present seems to be biological. Various scientists have begun to explore the possibility of using DNA to store information, an approach known as Nucleic Acid Memory (NAM).
This would involve the data being “translated” into the letters G, A, T and C, representing the four bases of DNA. DNA strands encoding the data would then be synthesised, and could be translated back into the “original” by being sequenced. Researchers recently stored archival-quality versions of music by Miles Davis and Deep Purple, and a short GIF, in DNA form.
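As a rough illustration of the principle only – a minimal two-bits-per-base mapping, not the error-corrected encodings used in actual NAM research – the “translation” step might look something like this in Python:

```python
# Minimal sketch: map binary data to DNA bases (2 bits per base) and back.
# Real schemes add error correction and avoid problematic sequences
# (e.g. long runs of the same base); this only shows the core idea.

TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a string of A/C/G/T, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Turn a string of A/C/G/T back into the original bytes."""
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Tutu")            # a Miles Davis album title, as sample data
assert decode(strand) == b"Tutu"    # round-trip recovers the original
print(strand)                       # CCCACTCCCTCACTCC
```

Real schemes also break the data into many short, indexed strands, because synthesising and sequencing long molecules is error-prone.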
DNA is durable and increasingly easy to produce and read. It will keep for thousands of years in the right storage conditions. DNA could be stored anywhere that is dark, dry and cold, and arguably would not take up a great deal of room.
Much of this technology is in its infancy, but developments in nanotechnology and DNA sequencing suggest that we will be seeing the applied results of experimentation and development within years. Wider questions arise about the ethics of collection and to what extent these processes will become mainstream. Print, and to a certain extent digital, have become common and reasonably democratic ways of transmitting and storing information. It remains to be seen whether future storage and writing will be as easy to access, and who will be in control of humanity’s information and memory in the coming decades and centuries.
This article first appeared on The Conversation. Read the original article.