The first people I knew who were truly captivated by the net were Deadheads. These West Coast tripsters saw connected machines as the technology through which they could finally share hallucinatory spaces together. As psychedelics champion Terence McKenna once explained to me, virtual reality would be a place where “you can quite literally see what I mean.”
This notion of a shared visual landscape – a place where we can express ourselves through colors and shapes and vibrating lights – may actually be coming to a computer screen near you.
This all came into focus for me as I watched a video depicting the way the nervous system responds to a concussion. The imagery, created by a data visualization company called KVS, showed how the impacted part of the brain sends lightning bolts of energy and stimulus to other parts of the brain, which then do the same thing. Long after the actual injury, the brain is still lighting up different sections, frying various pathways to senses and organ systems. (You can see it yourself at LINK TK.)
I’ve read a lot about concussions, but I had never fully comprehended what was happening. Watching the video, I could finally see what the data meant – and experience it on a visceral level. It was as if the real communicative potential of this medium, with its generative graphical capabilities, was being shown to me for the first time.
Remember, pretty much every new medium begins by using previous media as its content. The first textual stories were transcriptions of oral legends. The first radio shows were scripts read out loud. The first TV shows were stage plays filmed with a single camera.
Likewise, we used to think text was the language of the digital age. The command line, through which users told computers what to do, was everything – and it defined the way people conversed on bulletin boards and in chat rooms. Then the web turned that text into pictures, podcasting imitated radio, and now more than 70 percent of all Internet traffic is video from services like Netflix.
What if the net is finally on the verge of discovering its own truly native content type? Not video, exactly, but a kind of dashboard through which people can depict visual objects and landscapes that convey meaning? It would be part data visualization – like those colorful weather maps that show climate change, or that concussion video I saw. But it would also be a language of its own – a live videogame palette of expressions with their own syntax. Think emoticons, but more dimensional: red could mean danger, jagged edges could signify man-made objects, while smooth ones signify nature.
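To make the idea concrete, here is a minimal sketch in Python of what such a vocabulary might look like as a data structure. Everything in it – the attribute names, the mappings, the Glyph class – is my own invention for illustration, not anything KVS has published.

```python
# A hypothetical "dynamic semiotics" vocabulary: perceptual attributes
# mapped to conventional meanings, composed into expressive glyphs.
# All names and mappings here are illustrative.
from dataclasses import dataclass

COLOR_MEANINGS = {"red": "danger", "green": "growth", "blue": "calm"}
EDGE_MEANINGS = {"jagged": "man-made", "smooth": "natural"}
MOTION_MEANINGS = {"pulsing": "live data", "still": "historical data"}

@dataclass
class Glyph:
    """One unit of the visual language: a bundle of perceptual attributes."""
    color: str
    edge: str
    motion: str

    def read(self) -> str:
        """Translate the glyph back into a plain-language meaning."""
        return (f"{MOTION_MEANINGS[self.motion]} about something "
                f"{EDGE_MEANINGS[self.edge]}, signaling "
                f"{COLOR_MEANINGS[self.color]}")

# A red, jagged, pulsing shape reads as a live warning about
# something man-made - a failing machine, say, or a market.
alert = Glyph(color="red", edge="jagged", motion="pulsing")
print(alert.read())  # -> live data about something man-made, signaling danger
```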
It turns out that’s exactly what KVS is working on. They’ve patented the idea of using “dynamic semiotics” as a way of communicating in digital spaces, and they’re busy building it. They’re looking for the most widely shared perceptions of visual imagery in order to build a pulsing, living digital universe in which people can make and share meaning together.
The tricky part – what I don’t fully understand yet, and I doubt they do, either – is how to make a visual world that is both based in fact and open to impressions and emotions. In other words, part of this great big visual symphony could be as factually grounded as Wikipedia. A single pulsing, changing image could contain the entire NASDAQ in real time, which a trader could learn to navigate and interpret. Imagine the stochastics!
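The factual half is at least imaginable as plumbing: a live feed continuously re-encoded as visual attributes. Here is a rough sketch under my own assumptions – the tickers, thresholds, and mappings are invented for illustration, not a real market feed or anyone’s actual encoding.

```python
# Hypothetical sketch: re-encoding a live market feed as visual
# attributes, so one pulsing image could carry a whole index.
def encode_tick(symbol: str, pct_change: float, volume: int) -> dict:
    """Map one quote onto visual attributes (illustrative mapping)."""
    return {
        "symbol": symbol,
        # Hue tracks direction: red for losses, green for gains.
        "color": "red" if pct_change < 0 else "green",
        # Brightness tracks the magnitude of the move (capped at 1.0).
        "brightness": min(1.0, abs(pct_change) / 5.0),
        # Pulse rate tracks trading volume: busier means faster.
        "pulse_hz": min(10.0, volume / 1_000_000),
    }

# A trader's eye could learn to scan thousands of these at once.
frame = [encode_tick("AAPL", -2.3, 4_500_000),
         encode_tick("MSFT", 0.8, 2_100_000)]
for glyph in frame:
    print(glyph)
```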
But the very same universal visual language would be capable of communicating hopes, dreams, emotions, and sensations. Such a communications system – a syntax – is the holy grail not just for Internet platforms like Google and Facebook, for whom text and video are essentially dead ends, but also for emerging immersive technologies such as the Oculus Rift headset, Google Cardboard, and the Xbox.
KVS founder Alan Yelsey thinks the big difference here is that we finally have the bandwidth and processing power for collaborative, participatory visualizations to happen intuitively and in real time, as if in a multimedia gaming world, “so that creators and users of content can freely and easily assemble and install models or variables and embed them with meaning, linked automatically to their source documentation and proof process.”
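One way to picture what Yelsey is describing – and this is my guess at the shape of it, not KVS’s actual design – is a model element that carries its meaning and its provenance around together, so that every piece of a shared visualization stays auditable back to the documents that justify it.

```python
# Hypothetical sketch of a model element with embedded meaning
# linked to its source documentation. Field names and the example
# URL are my invention for illustration.
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    """A shareable variable whose meaning travels with its proof."""
    name: str
    value: float
    meaning: str                                       # embedded semantic reading
    sources: list[str] = field(default_factory=list)   # provenance links

    def cite(self) -> str:
        """Render the claim together with its provenance trail."""
        return (f"{self.name} = {self.value} ({self.meaning}); "
                f"sources: {', '.join(self.sources)}")

co2 = ModelElement("atmospheric_co2_ppm", 421.0,
                   "rising greenhouse-gas concentration",
                   sources=["https://example.com/co2-methodology"])
print(co2.cite())
```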
Could the future of the net look less like a stream of text or video updates and more like a dance floor with a spectacular light show emanating from the dancers themselves? See what I mean?
Sounds crazy. More like the ravings of a kid on ecstasy than the prognostication of a respected media theorist. Then again, so did the Internet.