Jon Rafman and I had a chance to catch up in Second Life last week and do a series of interviews that culminated in the above video (which contains NSFW graphic imagery near the end). We discuss his recent work and its relationship to cinema studies, as well as how the work digests contemporary Modern experience.
I suggest that projects like Brand New Paint Job and Woods of Arcady operate as a kind of collision between High Modernism and amateur consumer technology, and that these fusions provide a unique critical comment on the nascent mash-up cultures that exist online. Jon and I also discuss how his inclusion in jstChillin’s Avatar4D show in San Francisco, and his involvement with that emergent netart community, have influenced his artistic process. Jon comments on how his discovery of Nasty Nets rekindled his artistic sense of inquiry, and how the mobility and quickness of blogs and surf clubs fostered a dialogue he had found absent from the contemporary art circles he had participated in up to that point.
Later in the interview, I ask Jon whether his newfound sense of discovery in working online manifests itself in his (now highly popular) Kool-Aid Man tours in Second Life. The initial impulse for Jon’s journeys and participation within these virtual worlds comes from the joy of spatial exploration and the subsequent need for spatial mastery within 3D environments. We wrap up our conversation by discussing how working in Second Life, and developing real, meaningful relationships within that environment, has led him to invest in the idea of multi-user experiences as a means of engaging and analyzing multi-layered artistic paradigms within networks.
Jon’s Google Street View project will be part of the opening festivities tonight at the FUTUREEVERYTHING festival in Manchester and will remain open until the 23rd of May. You can also visit his site for more information: http://jonrafman.com/
September 18, 2009
What was Archimedes’ famous quote? “Give me a place to stand and, with enough photos, I can map the world”? No, but he might have said it.
The University of Washington’s Graphics and Imaging Laboratory, the researchers who built much of the code that went into the original Microsoft Photosynth software, have devised new algorithms that improve the existing ability to create a rough 3D map from multiple photos by an order of magnitude. The system can now go beyond basic depth perception and skinning with photos: given enough data to pull from, it can construct pinpoint 3D skeletons. The uses and implications of this are vast.
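For the curious, the core trick behind building 3D maps from ordinary photos is triangulation: once you know (roughly) where two cameras were, a point seen in both images pins down one point in space. The sketch below is a minimal, self-contained illustration of that idea using the standard linear (DLT) method; the toy cameras and the test point are invented for the example and have nothing to do with the lab’s actual code.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (toy calibrated cameras).
    x1, x2: the point's 2D image coordinates in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X, e.g. x1[0] * (P1[2] @ X) = P1[0] @ X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Toy setup: two identity-intrinsics cameras separated by a 1-unit baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.2, -0.1, 5.0])        # ground-truth 3D point
h = np.append(point, 1.0)                 # homogeneous coordinates
x1 = (P1 @ h)[:2] / (P1 @ h)[2]           # its projection in view 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]           # its projection in view 2

recovered = triangulate(P1, P2, x1, x2)   # should match `point`
```

Real reconstruction pipelines like Photosynth’s do this for millions of matched points, after first estimating the camera positions themselves from the photos.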
We just need to get our hands on v1.0 and start rendering gallery openings in 3D.
For a second time, Hubble took the deepest look yet at the darkest patch of sky, this time with even more sensitive instruments, and the measurements have predictably found the eternal quote to hold true:
This time, though, it was able to use redshift relations to map the image in 3D.
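The redshift-to-distance relation is what turns a flat deep-field image into a 3D map: the more a galaxy’s light is stretched toward the red, the farther away it is. As a back-of-the-envelope sketch (not what astronomers actually use for deep-field objects, which requires a full cosmological model), here is Hubble’s law in its crudest low-redshift form, with an assumed round value for the Hubble constant:

```python
# Low-redshift approximation of Hubble's law: recession velocity v = c * z,
# so distance d = v / H0. Valid only for small z; deep-field galaxies need
# a proper cosmological model. H0 below is an assumed round value.
C_KM_S = 299792.458      # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per megaparsec (assumed)

def distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    return C_KM_S * z / H0

# A galaxy at redshift 0.1 sits roughly 430 Mpc away under this model.
d = distance_mpc(0.1)
```

Stacking that conversion over every object in the frame is, in spirit, how a 2D exposure becomes a 3D volume.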