December 3, 2014
By Kevin Blake
“History is a nightmare from which I am trying to awake.” These words, spoken by James Joyce’s character Stephen Dedalus in Ulysses, embody the idea of an inescapable infiltration of history into our present reality. For Joyce and Dedalus, the “nightmare of history” may have been rooted in an existential conception of history, or the universal fact of inevitable death, but the transcendent quality of the phrase acts as a directive in the form of an optimistic metaphor for rehashing history as a means of creating a better future.
George Santayana was more direct when he said, “Those who cannot remember the past are condemned to repeat it.” At first glance, these collective bits of wisdom seem obvious. They seem correct. They seem true. And though I would agree that the repeated fragments, phrases, images, and sounds of the past have the ability to communicate warnings for the future, our methods of assimilating these legends have just as conspicuously perpetuated the nightmare. The recitation of our traditions awakens us TO the nightmare, not from it. As we are wrapped up in contemplating evidence that either rejects or accepts the histories offered to us–through the vast network of delivery systems (books, the internet, television, etc.)–we become more in sync with the legend itself. Thus, we succumb to visualizing the nightmare more clearly as an insurmountable obstacle. Through incalculable attempts to shed our antecedents, we see our legends reinsert themselves more deceptively with each subliminal iteration in the present.
The paradox here is that rational thinking–as it is defined by those who have developed a science that diagnoses irrational thinking–has delivered us to the most preposterous circumstances. The problem lies in an inherited trust in the clairvoyance of tradition. Of religion. Of science. Of government. Of culture. To wake up from this nightmare that we’ve been led to believe is a dream, we must stop chanting the legend and start pursuing delusional thought.
Artists have always been at the forefront of delusional thinking. Leonardo da Vinci, now celebrated as a rare genius, was not considered an educated man by the standards of his day–he did not attend a university and he was not versed in Latin (both requirements in fitting this mold). His deficiencies–in the way of this criterion–limited his ability to read the classical texts, which may have been an advantage in his development as an artist, architect, and engineer. He was employing the scientific method in a multitude of arenas long before it became a staple of science. In a revealing passage in one of his sketchbooks da Vinci notes, “First I shall make some experiments before I proceed further, because my intention is to consult experience first and then by means of reasoning show why such experiment is bound to work in such a way. And this is the rule by which those who analyze natural effects must proceed; and although nature begins with the cause and ends with the experience, we must follow the opposite course, namely (as I said before), begin with the experience and by means of it, investigate the cause.”
Had da Vinci been educated and reading the natural philosophies of Aristotle (then the guiding principles of nature), he may never have arrived at such methodologies. It is through this ignorance that his ideas and process surpass accepted forms of knowledge and transcend time.
Science will downplay how long its own adolescence actually lasted, but it wasn’t until a 17th-century intervention by Galileo Galilei that it was forced to take a hard look at itself. The Italian astronomer and physicist was tried and convicted in 1633 for publishing his evidence supporting the Copernican theory that the Earth revolves around the Sun. His research was instantly criticized by the Catholic Church for going against the established scripture that places Earth, and not the Sun, at the center of the universe. Galileo was found “vehemently suspect of heresy” for his heliocentric views and was required to “abjure, curse and detest” his opinions. He was sentenced to house arrest, where he remained for the rest of his life, and his texts were banned. Galileo wasn’t the only figure in history to be persecuted for his beliefs–suppressing knowledge has been an active ingredient in human civilization since the beginning of recorded time.
In an age when information has never been more fluid, knowledge–in its infinite variety–is abundant. Available. Easy to acquire. Because of this access, no one can escape disenchantment. Just as Galileo’s delusional and sacrilegious idea about the Earth’s place in the solar system dissolved the idea of the cosmos as the locus of spirits and meaningful powers within the realm of religion, so too did it empower the picture of the universe as governed by universal laws–laws written by the same authority who had condemned this revelation years prior.
This marks the point in history when science co-opts the subversive position. Persecution (in science) evolved from blatant acts of abuse carried out by the church to a dissolution of potential theoretical projections through brandishing theory as law. Science has become an institution that thinks of itself as a meta-theory to which all ideas must submit. Simultaneously, science is both the beneficiary and the catalyst of this disenchantment, which creates a system of “rational” thought.
The scientific method, which claims its roots in rationalism, was actually conceived in the most scientifically irrational production. René Descartes, a founding father of modern science, created his method after having a dream in which an angel appeared to him and told him that the conquest of nature would be achieved through mathematics and measurement. This isn’t the tale we are taught. Science typically delivers the primary observations. The gathering of data. The major insight. It rarely reveals the trace of the idea that was the impetus to execute study and experiment.
Artists have developed a stigma over the great expanse of time. If an artist projects a theory developed through a visit with angels, no one is surprised. It is expected. It is as easily shrugged off as the outlandish theory itself. The focus of the artist’s ideas assumes an aesthetic echelon that, more often than not, omits theory and potential through a relegation to discourse that is painfully self-referential. Art, in many circumstances, appears to be a conversation about itself that serves itself and its initiates.
Within this system, an aversion to delusion is born much in the same way it is perpetuated in science and religion. The properties may vary. The classifications, categories, and rankings may change and represent an entirely different value system on the surface, but the problem is equally polarizing. In all our defining social systems, we are required to make a leap of faith. We are required to believe in the universe exploding into existence from an unidentified singularity. We are required to believe that there is a bearded white man coaching earthlings from another dimension. We are required to believe that the art hanging in museums is the “stuff” we need to reference in determining the value of all future manifestations of the human brain that would like to gain entry into the big Art game.
To change the conditions of the big game, we must advocate for innovation. Real innovation occurs at the concrescence of delusional thought. It occurs when existing conditions prompt individuals to reject linear values–which means rejecting the idea that innovation can only occur within the realm of technology, science, and mathematics.
Oliver Wasow, an artist working in the Hudson Valley area of New York, has a peculiar art practice that has evolved from a background in traditional photography to what might be considered an ongoing social media experiment that utilizes photography as the foundation of social critique. The idea that it could be an experiment rather than simply an art practice is of particular interest to me as I spin this web. Whether or not this is the artist’s intent doesn’t really matter. The results of his experiments, his daily practice, and the response to it exist as a data set, much in the same way a census tells us something about the population.
Wasow’s delivery system is social media–predominantly Facebook. Daily, he posts found photographs or images conjured in his studio and curates them thematically based on his personal interests. The art, however, is not committed solely to the aesthetic quality of the image. The innovation occurs in the ensuing dialogue. The comments section is where the magic happens. It is where people expose themselves through all manner of projections that reflect their interpretations of the image, their intuition, and their impulses–all of which say much more about the people who comment than the image itself. It is like conducting interviews with the masses without asking any direct questions. The curation that classifies each photograph within an album of related visual elements seems to be the substrate that defines the line of questioning upon which his friends are given a platform to respond.
As an art object, Wasow’s social media production defies standard procedure and institutional critique. It toes the artificial line that defines what IS and what is NOT considered art, and it is in that boundary where I find the most interesting practices. Within these trajectories of defiance we may find the ability to detach from the “nightmare of history.”
In the preface to The Picture of Dorian Gray, Oscar Wilde notes, “We can forgive a man for making a useful thing as long as he does not admire it. The only excuse for making a useless thing is that one admires it intensely. All art is quite useless.” If admiration of aesthetics and the constant reference to tradition is all we have as a result of art production, then I would agree with Wilde wholeheartedly.
Maybe I’m delusional, but I think art can DO more.
December 2, 2014
One of my duties as a Lecturer of Foundations at Northern Arizona University is to give tours to prospective students. In an email follow-up to one of these tours, I was asked about the viability of a career following an art degree, and how one might explain this career choice to one’s parents. Specifically, I was asked to elaborate on a conversation I had mentioned having with my father, who had been skeptical, to say the least, regarding my career prospects after graduate school, for which I was asking to borrow some money from him. The following is taken largely from the text of the email I sent in response.
There are a few viable strategies for making a living with an art degree. I certainly have friends doing the “move to New York and try to be an art star” thing, a few of them successfully. Most support themselves with jobs as waiters or gallery assistants. Even with a BA, one can get work as a security guard or administrative assistant at a museum or gallery, or in an artist’s studio helping to make the work. These are entry-level jobs from which one can work one’s way up to a career.
The strategy that I know most intimately is teaching. This is a challenging but viable path, if you have the right temperament for it. Not everyone is well suited for teaching, and it is important to be sure it’s right for you, rather than treating it as the default answer to the question of “I’ve got my MFA; now what?”
Speaking of the MFA… the question of whether or not to go to grad school is debated within the art world, but the degree is an absolute necessity if you want to teach art at the college level. It’s also a big asset if you teach K-12 or at private institutions. Applying to grad schools is itself a big process, and scary. You may not get into your top choice, and you may not get into any school at all your first time applying. Some grad schools are expensive; others are fully funded and therefore free. This is of course a question for down the road, but I mention it because it was when I was making the decision to attend grad school that this issue came up.
The specific conversation came up with my father when I was applying to graduate school. I needed to borrow money from him, and he was basically not at all supportive of my decision to go to graduate school and pursue a career teaching art. He said, basically, “I’ll loan you the money because I’m your father, but I think it’s a bad investment. I don’t think you’ll be able to find work, and I don’t think you’ll be able to repay me, but I need you to, somehow.” He asked me specifically to ask my faculty how long it had taken them to find a teaching job, and what a normal starting salary was. I asked my painting instructor, Leslie Kenneth Price at Humboldt State University, and he told me that after graduating from his MFA, he found adjunct work within a year, and it took him five years of adjunct work to get a full-time job. He said that starting salaries at the full-time level were around $40K.
I ended up borrowing $37,000 from my father, in addition to $73,000 in student loans, to attend the Hoffberger School of Painting at the Maryland Institute College of Art. I graduated in May of 2007. I started teaching part-time in 2008, and in 2013 I was hired at NAU…five years after my first adjunct gig began, at a rate of $42,000 a year. Obviously everyone’s experience is different, but Leslie definitely called it in my case. Five years as an adjunct, then a full-time job starting at $40K, sounds about average.
Bear in mind that some people do land full-time teaching jobs straight out of graduate school. Benjamin Duke was a year ahead of me in grad school; he was doing a kind of work that really lent itself to a particular program’s needs, and so he was offered a full-time teaching job at the University of Michigan in Ann Arbor before he had even finished his MFA. He also shows at Ann Nathan, an excellent gallery in Chicago. He is a.) very, very good, b.) very, very lucky, and c.) very, very smart. I wouldn’t count on getting a full-time job right away; even if you’re good and smart you may not be lucky. But, it could happen.
On the other side of things, it is certainly possible that you won’t end up teaching. Some people just aren’t well-suited to it, and find other lines of work. I have several friends who earned MFAs and then were offered technical or administrative positions at the institutions from which they graduated. These are certainly viable careers, and should be considered as good alternatives to teaching. Others work for museums or galleries, or in other creative fields.
For me, though, teaching has been a great fit. The pay isn’t going to make me rich by any means, but it is definitely enough to live on, what I’d call “grown-up money.” And there are other benefits as well. Great medical and life insurance, for example, and a great work environment. Yes, we work hard and have to do a lot of off-the-clock research, but our schedules tend to be very flexible, vacation time is impressive, and we get to work doing something we love. Oh, and another benefit: if you do take out student loans, the Public Service Loan Forgiveness program means that, under certain conditions, if you work for a public service institution (a college, university, or museum, or a non-profit, but not a commercial gallery) and your income is under a certain amount, you can pay on an income-based plan, and after 10 years of on-time monthly payments (120 in all), any remaining loan balance can be forgiven.
Also, look at the College Art Association website and go through the job listings as though you’re looking for a job. That will give you some idea of what’s out there. Also NYFA, HigherEd Jobs, the Chronicle of Higher Education, and Academic Keys.
In one way, my father had been correct. I never did repay him a cent of what I had borrowed, even though Professor Price’s predictions about the time it would take to find full-time work, and about my starting salary, had proven accurate. My father died, from complications of alcoholism, a few weeks before I was offered the job that would have allowed me to repay him what he had loaned me.
November games press (“ “) was ablaze (“ “) with reports and screenshots of the latest Assassin’s Creed game, which kind of yielded some amazing pictures:
I can’t get over how macabre and hilarious and terrifying it all is (and Zach Budgor over at Killscreen outlines them better than I could), but also: how beautiful it kind of is. I remember once playing an old boxing game with one of my friends. I was Muhammad Ali, and I won, and then the victory camera, which was supposed to spiral around me as I danced or whatever, instead went directly into the crappy rendition of Ali, inside of his face, and there were the insides of his skin, his eyes, his nose, a great nothingness where skull and blood and muscle and brain should have been.
It was terrifying, but we also hooted and hollered, because it was so exciting: here was this program that, most of the time, operated flawlessly, and who knows what actions of ours caused it to do this? Were there any actions, or was it just some pure chance engagement? I’m confident that I’ll never know, and I’m confident in my satisfaction in never knowing.
Ubisoft, the company that made the game, apologized–but I can’t help but imagine a world in which they totally just owned it, offered it up as some extreme commentary on the state of technology or the series itself. (Assassin’s Creed is famously convoluted in its plot: you are some futuristic descendant plugged into a computer-esque thing that lets you re-live and play through the memories of your ancestors.) Wouldn’t it be great if the simulation broke, not in some predictable sense, but in the ways the medium can and does fail? Message and medium together, polygonal skin planes sticking out of void-faces.
The screenshots themselves made the rounds ostensibly because it was another example of a big game that shouldn’t have had bugs in it, but really, audiences are so used to this sort of thing that in this context–big game, big oops–it’s hardly news at all, even in a world where all of the news is still about video games. (I recognize the irony of talking about them now.) I think what’s maybe so striking about them is that they look damn-near intentional, the glitches underlined by a world where everything else is lovingly crafted and animated. Even as games reach for newer and newer generations of technology, these weird bugs are still there, lurking somewhere in the unseen code behind them, a kind of unchanging constant. It’s always fascinating to see something break in such an obvious way, yet still continue on as if nothing different had happened.
It brings to mind this old compilation from Skate 3, which is played for laughs (funny stuff compilation strikes me as a likely Kenneth Anger title), but take away the impulse to identify it as sheer physical comedy, and it becomes something more like a performance piece, its relative uniqueness impossible to know. There’s a poetic calmness to the way the skateboarding protagonist slowly slips into the earth and out of the game, only to be launched back into it and painfully contorted as if in punishment for abandoning its digital prison. Just seconds after, the skater flops like a fish into a wall and his head turns, slowly, around, and around, before his entire body disappears: the system rectifying a mistake. At five minutes into the video, a scene is recreated into infinity as though two mirrors were positioned facing each other (or the more modern analogue, a camera looking at a screen of what the camera is seeing). The character’s jumping becomes fractal and synchronized over and over again with itself, and he’s reduced to nothing more than weird, fluid colors on an even stranger canvas.
I’m also reminded of Cory Arcangel’s Super Mario Clouds:
It’s not really in comparison, though, so much as it is in contrast: Arcangel’s piece is a meditation on reduction, taking away everything but that single detail of serene background and pixelated cloud blobs. The glitch art of Skate 3 and Assassin’s Creed is obviously not reductive, nor is it intentional. Instead, it appears as a single broken thing standing out against a mound of excess: in AC’s case visual, in Skate 3’s mechanical. In all three instances, though, it is no longer about the player, or the game, but the singular oddity. Here, it says, unintentionally: look at me. I am a distraction in your distraction.
What better time to blow a Friday deadline for an article like this one than the breezy, powdery end of November? Half of the art world was stuck on snowy flights or awkwardly explaining their hobbies/careers to nodding family members, while the other half was still cutting out lines in anticipation of the Miami art fairs. The good news is that it’s been a slow couple of weeks, so I’ll quickly blow through this month’s What You Should Have Noticed in November.
The world grows colder. Nature slows, becomes static. The river connecting these cities ices over slowly, silently at night. Tires spin, stuck in ice ruts that will last until spring. Fewer bicyclists and pedestrians navigate the narrowing streets and sidewalks. We prepare to stay inside through longer nights, as the early arriving winter rudely awakens us from lingering fall. That stasis, that need to stay inside belies our need to connect, to draw close, especially in times of stress, in times of outside forces beating down our door, trying to force their way in. We need to be physically together to remember that beneath these layers are beating hearts and warm breaths.
Ryoji Ikeda’s superposition at the Walker Art Center united more than 20 projections and monitors, two live performers, multilingual Morse code, live video feeds, microfiche, and a healthy dose of randomness. It confronted the body and mind, pushing them to the limits of comprehensibility. The audience was given earplugs to ease the high-decibel audio, but the sound waves, the movement of air through the space, physicalized every peak and valley of staccato clicks, blips, and quantum particulates. My knowledge of quantum physics and mathematics is barely enough to bring the video and audio into focus. Scientific ideas bubble up just enough to reveal there is something larger beneath the surface, but the technical mastery and deep knowledge embodied in the performance reinforce the barriers between audience members, reminding us that we are a part of systems whose logic is beyond what we think we know of Newton.
The performers, Stephane Garin and Amélie Grould, truly bring forward the human warmth amidst the cold numbers and distant scientific concepts. This digital symphony exists in the rarified air of Ikeda’s ongoing scientific and mathematical investigations (including his current residency at CERN): mathematics at scales that are impossible to witness and challenging to conceive, delivered at dangerous sonic levels. The performers transform it into a moving, human, even more visceral experience. As they key in Morse code, the competing, layering sound waves and the words they spell are displayed behind them. The speed with which they relay their messages underscores our distance from Morse code as a means of communication. Their use of a binary language lays bare the many layers of digital mediation, the code and signal behind the projections, the digital reproduction of sound. We see the text and sound waves they create on the massive screen behind them, but we also see their hands move; we see them strike tuning forks together, we see them make quiet decisions among their microfiche and steel balls.
Their presence in front of us, their bodies moving through the space on stage, creating the sounds that we feel in our chests and throats, activates those parts of our brain that correspond to our hands, our fingers, our performative bodies. We feel ourselves on stage, mirroring their action, feeling their sensations as we negotiate our way through the sonic and visual density of superposition.
The phenomenon of neurons firing in the parts of our brain that perform an action when we see that action being performed is often invoked in the realm of sports spectatorship or action movies. We mentally and physically feel as if we are part of the game, as if we punched through a wall. superposition invoked those same feelings for me. It overwhelmed me physically and mentally, pulling me into its auditory and visual textures while activating idle parts of my brain. Seeing Dawn of Midi recently invoked those same feelings. Watching the repetitive, sound-bending striking, hammering, and twisting of their instruments, I felt the energy build, crest, relax, and expand as if I were onstage, as if I muted the piano strings, hunched over the bass, held the drumsticks. Walking home through the snow, the music did not leave my mind, and the instruments did not leave my hands.
As I navigate frozen landscapes, I contemplate the winter ahead. I consider not just my fragile human body but the end of the human species manifest in these extreme weather swings, the knowledge that this cold too is a sign of our own undoing that cannot be undone. Despair, stasis, and winter blues are eased by knowing I am not alone. I connect with others, physically and remotely present, and I remember that I can still make changes. I can still strive for a better world by refusing to be alone, by refusing to isolate myself against the overwhelming challenges we can only confront together.