In Which An American President Explains To An Android Why It’s Wrong To Shoot God With A Bazooka

March 3, 2014

There is an episode of Star Trek: The Next Generation in which Lt. Cmdr. Data expresses to the rest of the crew his puzzlement at the human fascination with “old things.” The crew were probably trying to save some ancient ruins or encountering a relic from the past (probably a shoutout to the original series, like the wreck of the old Enterprise or something). It is, if you think about it, an odd notion. Why is something made a thousand years ago more interesting than something made yesterday? (Given the penchant for clever, punny titles of panel sessions at CAA, if there hasn’t yet been an art history panel called “Lascaux to Last Week,” probably about contemporary cave paintings or appropriating ancient imagery, there almost certainly will be eventually.) [Note: Apparently it's a book. I thought I'd heard that somewhere. http://www.percontra.net/archive/3lascauxtolastweek.htm]

Art History has had a couple of moments in the spotlight recently. The College Art Association conference just took place in Chicago, and for those of us in studio art fields who attend, it’s maybe more exposure to art history than we get during the rest of the year, unless we actively seek it out. (The conference has a history of some animosity between the two disciplines; from what I’ve gathered it was more art history focused in the past, and in recent years studio art has been taking over, affecting everything from the book and trade fair to the location of the conference itself.)

The CAA conference isn’t universally loved, or even respected, by visual artists. My friend and colleague, painter Steve Amos, posted to Facebook: “Beware of the foul smell emanating from the South Loop; the pile of bullshit known as the College Art Association conference is in town.” (Posted February 14th to Facebook: https://www.facebook.com/steveamos/posts/10151952963102919?stream_ref=10.)

I didn’t ask Steve what he meant or why he felt that way, but I’ve heard the sentiment echoed among many of my friends, and may have said something along those lines myself in a moment of frustration. Some of the hate may come from frustration with the job market, and a tendency to treat the conference as synonymous with its Career Services component. The Interview Hall and Candidate Center are certainly geared towards job seekers. I know some people who have gotten jobs through interviews at CAA, and others who have gotten interviews. Personally, I’ve never been interviewed at CAA, though their career services have helped me in other ways: almost every job for which I’ve applied was listed on CAA (other listing sites include Higher Ed Jobs, The Chronicle of Higher Education, and Academic Keys), and their mock interviews and packet reviews helped me prepare for the application and interview process for my current position. (Since August of 2013 I’ve been teaching full time at Northern Arizona University.)

Another recent spotlight on art history was the film The Monuments Men, in which some art experts get drafted into WWII to “tell our boys what they can and can’t blow up.” It’s based on a true story (an interview with one of the surviving, original Monuments Men was featured recently on NPR), and a lot of masterpieces in European collections survive today only because of these men. (Other sites, such as the monastery at Monte Cassino, were bombed out of supposed military necessity.) My friend and colleague, Chicago artist Renee Prisble, asked on Facebook (via Twitter), “Where were ‘The Monuments Men’ when we invaded Iraq?” (Posted to Facebook January 27th, via Twitter: https://www.facebook.com/reneeprisble/posts/10203102149818529?stream_ref=10.)

The Uffizi Gallery in Florence during WWII. Sculptures, including Michelangelo’s David, are behind brick domes intended to protect them from bomb blasts and fragments.

It’s a fair question, one that was asked plenty at the time (or, rather, immediately after the 2003 looting of the National Museum of Iraq), although mostly among the NPR set (myself included). There’s an image, I can still see it, of the facade of the museum sporting a hole created by a round from the cannon of a main battle tank. In this case the Americans clearly caused the damage by invading, even though it was primarily locals who did the looting (as opposed to the WWII example, in which the invading Nazis themselves were the looters).

Two years earlier, in March of 2001, a few months before 9/11, the Taliban had used rockets and explosives to destroy the Bamiyan Buddhas of Afghanistan, a resurgence of the age-old iconoclastic prohibition. Iconoclasm is based on Mosaic law (i.e. the Old Testament generally, and specifically the Ten Commandments), and thus is common to the history of Islam, Christianity, and Judaism, although within each faith sects vary widely in how literally they interpret this. Islamic Fundamentalism is among the most vehement, its leaders sometimes issuing death threats against people who depict Mohammed. The Taliban followed in this tradition when they chose to destroy the pair of 6th Century monumental sculptures of the Buddha, carved into a cliff face. (Mosaic law can be interpreted as instructing its followers not to make any representational imagery whatsoever, or more narrowly not to represent prophets and deities; in this case it was extended to destroying ancient monuments made by followers of another religion.)

The tragedy of this destruction is central to answering Data’s question: why was it such a big deal? Merely because the statues were old? Or because they were a symbol of a faith different from that of their destroyers, and we in the West have a live-and-let-live, relativist attitude? I don’t have the answer to this, but certainly our fascination with old things, as well as our respect for other cultures, is central to the role of art history.

It would be disingenuous to treat art history as totally synonymous with preservation. Certainly conservation, preservation, and repatriation of lost or stolen works are tasks that require the assistance of an art historian. But the bread and butter of art history is study and interpretation. I described it in my own prediction for what I’d see at the College Art Association conference: “A bunch of new stuff is going to get queered, painting isn’t dead after all, and there’s going to be a hell of a lot of viewing things through the lenses of other things.”

Art history entered the national spotlight a few weeks ago, when President Barack Obama, speaking at General Electric’s Waukesha Gas Engines plant, said to the audience that “folks can make a lot more potentially with skilled manufacturing or the trades than they might with an art history degree…Now, nothing wrong with an art history degree — I love art history. So I don’t want to get a bunch of emails from everybody. I’m just saying, you can make a really good living and have a great career without getting a four-year college education, as long as you get the skills and training that you need.” The audience chuckled along, and applauded at the end. But not everybody was amused. While there is no evidence that America’s art history majors are going to start abandoning Obama in droves, he did manage to draw some backlash from the College Art Association’s executive director Linda Downs, who issued the following statement in response:

The College Art Association has great respect for President Obama’s initiative to provide all qualified students with an education that can lead to gainful employment. We support all measures that he, Congress, State Legislatures, and colleges and universities can do to increase the opportunities for higher education.

However, when these measures are made by cutting back on, denigrating, or eliminating humanities disciplines such as art history, then America’s future generations will be discouraged from taking advantage of the values, critical and decisive thinking, and creative problem solving offered by the humanities. It is worth remembering that many of the nation’s most important innovators, in fields including high technology, business, and even military service, have degrees in the humanities.

Humanities graduates play leading roles in corporations, engineering, international relations, government, and many other fields where skills and creative thinking play a critical role. Let’s not forget that education across a broad spectrum is essential to develop the skills and imagination that will enable future generations to create and take advantage of new jobs and employment opportunities of all sorts. (http://www.mediaite.com/tv/watch-obama-slights-art-history-majors/)

It’s no surprise that the organization defends its own. But Obama’s remarks have some chilling implications far beyond the validity of an art history degree. Would Obama want his own children to go to a trade school to learn a blue-collar trade? Or is class segregation acceptable, with one definition of success for some, and another for others? The idea that an education in the humanities is a luxury implies…comedian Louis C.K. said it very well. Talking about technical high school, he said, “That’s where dreams are narrowed down. We tell our children you can do anything you want, their whole lives. You can do anything. But at this place, we take kids that are like fifteen years old, they’re young, and we tell them, ‘You can do eight things.’”

Maybe in some communities this beats the alternative. Sure, being a welder beats being a drug dealer. (Well…I know some drug dealers who would disagree. Oh, don’t give me that look. That ‘friend’ you buy your weed and coke from is a drug dealer. But I mean, on the street level, it’s pretty high risk.) But it’s totally antithetical to our ideals of hope, ambition, social mobility, and whatever is left of the American Dream, if that was ever really a thing.

John Adams said (according to Fred Shapiro’s The Yale Book of Quotations), “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce, and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry, and Porcelaine.”

I’ve frequently heard this quotation used to argue, broadly, that times of scarcity or hardship are not the time to study the humanities. The quotation comes from a letter John Adams wrote to his wife Abigail Adams…on May 12, 1780. Over 230 years ago. Do the math. Okay, I’ll help:

John and Abigail had six children, over a ten year span. Three were daughters, of whom one was stillborn and another died before her second birthday. A third daughter lived long enough to give birth to four children, none of whom seem to have accomplished enough to merit a Wikipedia entry. John and Abigail also had three sons. Charles studied law before dying of alcoholism at the age of 30. Thomas also studied law (though apparently without much success), also struggled with alcoholism, and died deeply in debt (after fathering seven children). It’s hard to imagine John and Abigail even being able to claim with a straight face that they didn’t have a favorite child in John Quincy Adams. Instead of math and philosophy, he studied classics and practiced law before going into politics like his father.

John Quincy Adams and his wife Louisa had three sons (and a daughter; daughters were still pretty much treated as footnotes back then). Their first two, George and John, were trainwrecks on the level of their uncles Charles and Thomas, dying (one of suicide) in early adulthood. Their third, also named Charles, did somewhat better, carrying on the family tradition of diplomacy and politics. A fine pursuit, certainly making his father proud, but not the study of “Painting, Poetry, Musick, Architecture, Statuary, Tapestry, and Porcelaine” which the original John Adams had said he envisioned for his own grandchildren. (In turn, Charles Francis Adams, with Abigail Brown Brooks, fathered seven children, none of whom, so far as I could find, turned out to be painters, poets, musicians, or anything of the kind.)

The first John Adams was a soldier so that his children could be scientists and his grandchildren could be artists. But none of them were. They were all diplomats, military officers, lawyers, and politicians. I don’t know who their descendants today are. Google it if you’re curious. But I doubt there are many blue collar workers among them. Wealth is, after all, inherited, unless it’s squandered by some suicidal alcoholic like some of the Adams kids. I wonder, though, whether, twelve generations later, any of John Adams’ great-great-great-great-great-great-great-great-great-great-grandchildren are painters, poets, musicians, architects, sculptors, weavers, or ceramicists. And I wonder what he would say to hear our President essentially tell today’s parents (well, the poor ones) that they shouldn’t share the dream he had for his own descendants.

Disney’s Princesses Have Grown Up…A Little

February 3, 2014

Warning: In this piece I talk about movies. I’m not sure what it has to do with art. Also, if you haven’t seen the Disney films Brave and Frozen, and you care about knowing what happens in them, you might go watch them before reading this.

Take a look at American popular culture, and originality looks to be on the decline. We live in the age of the remake, the cover, the mashup. Doesn’t a lot of new music sound like shitty covers of old music? (Or perhaps we’re just getting old; does every generation live its whole life thinking music hit its zenith when they themselves were teenagers, a cycle of criticism that repeats itself with each new generation?)

The problem appears most acute in cinema, and I’m not talking here about independent or foreign film, but about mainstream Hollywood. Not film, but movies. “Reboot” has become a household word in an entirely different context than restarting a computer; a series of movies is now a “franchise.” Star Trek and Spider-Man have run through enough sequels that they just started over again at the beginning. Total Recall, Judge Dredd, and now RoboCop have been subjected to entirely unnecessary remakes (though in the case of Judge Dredd, an interesting one; Total Recall, not so much). And even the “new” movies are just combinations of the old: Vampire Academy might as well be titled Twilight Goes To Hogwarts. The Legend of Hercules looks like 300 meets Gladiator, and while that sounds awesome, it’s not. Not at all.

I have been pleasantly surprised, then, to find some original storytelling in an unexpected place: Disney princess movies. I know, I know. I’m as skeptical of Der Maus as the rest of you, and deeply appreciated the humor (with a rich undercurrent of biting satire) in the Charnel House’s recent in-house production of…take a minute to appreciate this title…They Saved Hitler’s Brain…And Put It In Walt Disney. Hilarious play, so perfect, and the performance was excellent. When a company has such a stranglehold on a genre, when fairy tales have become synonymous with the company’s animated versions while the originals, compiled from folk legends (mostly German) by the Brothers Grimm, are almost totally forgotten, Disney is an easy company to hate.

In its princesses, particularly, Disney has a long history of perpetuating harmful stereotypes and standards of beauty in its movies (and tie-in merchandise) marketed to young girls. Ariel looks like you could snap her in half at the waist. Jasmine…I’ve never asked a Middle Eastern woman what she thinks of her, but I can imagine it’s similar to how some ethnic Persians responded to seeing their race depicted in 300. Overall, the characters have been overly frail, meek, and utterly dependent on the male characters with whom they were besotted. Romantic love, we are told, is the woman’s…well, the adolescent girl’s, sole reason for existence. (The depictions have generally given us the idea that anyone who isn’t married by seventeen is an old maid.)

I’m making broad generalizations here, and to be sure, there are exceptions. In fact, I make these generalizations specifically to call attention to a couple of these exceptions. While still perhaps imperfect, the last two Disney princesses (that I’ve seen) have been markedly better role models.

The first was Merida, from 2012’s Brave. A female co-director (Brenda Chapman) may have played some role in the film’s treatment of its heroine, whose development included a lot of work on her relationship with her mother. The usual plot, of beautiful (basically skinny) princess meets handsome (muscular, with a jaw like the 1998 version of Godzilla: http://www.imdb.com/title/tt0120685/) prince, is totally absent. In fact, while the common trope of an undesired-by-the-princess arranged betrothal is, as is often the case, the starting point of the film, Merida rejects the idea not in favor of a preferable relationship (usually based on superficial attractiveness) but rather to live her own independent life. Of all the Disney princesses, Merida was the first with whom I could really identify: strong, independent, a believable young woman, and with a more realistic body type than the usual sequined Barbie doll…at least until Disney fucked it up by tarting her up like JonBenét Ramsey (http://www.huffingtonpost.com/2013/05/08/merida-brave-makeover_n_3238223.html).

More recently, Frozen (still in theaters as of this writing) put an even more subversive twist on the usual princess-meets-prince story. I’ll warn you again, this plot has some twists and turns, and I’m about to discuss them, so if you haven’t seen it, and would rather not hear what happens, turn back now. While Brave was essentially a mother-daughter story, about a girl who wasn’t ready to settle down yet, Frozen was more of a sister story. And, while the protagonist of Brave wasn’t ready for a relationship, the princess in Frozen (like many young women) was all too eager to settle down.

There are actually two princesses in Frozen: the older, Elsa, who has crazy ice-magic, and the younger, Anna. The movie is essentially a story of the two sisters growing apart, and then the younger sister falling in love, and then everything going to shit. But a few interesting things happen along the way. The first is, when Anna announces that she’s in love, Elsa says what is perhaps the smartest thing any Disney princess has ever said: “You can’t marry someone you just met.” Fucking A. And what’s more, and here’s the spoiler, Elsa’s not just being an unromantic bitch here. She’s absolutely right. The dude, Hans, while apparently quite handsome and charming (the picture of a Disney prince), turns out to be a scheming, murderous prick. Along the way, Anna meets a rough-around-the-edges type, Kristoff, who seems perfectly positioned to take Hans’ place as Anna’s beloved. But that’s not quite how it plays out. It’s complicated, but basically the endgame is that the two sisters’ love for each other wins out, and romantic love takes a back seat. I was disappointed, of course, that the movie didn’t end with Hans killing Anna and then Elsa flipping her shit in a Carrie-like rage, impaling everyone present on giant stabby icicles of blood, but then…there’s a reason I don’t write for Disney.

Like Brave, Frozen is ultimately a feel-good kids movie, the kind of nepenthe parents administer to shut the kids up for an hour and a half, but that’s inherent to the medium. As kid-fodder goes, Brave and Frozen are better than most of their predecessors. Is there a greater lesson here, for those of us outside the field of making animated films for children? Hell, I don’t know. But I’ll say this: Frozen gets a hell of a lot better once it’s been run through the creative filter of the Internet, which has already yielded two excellent spinoffs: the movie’s “hit single,” Let It Go, performed in a plethora of languages (http://www.youtube.com/watch?v=ALUVJ_tyQ-E), beating Coke’s Super Bowl commercial to the punch, and clips from the film rendered hilarious through the unnecessary censorship of innocuous lines of dialog (http://www.youtube.com/watch?v=q0v7rFSUrGE).

For The Ladies

January 6, 2014

You’ve only got a few more days to catch Artemisia Gentileschi’s Judith Slaying Holofernes, on display through January 9th at the Art Institute (http://www.artic.edu/exhibition/violence-and-virtue-artemisia-gentileschi-s-judith-slaying-holofernes). The painting is not to be missed on its own merits, but its content, coupled with Gentileschi’s biography, also invites a broader discussion of artists who are also women. I’d like to think that this conversation is over, that the playing field is level and we can all just be artists regardless of what we’ve got under our underwear, but reminders to the contrary are all too common: this month marks the one-year anniversary of Georg Baselitz’s unfortunate remark to Spiegel Online that “women don’t paint very well.”

Of course pretty much everyone with a pulse derided Baselitz for his opinion, and Sarah Nardi wrote an excellent piece for the Chicago Reader excoriating him with a side-by-side comparison of his work with that of some female painters (http://www.chicagoreader.com/Bleader/archives/2013/02/05/women-cant-paint-and-neither-can-georg-baselitz). Baselitz is old news by now, but it’s only a matter of time before someone else says something equally stupid in public, and we’ll have to have this conversation all over again. We could save ourselves a lot of trouble if everybody would just go and take a look at Judith, because it’s pretty much impossible to argue with.

One person I would really like to have seen corner Baselitz in front of Gentileschi’s painting is Grace Hartigan, the late painter and director of the Hoffberger School of Painting when I was a graduate student there. Grace was a female painter in the male-dominated Abstract Expressionist scene, and she certainly held her own with the boys. Grace’s relationship with gender was a bit complicated; she once exhibited her work under the name George Hartigan. We asked her about it, but I never quite understood her reasons for doing that.

Hartigan once said something interesting about why, for a long time, she refused to participate in all-woman shows. Her reasoning was essentially that by participating in a show consisting entirely of women, she would have implied an acceptance that she couldn’t compete with her male counterparts. She seemed to have softened her views before her death in 2008; her work was included in an all-female exhibition curated by Leslie King Hammond which I saw in New York sometime between 2005 and 2007. I’ve curated an all-female show myself, and I believe such shows can have value: for example, when the work has something in common other than the genitalia of its makers. Nevertheless, her argument has stuck in my memory.

While from time to time a group show of female artists can present something drawn from a commonality of experience or a shared concern, it should by now be clear that women need no handicap to stand on their own as painters, or as artists in any medium, in Chicago or anywhere else. For most of history women have been treated like a “minority,” albeit one comprising 51% of the population (I think John Lennon had something to say about this), but in today’s Chicago art scene women are well represented in just about any role there is to be played.

It doesn’t take any time at all to think of a female Chicago-based critic (Lori Waxman), gallerist (Linda Warren, Rhona Hoffman, Monique Meloche), or as we are all increasingly becoming, multi-role cultural facilitator (Michelle Grabner, Shannon Stratton, Claire Molek). Female artists, while I’m not going to do the math on what percentage of gallery rosters they form, certainly form at least half of my favorite artists in Chicago: Lauren Levato-Coyne, Jenny Kendler, and Deb Sokolow do amazing work; Noelle Mason, although she’s living and working in Florida now, cut her teeth in Chicago and still shows here.

If you’ve been to at least a couple of shows in Chicago in the past year, you’ve probably got your own favorite artists in mind, and odds are that more than a few are women. Some artists make work that isn’t particularly gendered; it could as easily have been made by a man as by a woman. In other cases, though, artists draw on their own gender, and the unique experiences that come with it. This is true of male artists as well as female. A recent example was Chicago painter Julia Haw’s “Pussy Power,” from last year. Artemisia Gentileschi’s “Judith Slaying Holofernes” is another piece that draws its power from its creator’s gender. It is impossible to separate Gentileschi’s biography from the image, especially when one compares it with treatments of the same subject by male painters (most notably Caravaggio). Its presence in Chicago is a rare opportunity to see one of the most important and powerful works of the Seventeenth Century, and there is no excuse not to see it. Wind chills of fifty degrees below zero come close to being one, but bundle up and make it out to the Art Institute in the next couple of days to see it before it’s gone.

A New Samarkand: Regionalism in the Age of Globalization

December 2, 2013

Bazaar in Samarkand, an illustration from Jules Verne’s novel “Claudius Bombarnac”

I should say now that I have never been to Samarkand (in present-day Uzbekistan), and that my views of it have been shaped almost entirely by its mythical role in Clive Barker’s novel Galilee. A quick bit of slacker research, though, reveals that the essential character of the city matches Barker’s description pretty well. Situated on the Silk Road, Samarkand was a city of wonders, the ultimate crossroads, a center of commerce as well as of art and culture. People came from thousands of miles to experience the wonders of the city itself, but more so, to meet and trade with one another.

It sounds like the perfect sales pitch for globalization. What city wouldn’t want to model itself after old Samarkand? Open to all, a place where one can find anything, from anywhere, yet possessing its own unique character, its glories and wonders its own, Samarkand strikes in our imagination the perfect cross between melting pot and salad bowl.

Did Samarkand itself ever live up to this ideal? This is probably unknowable. The tendency to romanticize history is undeniable, and certainly our own cosmopolitan cities fall short of this utopia. Diversity is assimilated into a global monoculture which is then exported, and we end up feeding our client states the predigested remains of their own children. (Metaphorically speaking. For now.)

This cynical, CrimeThink version is also incomplete, of course. I’ve eaten Chinese food in Berlin and Ethiopian food in Baltimore. The first time I had a Big Mac was in Tokyo. I haven’t researched Taco Bell penetration in Mexico, because I’m afraid of what I’ll find, but I do remember walking past a bar in San Miguel de Allende and hearing a pretty badass cover of a Metallica song, the lyrics sung in Spanish. (I don’t remember what song, but this was 1996, so most of the shitty ones hadn’t been released yet.) It is impossible not to think of William Gibson in these moments, which have a surreal magic about them.

On the other hand, there is perhaps a danger in the ubiquity of the other. Is it a disincentive to travel, when so much of our destination has been brought to us on a plate? Does, in fact, this single-serving multiculturalism blend the rest of the world into the homogeneously labeled “World Music” aisle of an obsolescent record store? (And reflect, if one goes into a music store in Beijing, does American pop go in the “World Music” aisle? Most likely not, and the reason is the problem. We have exoticized the others, even to themselves.)

Why travel, then, if anyone, anywhere, can buy a didgeridoo, a foo lion, and a Panang curry? “To see the place itself!” some argue, or “To meet the people!” And this is good, so long as it is remembered. So have fun in Miami, but remember, it’s just another art fair, unless you see the Everglades while you’re there.

Art fairs are actually a sort of microcosm of the Samarkand ideal in its imperfect manifestation. I’ve written about them before, as have many others, but never before in the shadow of the tents of the bazaars of Samarkand. Imagine! An art fair that stirred the senses with the sights and sounds and smells of the exotic! Something like what Tony Fitzpatrick described in his play, of the grand market in Istanbul: a thousand guys chasing him down, shouting, “Pashminas!” And one guy shouting, “Tube socks!”

But we don’t get that, at least not at any art fair I’ve been to. (And to be fair, I need to make it to some international ones.) So far, what I’ve seen at American art fairs is pretty much the same roster of blue chip galleries selling to blue chip collectors, damn the locals, who cower in the shadows of the big boys. Exceptions, sure. I’ve seen great, unexpected work at art fairs. And some Chicago dealers have sold to out of town collectors at Art Chicago and at Expo. Local collectors do buy work (I have been on both ends of this transaction as an artist and as a small-time collector), but far too many of them are like the tourists visiting a Moroccan antiquities dealer I saw on Anthony Bourdain recently. “We call them penguins,” he said, waddling comically. “Their hands can’t reach their pockets.”

Homogeneity is the death of art. If a piece is expected, it’s pointless. Someone, I can’t now recall who, said, “If two artists are doing the same thing, one of them is unnecessary.” There is something to this. The old world of the Twentieth Century, the “Age of -isms,” decade-long proclamations of new world orders, each to be replaced by the next like the procession of coups in a string of Third World dictatorships, really ended with Pop Art. By the 1990s, Art History textbooks pointed to the future with a vague reference to pluralism and a prayer that wherever we were headed, Kenny Scharf wasn’t the one leading the way.

Pluralism, though, can become a homogeneity all its own. The art world embraces diversity not like Tamerlane (once the ruler of Samarkand) but like the Borg: “Your biological and technological uniqueness will be added to our own.” Less the great bazaar, and more a strip mall with both a Taco Bell and a Panda Express. It is an arms race in which we each struggle to strip mine our culture and experience faster than our competition, and we find that global monoculture is a cloud with a lining not of silver but of Strontium 90.

So everybody knows the fight is fixed, but what are you going to do about it? Revolution loses its luster once you’ve seen the sweatshops where they make the Guy Fawkes masks. And the obvious counterpoint to globalization, regionalism, has its own obvious failings. Living here in Flagstaff, Arizona, I see proof enough of that every day. Native crafts, particularly jewelry and ceramics, are strong here, but will always have to sit at the kids’ table of “fine craft,” that is, when they aren’t called “outsider art.” Among the non-Natives, imitations of these styles run strong (as, it must be said, do very good and original creations in these traditional craft media). Photography? Sure, as long as it’s of a mountain. And God help you if you can’t sell a painting of a raven in this town.

I’ve been thinking a lot about the Hairy Who, the Monster Roster, and the Chicago Imagists. Chicago, I know, is sick to some extent of their legacy, if only because they dominated the local scene so heavily for so long. But these three related movements did something unique in their time, diverging both from the Modernist, Greenbergian Ab Ex that was the status quo at the beginning, and from the slick, clean Pop Art going on in New York. Chicago had, for a time, its own thing, as rare and exotic as a screeching monkey, an ivory carving, or a previously unheard-of spice. This kind of regional movement, with the teeth to hold its own on the global stage, could emerge again, anywhere, in any city, any town, and if it did, it might provide the kind of true diversity that could make possible a Silk Road of the art world, a bazaar of the unexpected, a new Samarkand.

What Costume Shall The Poor Girl Wear?

November 4, 2013

Titling this post with a Velvet Underground quote, you might think I was going to talk about Lou Reed and his recent passing, but I’m not. That very worthy topic has been well covered by many others. Actually, it just seemed like a fitting quote, because I want to talk about costumes.

Of course Halloween has just come and gone, and that is the first thing most people think of when they hear the word “costume.” Costume, though, plays an important role in many aspects of life, including art. The word costume can be used to refer to any article of clothing or manner of dress. Usually, though, it implies something outside of the everyday. The depiction of historical costume is an important aspect of art history, whether it is the significance of the color of the Virgin Mary’s dress in an icon, the meaning of the steel gorget in a Rembrandt portrait (e.g. the one hanging in the Art Institute), or the absolutely pippin’ fur collar in Albrecht Durer’s later self-portrait (as well as that prison-striped number with the lace on the sleeves in his earlier one).

In some contemporary art, though, costume takes center stage. Matthew Barney’s Cremaster films feature ornate and elaborate costume and makeup effects throughout. In some cases these merely reinforce characters, such as Richard Serra in his workmanlike coveralls, or the opera singer in her baroque gown. In other cases, the costume creates the character, particularly when prosthetics and makeup effects are involved. Specific examples include the woman with the glass leg, who is then transformed into an anthropomorphic cheetah, and Barney as a faun or satyr. Makeup and costume also cut to the heart of Barney’s subject matter, with numerous characters featuring prosthetically applied, bizarre genitalia. Their rubbery flesh evokes the rubber crotch demanded by censors for Linnea Quigley in her role as the punker chick Trash, dancing nude on a grave in Return of the Living Dead.

Some artists create costumes which transcend the body inside them to become wearable sculptures. The most obvious example is of course Nick Cave, whose “soundsuits” are frequently exhibited as static display objects. It could be argued that they reach their full potential only when inhabited, in massive group performances in which their sound-making properties are harnessed, but most of us encounter them hung on armatures, evoking Bruce Wayne’s armor collection from Tim Burton’s Batman. They remind me in particular of the one that Alexander Knox (Robert Wuhl) called “King of the Wicker People.”

We all make decisions about our appearance on a daily basis. Our motives may include vanity, status, the desire to attract sexual partners, or an appreciation of fashion as an aesthetic experience. I’m known to those who don’t know me personally as “the guy in the kilt,” and while it started as a personal decision to wear something I thought looked cool, it has certainly helped to make my appearance more memorable to others as well. Incidentally, since moving to Flagstaff, I’ve been rocking the kilt 24/7. I mean, I take it off when I sleep, but it has been over three months since I’ve put on a pair of pants.

Some others in Chicago’s art scene have distinctive aspects to their appearance. My wife Stephanie Burke’s asymmetrical hairstyle (which I do for her) is one example. Anna Trier always wears two different earrings. Jenny Kendler was just voted Chicago’s best-dressed artist, a title I’ve attributed to her for years. Wesley Kimler has his bright red suit, invariably paired with paint spattered shoes.

Many others dress more or less like everybody else. I was once at an opening at Pentagon, and was surrounded by a half dozen artist friends of mine, each and every one of whom was wearing a flannel and blue jeans. They prefer to reserve their creativity for their artwork, apparently. Even if one doesn’t put much thought into one’s appearance on a daily basis, Halloween is an opportunity to reflect on the role of costume as an alternative creative outlet, at least once a year.