
The Final Cut’s new album Process was recorded in two places: a cavernous music studio in Berlin, and a Brooklyn dining hall during an immersive culinary experience in which sound was among the items on the menu. “With its swarming, chirping creatures and metallic thuds, it sounds like a cross between a distorted, futuristic version of one of the more patient strains of industrial and drone music,” writes a critic for the experimental music magazine Ear Wave Event. Somehow, the anonymous writer claims that the triangulation of Berlin, Brooklyn, and drone music pays homage to Italian culture. Process, if we’re to trust the critic, is a messy hodgepodge of instruments, recording processes, and cultural influences.

But the Final Cut’s album doesn’t actually exist. The Final Cut doesn’t, either. They are a fever dream written by artificial intelligence for Issue 5, the newest edition of Ear Wave Event, which came out in February. Since 2013, composer, writer, and academic Bill Dietz and multimedia artist Woody Sullender have co-edited the cerebral journal on “the sonic,” which primarily features essays that analyze sound with the language of art history rather than that of music journalism. Previous issues have focused on theories of listening, technologies for the distribution of sound, and sound’s relationship to sex and sexuality. The experiment of Issue 5 is a departure from Ear Wave Event’s usual fare. Dietz and Sullender fed a neural network thousands of articles from publications like Pitchfork, The Wire, and Spin, and then invited musicians to read the generated text and create music to fulfill the AI’s fantasies. The result is a strange, nearly incoherent music publication and a wildly imaginative challenge for composers.
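The editors haven’t published the technical details of their model, but the pipeline they describe (train a text generator on a pile of existing reviews, then sample new ones from a short prompt) can be sketched with off-the-shelf tools. The snippet below uses the open-source GPT-2 model from the Hugging Face transformers library purely as an illustration; the actual architecture, training corpus, and settings behind Issue 5 are not public, so every choice here is an assumption.

```python
# Hypothetical sketch: sampling faux music criticism from a pretrained
# language model. The real issue's model and training data are not public.
# Requires the `transformers` and `torch` packages.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Seed the generator with the kind of opening line a review might use.
prompt = "The Final Cut's new album Process was recorded in two places:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) keeps the prose varied,
# if not exactly coherent.
outputs = model.generate(
    **inputs,
    max_length=150,
    do_sample=True,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```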

An AI-generated cover for the fictional album Process.
Courtesy Ear Wave Event.

In response to the Process review, visual artist and musician Adam Cooper-Terán contributed a track called “Process #193,” a five-minute, grumbling, static-charged composition that begins slowly (“the effect is as though the score were being played in slow motion, with subtle organs slowly piecing up the layers of a long star-shaped piece that seems, at first, to be made of ice”) and speeds up to a quicker, scratching rhythm suggested by the review’s conclusion: “as the score develops, the layers thaw and the underlying structure slowly shifts into new places.”

By clicking the large blue button that says “Submit Music” at the bottom of a review, musicians can take part in the project and build out fictional albums. There are two tracks accompanying the review of Maneuvering Themes & Modes of Production’s The Sound of Music. “Ghost Technologies,” composed by Boston-based artist and writer Georgina Lewis, sounds like chiming alien signals captured on a staticky, faulty radar. The other, “In Kelvin’s Distance,” crafted by Brooklyn-based sonic and visual artist Ian Epps, is synth-heavy and operatic.

Issue 5 is a fun experiment that skewers the language of the critic. Pitchfork, one of the neural network’s sources, built its reputation on the overwrought writing style of founder Ryan Schreiber, who started the magazine fresh out of high school with no professional writing experience. Schreiber’s prose was parroted by a stable of amateur musicians-turned-critics. Washington Post writer J. Freedom du Lac once described Pitchfork as “the hilariously snarky, oft-elitist, sometimes impenetrable but entertaining and occasionally even enlightening Internet music magazine.” The flowery, pseudo-academic verbiage of “artspeak,” so often bemoaned in the art world, plagues criticism in other creative industries, too.

An AI-generated cover for the fictional album The Culture Of The Nile.
Courtesy Ear Wave Event.

More than satirizing a style of writing that is so formulaic and impenetrable that the AI can approximate it based on a limited corpus, Issue 5 hints that critics might be replaceable. We tend to think that automation will eliminate jobs involving rote, monotonous tasks, while professions that require eloquence, critical thinking, and interpersonal skills will be safe. Algorithms and artificial intelligence work off a set of rules and cannot adapt to the unexpected nuances of human emotions. The machine, we believe, will always make the logical choice over an impulsive one driven by feelings. Thus, critics still have special value.

Ear Wave Event, however, foreshadows a world where even writers are obsolete. Already, algorithms write news by digesting data to spit out stock reports and play-by-play recaps of baseball games. They string together words to tick off the five W’s of journalism (who, what, where, when, and why) in objective, fact-filled stories. But Ear Wave Event’s neural network deftly imitates the process of forming opinions. It creates vivid descriptions, convincing the casual reader that it has the intelligence to identify themes, ironies, or contradictions in the music. Close reading, however, reveals the writing to be meaningless word salad.
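Much of that automated journalism is less exotic than it sounds: structured data poured into prewritten sentence templates. A toy sketch, with field names and phrasing invented here for illustration:

```python
# Toy data-to-text generator: fills the five W's from a structured
# box score. Field names and phrasing are invented for illustration.
def recap(game: dict) -> str:
    return (
        f"{game['winner']} beat {game['loser']} {game['winner_runs']}-"
        f"{game['loser_runs']} at {game['venue']} on {game['date']}, "
        f"behind {game['star']}'s {game['star_stat']}."
    )

print(recap({
    "winner": "Brooklyn", "loser": "Berlin",
    "winner_runs": 5, "loser_runs": 3,
    "venue": "Prospect Park", "date": "Sunday",
    "star": "T. Ainsley", "star_stat": "three-run homer",
}))
```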

And so Issue 5 is just a hint of what’s to come. The neural network is still imperfect. It often contradicts itself mid-sentence. The album Ventures—a supposed collaboration between real filmmaker Spike Lee and fictional “drum machine bassist” Tim Ainsley—has a solo track where “a single drum machine is turned into a trio with at least four other drummers.” In another review, the algorithm gets stuck in a repetitive, mind-numbing loop, obsessively reworking the phrase “the work of the physicists” to generate hundreds of words in a block of text that would never make it past a human editor under normal circumstances. But Dietz and Sullender let the mistake run. They can relax, knowing that their obsolescence is still a long way off.
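That kind of loop is a familiar failure mode of text generators. A decoder that always greedily picks the single most likely next word can fall into a cycle it never escapes; the toy n-gram model below (its training text and code are invented here, not drawn from the issue’s actual system) gets stuck reworking the same phrase:

```python
# Toy illustration of degenerate repetition: a tiny n-gram model decoded
# greedily gets stuck reworking one phrase forever. The training text and
# model are invented for illustration, not the issue's actual system.
from collections import Counter, defaultdict

corpus = ("the work of the physicists is the work of the physicists "
          "and the work of the physicists").split()

# Count which word most often follows each pair of words.
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

context, output = ("the", "work"), ["the", "work"]
for _ in range(20):  # greedy decoding: always take the single top choice
    nxt = following[context].most_common(1)[0][0]
    output.append(nxt)
    context = (context[1], nxt)

print(" ".join(output))
# -> "the work of the physicists is the work of the physicists is ..."
```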

An AI-generated cover for the fictional album Trost.
Courtesy Ear Wave Event.

If Ear Wave Event wanted to take this commentary on automation one step further, Issue 5 would have accompanied its generated text with generated music. OpenAI, a research lab in San Francisco, has released MuseNet, a neural network that can create new music by imitating and blending styles of composers from Beethoven to Lady Gaga. But Dietz and Sullender were smart to invite musicians to contribute to the issue, giving composers a way to interact with the project, and inviting readers to discover new sound. An entirely automated arts ecosystem doesn’t need an audience.

Some reviews don’t have any tracks, but Ear Wave Event is still accepting submissions. Hopefully a musician will find the review for Cantata Dilemma’s almost self-titled album, Dilemma, and draw inspiration from the neural network’s descriptions of “music that seems to be fueled entirely by dread.”
