Some of the most iconic sounds in film are those of the future, a future often depicted as dystopic. The sound of the future, as proclaimed by Italian composer Giorgio Moroder on Random Access Memories, is the synthesizer. Synthesizers were fundamental in shaping the soundscapes of science-fiction films in the 1980s. Films like Blade Runner (dir. Ridley Scott, 1982) and Akira (dir. Katsuhiro Otomo, 1988) are saturated with synthesized sounds that reverberate, their echoes bouncing off walls cluttered with neon lights and buildings cloaked in advertisements. But the sounds that gave life to the people, architecture, transportation, and gadgets of the future were very much the sounds of the present: the synthesized sounds of the 80s.

Unlike the recorded noise-sounds of Musique Concrète, synthesized sounds are generated electronically with a synthesizer. The first synthesizer was developed at the turn of the 20th century and was a mastodonic machine that weighed seven tons. It wasn't until 1964, with the advent of the smaller and more affordable Moog synthesizer, that synthesized sounds began to proliferate. In film, a new genre of soundscapes emerged that gave credibility to the outlandish near-futures depicted in sci-fi.
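In the most stripped-down terms, a synthesized sound is a waveform generated and shaped by circuitry (or, today, by code) rather than captured by a microphone. The sketch below, written in Python with illustrative values rather than any particular instrument's settings, generates a single sine tone and shapes it with a simple attack/release envelope.

```python
# A minimal sketch of electronic sound generation, using NumPy.
# The waveform, pitch, and envelope values are illustrative choices,
# not a recreation of any particular synthesizer.
import numpy as np

SAMPLE_RATE = 44100  # samples per second


def oscillator(freq_hz, duration_s, sample_rate=SAMPLE_RATE):
    """Generate a plain sine wave: the basic building block of a synthesizer voice."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t)


def envelope(signal, attack_s=0.05, release_s=0.5, sample_rate=SAMPLE_RATE):
    """Shape the tone with a simple attack/release envelope so it fades in and out."""
    env = np.ones_like(signal)
    a = int(attack_s * sample_rate)
    r = int(release_s * sample_rate)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return signal * env


# A two-second synthesized tone at 110 Hz, entirely generated, nothing recorded.
tone = envelope(oscillator(110.0, 2.0))
```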

Synthesizers changed the breadth, depth, and directionality of soundscapes by giving composers direct control over frequency and pitch. Low-frequency sounds reach across and spread outwards; they convey weight and heaviness. High-frequency sounds are pointedly directional. Low frequencies also lend themselves to aural cohabitation: contiguous low-frequency sounds mesh, engendering rich, textured soundscapes. The polyphonic interplay between low-frequency and high-frequency sounds in sci-fi films resulted in grandiose soundscapes as arresting and intricate as their mesmeric visuals.
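As a rough illustration of that layering, the sketch below (again with arbitrary, illustrative frequencies and levels) mixes two slightly detuned low drones, which mesh into a single shifting texture, with a short high-frequency burst that remains perceptually distinct above them.

```python
# A hedged sketch of combining low- and high-frequency layers into one "soundscape".
# All frequencies and levels are arbitrary illustrative values.
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(4 * SAMPLE_RATE) / SAMPLE_RATE  # four seconds of time stamps

# Two slightly detuned low oscillators: their slow beating meshes into a single
# shifting, textured drone rather than two separable tones.
drone = 0.4 * np.sin(2 * np.pi * 55.0 * t) + 0.4 * np.sin(2 * np.pi * 55.5 * t)

# A short high-frequency event: quiet overall, but perceptually distinct because
# it occupies a register the drone leaves empty.
signal = np.zeros_like(t)
burst = np.sin(2 * np.pi * 2200.0 * t[: SAMPLE_RATE // 4])
signal[2 * SAMPLE_RATE : 2 * SAMPLE_RATE + burst.size] = 0.2 * burst

soundscape = drone + signal  # the layered result as a single array
```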

 

To speak of soundscapes is to think of aural layers: of how background sounds and foreground sounds assemble to create a soundscape with particularities endemic to place. R. Murray Schafer coined the portmanteau "soundscape" in the 1960s and defined descriptive criteria to differentiate three characteristics of a soundscape: keynote sounds, signal sounds, and soundmarks.

Keynote sounds are the quotidian sounds that fade through familiarity; continuous conditioning enmeshes the listener, and these sounds disappear into the background. Keynote sounds, also called atmospheric sound in film, give continuity between scenes and are usually only perceived after they disappear, like a droning sound that becomes apparent the moment it stops. Signal sounds are foreground sounds, the ones consciously registered and listened to: the high-pitched scream that stands out amidst the clamor of a crowd, or a window shattering on impact. While keynote and signal sounds are subjective, soundmarks are the most elusive of the three because of their affective value. They are the sounds particular to a place or a community. In film, soundmarks are conjured through fabrication: situational and emotional associations are conferred onto a sound when it begins to appear in patterns, connected to emotions and characters.

These terms take on a duality in film because the soundscape inhabited by the characters differs from the soundscape the viewer experiences. This is particularly true in films where the non-diegetic soundscape (sounds that do not originate within the world of the film) is as robust as the diegetic soundscape (sounds that emanate from the world depicted in the film). By using only synthesized sounds in Blade Runner, Greek composer Vangelis created a liminal space where the transition between diegetic and non-diegetic sound is ambiguous. The synthesized, amelodic sounds of dystopic LA became the scaffolding supporting the oneiric scores, themselves composed from synthesized sounds.

It is peculiar to watch Blade Runner in 2021, when the foretold future of 2019 is now in the past: the noir atmosphere with scarcely a ray of light, the flying cars, and the buildings that look like gilded pyramids were not quite the LA of 2019. Distilling Blade Runner to a movie that ponders the biotech ethics of creating smart, sentient technology would be erroneous. The visual intricacies and aural soundscapes are far more engrossing than the plot and character development, which pale in comparison.

 

Rick Deckard (Harrison Ford) is a blade runner (bounty hunter) sent to hunt and kill replicants (AI androids) created as slave labor for off-world colonies. His task of killing the replicants that have infiltrated Earth becomes ethically fraught when he falls in love with one of them and forces her to sleep with him, which is rape. In the end, Deckard is confronted with his feelings, having murdered or disposed of (depending on the side of the debate you are on) androids that, along with being highly intelligent, also showed signs of sentience. The ethical and moral questions raised in Blade Runner are unquestionably timely and urgent as our world becomes more digital and our technology more sentient. But the story that Blade Runner tells is in its visual and aural portrayal of the future.

 

Vangelis famously composed the soundscape for Blade Runner live in his London studio, using Yamaha CS-80 synthesizers to create his palette of electronic sounds. Ridley Scott sent Vangelis a rough cut of Blade Runner before he began composing, allowing Vangelis to compose as he watched and to design sound effects and scores in direct relation to the visuals. One of the most prominent effects applied to sound, dialogue, and the score was reverberation (reverb). Reverb, relatively new at the time, allowed Vangelis to delay the decay of a sound for up to seventy seconds. By extending the echoes of sounds, reverb allowed the different sonic lives of architecture, markets, and transportation to mesh into a cohesive aural atmosphere. Environmental decay, often synonymous with dystopia, becomes palpable in the oppressive aural atmosphere of synthesized sound in Blade Runner. Dystopia is where we will end up if we continue along the path our world is currently on. Much of our contemporary world already features dystopic characteristics: environmental and human exploitation, systemic oppression and racism, police brutality, and regimes of fear. If we can inhabit the future through speculation, imagining how non-dystopic futures might sound could provide insight into how they might look.
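The principle behind that effect can be sketched in a few lines: a dry sound is smeared across time by combining it with a long, decaying tail. The example below is a generic digital-reverb idea, convolution with exponentially fading noise, not a reconstruction of Vangelis's actual studio chain; the seven-second decay and the noise-based impulse response are illustrative assumptions.

```python
# A rough sketch of reverberation as convolution with a decaying impulse response.
# This is a generic digital-reverb idea with assumed parameters, not the specific
# hardware or settings used on Blade Runner.
import numpy as np

SAMPLE_RATE = 44100


def simple_reverb(dry, decay_s, sample_rate=SAMPLE_RATE, mix=0.5):
    """Extend a sound's decay by convolving it with exponentially fading noise."""
    n = int(decay_s * sample_rate)
    t = np.arange(n) / sample_rate
    impulse = np.random.randn(n) * np.exp(-3.0 * t / decay_s)  # fades toward silence
    wet = np.convolve(dry, impulse)
    wet /= np.max(np.abs(wet)) + 1e-9                  # keep levels in range
    dry_padded = np.pad(dry, (0, wet.size - dry.size))  # match lengths
    return (1 - mix) * dry_padded + mix * wet


# A short percussive click whose tail is stretched far beyond its original length.
click = np.zeros(SAMPLE_RATE)
click[0] = 1.0
long_tail = simple_reverb(click, decay_s=7.0)  # a seven-second decay, for example
```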

Sources / Further Reading

Frelik, Paweł. "'I've heard things you people wouldn't imagine': Blade Runner's aural lives." Science Fiction Film and Television, vol. 13, no. 1, 2020, pp. 113-118. Project MUSE, muse.jhu.edu/article/750551.

Chion, Michel, and James A. Steintrager. Sound: An Acoulogical Treatise. Durham, Duke University Press, 2016.

Ebert, Roger. "Blade Runner Movie Review & Film Summary (1982)." RogerEbert.com, 1982, http://www.rogerebert.com/reviews/blade-runner-1982-1.

Kael, Pauline. “‘Blade Runner,’ Reviewed.” The New Yorker, 1982, http://www.newyorker.com/magazine/1982/07/12/baby-the-rain-must-fall.

“The World’s First Synthesizer Was a 200-Ton Behemoth.” Smithsonian Magazine, http://www.smithsonianmag.com/innovation/worlds-first-synthesizer-was-200-ton-behemoth-180970828/. Accessed 14 Feb. 2021.