Keywords: #MeToo, award-winning, interactive installation, machine learning, digital fabrication, parametric design, digital audio, digital signal processing
Cacophonic Choir was awarded "Best in Show" in the SIGGRAPH 2020 Art Gallery and was nominated for the New Technological Art Award (NTAA '22).
Cacophonic Choir (in collaboration with Hannen Wolfe and Alex Bundy) is an interactive installation aimed at bringing attention to the first-hand stories of sexual assault survivors, and the ways such stories may be distorted by the media and in online discourse. The work is composed of nine embodied vocalizing agents distributed in space. Each agent tells a story. From a distance, the viewer hears an unintelligible choir of fragmented stories and distorted voices. As the viewer approaches an agent, the story becomes sonically clearer and semantically more coherent. When in the agent’s immediate personal space, the viewer can hear the first-hand account of a sexual assault survivor.
Digital and mass media can empower oppressed people by providing them with platforms for sharing their stories, as we have seen in the #MeToo movement. Participation on these platforms can, however, also expose the stories to doubt, distortion, and hostility. For example, it has been found that on Twitter, tweets that engage in victim blaming get retweeted more than ones that support sexual assault survivors (Stubbs-Richardson, Rader, and Cosby 2018). Media coverage of sexual assault, especially combined with the hostility and distortion that one often finds on these platforms, can be overwhelming to survivors. Cacophonic Choir is aimed at both reflecting these feelings of being overwhelmed and encouraging people to step away from these arenas to listen to individual survivors’ accounts. While sexual violence is a systemic problem, the experiences of those who have survived it are all different and deserve to be heard.
Designed to embody and reflect this feeling of inundation in the face of hostility and distortion, the installation highlights the first-hand stories of sexual assault survivors. It is composed of nine embodied vocalizing agents distributed in space which, from a distance, all look alike. Their vocalizations are sonically distorted, semantically fragmented, and indistinguishable from one another, altogether forming an unintelligible choir. As the viewer approaches a particular agent, three things happen gradually: the agent’s voice becomes sonically clear, the narration becomes semantically coherent, and the membrane that envelops the agent gets brighter and more transparent, rendering the unique form inside visible. When in the agent’s immediate personal space, the viewer hears the first-hand account of a sexual assault survivor. Here, we use the spatial distance between the agent and the viewer as a metaphor for the ‘distance’ between the original story as told by the survivor and its renditions in social and mass media.
Stubbs-Richardson, Megan, Nicole E. Rader, and Arthur G. Cosby. 2018. "Tweeting rape culture: Examining portrayals of victim blaming in discussions of sexual assault cases on Twitter." Feminism & Psychology, 28 (1): 90-108.
The data used for this piece consists of over 500 first-hand accounts of sexual assault survivors collected from The When You're Ready Project, an online platform for “survivors of sexual violence to share their stories and have their voices heard” (Reid 2019). The aim of this installation is not to present the visitor with precise statistics and data about sexual assault, but rather to reflect the ways in which the stories may be amplified or distorted in online media. To this end, we used the textgenrnn library to train an LSTM (long short-term memory) recurrent neural network on the stories from The When You're Ready Project, saving the model at various stages of training. These checkpoints let us modulate the original narrative, generating versions with different levels of semantic distortion. We converted the generated texts to audio with text-to-speech synthesis, which allowed us to modulate the linguistic and auditory coherence of these narratives based on the proximity of the observer to the narrator. Using a proximity sensor, we mapped the distance between the agent and the viewer to the different training levels of the RNN (Figure to the right), so that the full narrative is revealed only when one is in very close proximity to a given voice. The voice is also filtered based on the viewer’s proximity: the sonic response combines text-to-speech synthesis with granular synthesis to create a stuttering effect that dissipates as a visitor comes closer, representing how survivors' stories are distorted.
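The proximity mapping described above can be sketched as follows. The sensor range, the number of saved checkpoints, and the filter cutoffs are illustrative assumptions for this sketch, not the installation's actual values.

```python
def checkpoint_for_distance(distance_cm, max_range_cm=300.0, n_checkpoints=5):
    """Map viewer distance to an RNN training-checkpoint index.

    Index 0 = fully trained model (coherent text, viewer very close);
    the highest index = earliest checkpoint (most fragmented text).
    """
    d = min(max(distance_cm, 0.0), max_range_cm)          # clamp sensor reading
    return round((d / max_range_cm) * (n_checkpoints - 1))


def lowpass_cutoff_for_distance(distance_cm, max_range_cm=300.0,
                                min_hz=500.0, max_hz=8000.0):
    """Map viewer distance to a low-pass filter cutoff for the voice.

    A closer viewer gets a wider bandwidth, i.e. a sonically clearer voice;
    at the edge of the sensor range the voice is heavily muffled.
    """
    d = min(max(distance_cm, 0.0), max_range_cm)
    closeness = 1.0 - d / max_range_cm                    # 1.0 when touching
    return min_hz + closeness * (max_hz - min_hz)
```

In use, each agent would poll its proximity sensor, pick the checkpoint whose generated text to play, and set the filter cutoff accordingly, so that semantic and sonic coherence change together as the viewer approaches.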
In addition to the sonic and semantic modulation, the installation also responds visually by illuminating from within. The body of each agent is composed of a sculptural form encased in a soft translucent membrane. Some of these forms are fully contained within the membrane, while others burst outwards. The proximity of the visitor modulates the light source within the membrane: the translucent membrane becomes gradually more transparent as one approaches the agent, revealing the intricate geometric form within. Here, our intention was to reflect the fact that the individuals and their voices may look and sound alike from a distance, but when focused on individually, each is found to be complex and unique. Since opacity and transparency carry strong connotations of privacy and publicness in many cultures, this simple light-based interaction, coupled with the material properties of the sculptural elements (i.e., their translucency), allowed us to reflect the inherent tension in the public coverage of private events.
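The light response can be sketched in the same spirit. The sensor range and the 8-bit PWM duty cycle used here are assumptions for illustration, not the installation's actual hardware values.

```python
def brightness_for_distance(distance_cm, max_range_cm=300.0, max_duty=255):
    """Map viewer distance to an LED brightness (PWM duty cycle).

    A nearer viewer produces a brighter interior light, which makes the
    translucent membrane appear more transparent and reveals the form inside.
    """
    d = min(max(distance_cm, 0.0), max_range_cm)          # clamp sensor reading
    closeness = 1.0 - d / max_range_cm                    # 1.0 when touching
    return int(closeness * max_duty)
```

On a microcontroller this value would be written to a PWM output driving the LED inside each membrane, updated continuously as the proximity sensor is polled.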
Reid, Lauren. 2019. “The When You’re Ready Project”: www.whenyoureready.org/aboutwyr [Accessed 14 February 2021].