In this project we will develop an installation that explores a future collaborative approach to distributed and participatory design. The project is based on a previous installation shown at the Seoul Biennale for Architecture and Urbanism. It consists of a microphone, a video screen and a robotic arm. Visitors are invited to speak into the microphone. This simple act triggers a custom algorithm to generate, in real time, an evolving three-dimensional representation of the speaker’s voice. The live feedback on the screen creates an immediate learning loop in which the visitor, almost instinctively, learns how to shape the virtual object by modulating his or her voice.

Once satisfied with the object displayed on the screen, the visitor can save the configuration. This in turn automatically activates the robot arm, which carves the chosen shape out of a block of foam by means of a custom attachment. The process of extracting the “shape of a voice” from a foam block simultaneously creates both the desired object and its negative form: the visitor is presented with the result of the exploration to take home as a souvenir of a possible future, while the residual imprints of the voices are aggregated via an algorithm into a sculptural wall. The result is an endless range of individually formed pieces, composed of a patterned surface that represents the physical/digital translation of the visitors’ sound inputs and individual voices, whether as words, cries, songs or simply a breath.
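One way to picture the voice-to-shape mapping is as a surface of revolution: the loudness envelope of the recorded signal over time defines a radial profile, which is then revolved into a 3D mesh. The sketch below illustrates this idea only; the installation’s actual algorithm is not documented here, and all names (`voice_to_profile`, `revolve`, the base radius) are hypothetical.

```python
import math

def voice_to_profile(samples, n_rings=32, window=256):
    """Map an audio signal to a radial profile: one radius per time
    window, derived from that window's RMS amplitude.
    (Hypothetical sketch -- not the installation's documented mapping.)"""
    radii = []
    step = max(1, len(samples) // n_rings)
    for i in range(n_rings):
        chunk = samples[i * step : i * step + window]
        if not chunk:
            chunk = [0.0]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        radii.append(0.2 + rms)  # small base radius keeps the shape connected
    return radii

def revolve(radii, segments=16):
    """Revolve the profile around the z-axis: ring i sits at height i
    with radius radii[i], sampled at `segments` angular positions."""
    verts = []
    for i, r in enumerate(radii):
        for j in range(segments):
            a = 2 * math.pi * j / segments
            verts.append((r * math.cos(a), r * math.sin(a), float(i)))
    return verts

# Example: a synthetic "voice" whose loudness swells and fades,
# standing in for microphone input.
samples = [math.sin(t * 0.3) * math.sin(t / 400.0 * math.pi)
           for t in range(4000)]
profile = voice_to_profile(samples, n_rings=10)
mesh = revolve(profile, segments=12)
print(len(mesh))  # 10 rings x 12 segments = 120 vertices
```

A louder passage widens its ring, a quiet one narrows it, which mirrors the feedback loop described above: the visitor hears, sees, and adjusts.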
See also: http://ddu-research.com/communication-landscapes/
The installation will be further developed (e.g. novel module aggregations, new interfaces) for display at an architecture festival in Berlin in summer 2018.
The project is supervised by Prof. Dr.-Ing. Oliver Tessmann.
Project available until: Only for IREP Spring; project ends Dec 2018.