How choreography can help robots come to life

Consider this scene from the 2014 film Ex Machina: a young coder, Caleb, is in a dark room with a scantily clad femmebot, Kyoko. Nathan, a brilliant roboticist, stumbles in drunk and brusquely tells Caleb to dance with the Kyoko bot. To kick things off, Nathan presses a wall-mounted panel and the room lighting suddenly turns an ominous red as Oliver Cheatham’s disco classic “Get Down Saturday Night” begins to play. Kyoko – who seems to have done this before – wordlessly starts dancing, and Nathan joins his robot creation in an intricately choreographed routine of pelvic thrusts. The scene suggests that Nathan has infused his creation with disco functionality, but how did he convey the dance to Kyoko, and why?

Ex Machina may not answer these questions, but the scene does gesture towards an emerging field of robotics research: choreography. Choreography is, in essence, the making of decisions about how bodies move through space and time. In the context of dance, choreography is the articulation of movement patterns for a particular setting, usually optimised for expressiveness rather than utility. Being attuned to the choreographies of the world means noticing how people move and interact in complex, technology-laden environments. Choreo-roboticists (ie, roboticists who work choreographically) believe that incorporating dancerly gestures into machine behaviour will make robots seem less like industrial automatons and more lively, empathetic and attentive. Such an interdisciplinary intervention could make robots easier to relate to and interact with – no small feat given their proliferation in consumer, medical and military contexts.

While a concern for the movement of bodies is central to both dance and robotics, historically the two disciplines have rarely overlapped. On the one hand, the Western dance tradition has perpetuated a broadly anti-intellectual culture that poses real obstacles to anyone interested in interdisciplinary research. George Balanchine, acclaimed founder of the New York City Ballet, famously told his dancers: “Don’t think, dear, do.” Thanks to this kind of culture, the stereotype of dancers as slavish bodies better seen than heard calcified long ago. Meanwhile, the field of computer science – and, by extension, robotics – exhibits its own, related problems with bodies. As the sociologists Simone Browne, Ruha Benjamin and others have shown, emerging technologies have a long history of treating human bodies as objects of surveillance and speculation. The result is the perpetuation of racist, pseudoscientific practices such as phrenology, emotion-reading software and AIs that claim to know whether you are gay from how your face looks. The body is a problem for computer scientists, and the field’s overwhelming response has been technical ‘solutions’ that try to read bodies without meaningful input from their owners. That is, insisting that bodies be seen but not heard.

Despite the historical divide, it may not be too far-fetched to consider roboticists as choreographers of a specialised kind, and to think that the integration of choreography and robotics could benefit both fields. The movement of robots is not usually studied for meaning and intentionality the way dancers’ movement is, yet roboticists and choreographers grapple with the same fundamental concerns: articulation, extension, force, shape, effort and dynamics. “Roboticists and choreographers want to do the same thing: understand and convey subtle choices in movement within a given context,” writes Amy LaViers, a certified movement analyst and founder of the Robotics, Automation and Dance (RAD) Lab, in a recent National Science Foundation-funded paper. When roboticists work choreographically to determine robotic behaviour, they make decisions about how human and nonhuman bodies move expressively in intimate proximity to one another. This differs from the utilitarian parameters that govern most robotics research, where optimisation predominates (does the robot do its job?), and what a device’s movement means, or makes someone feel, is of no obvious consequence.

Madeline Gannon, founder of the research studio AtonAton, is at the forefront of research into robot expressivity. Her installation Manus, commissioned by the World Economic Forum, illustrates the possibilities of choreo-robotics in both its brilliant choreographic considerations and its achievements in innovative machine building. The piece consists of 10 robotic arms displayed behind a transparent panel, each stark and brilliantly lit, reminiscent of the production design of techno-dystopian films such as Ghost in the Shell. Robotic arms of this kind are built to perform repetitive labour and are commonly put to utilitarian purposes such as painting car chassis. But when Manus is activated, the arms embody none of the expected, repetitive rhythms of the conveyor belt; instead they appear alive, each moving independently to interact animatedly with its environment. Depth sensors installed at the base of the robotic platform track the movement of human observers through the space, measuring distances and responding iteratively. This tracking data is distributed throughout the robot system, acting as a shared field of view for all the robots. When passers-by get close enough to a robotic arm, it “looks” closer, tilting its “head” in the direction of the stimulus, then leaning in to engage. Such simple, subtle gestures have been used by puppeteers for millennia to imbue objects with animus. Here they have the cumulative effect of making Manus seem curious and alive. These small choreographies give the appearance of personality and intelligence. They are the functional difference between a haphazard row of industrial robots and the coordinated movements of an intelligent pack.
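The behaviour described above – a shared tracking feed, with each arm orienting toward a nearby observer – can be sketched in a few lines of code. This is a minimal illustration, not Gannon’s implementation: the class names, coordinates and the engagement distance are all invented for the example.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a proximity-triggered "look" behaviour:
# a shared list of tracked observers, and each arm turning toward
# the nearest one once it comes within an engagement distance.
ENGAGE_DISTANCE = 1.5  # metres; illustrative threshold

@dataclass
class Arm:
    x: float              # arm base position along the row (metres)
    heading: float = 0.0  # current "gaze" angle (radians)

    def update(self, observers):
        """Tilt toward the nearest observer if close enough, else stay idle."""
        if not observers:
            return "idle"
        # Shared field of view: every arm sees every tracked observer.
        nearest = min(observers, key=lambda p: math.hypot(p[0] - self.x, p[1]))
        dx, dy = nearest[0] - self.x, nearest[1]
        if math.hypot(dx, dy) <= ENGAGE_DISTANCE:
            self.heading = math.atan2(dy, dx)  # "look" toward the stimulus
            return "engaged"
        return "idle"

arms = [Arm(x=float(i)) for i in range(10)]  # a row of ten arms, 1 m apart
observers = [(2.2, 1.0)]                     # one passer-by at (x, y)
states = [arm.update(observers) for arm in arms]
```

With one passer-by standing near the third arm, only the arms within the threshold turn to “look”; the rest remain idle – the same local rule that, repeated across the row, produces the pack-like impression the installation creates.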
