Deep Reinforcement Learning for Autonomous Robotic Tensegrity (ART) (2019)
abstract⁄The research presented in this paper is part of a larger body of emerging research into embedding autonomy in the built environment. We develop a framework for designing and implementing effective autonomous architecture defined by three key properties: situated and embodied agency, facilitated variation, and intelligence. We present a novel application of deep reinforcement learning to learn adaptable behaviours related to autonomous mobility, self-structuring, self-balancing, and spatial reconfiguration. Architectural robotic prototypes are physically developed with principles of embodied agency and facilitated variation. Physical properties and degrees of freedom are applied as constraints in a simulated physics-based environment where our simulation models are trained to achieve multiple objectives in changing environments. This holistic and generalizable approach to aligning deep reinforcement learning with physically reconfigurable robotic assembly systems takes into account both computational design and physical fabrication. Autonomous Robotic Tensegrity (ART) is presented as an extended case study project for developing our methodology. Our computational design system is developed in Unity3D with simulated multiphysics and deep reinforcement learning using Unity’s ML-Agents framework. Topological rules of tensegrity are applied to develop assemblies with actuated tensile members. Single units and assemblies are trained for a series of policies using reinforcement learning in single-agent and multi-agent setups. Physical robotic prototypes are built and actuated to test simulated results.
| Year | 2019 |
| Authors | Hosmer, Tyson; Tigas, Panagiotis |
| Issue | ACADIA 19: UBIQUITY AND AUTONOMY |
| Pages | 16–29 |
| Library link | N/A |
| Entry filename | deep-reinforcement-learning-autonomous-robotic-tensegrity |