This project teaches a 2D ragdoll to (almost) walk using reinforcement learning.
You can view the progress at different stages of training with the buttons above. You can also throw balls by clicking the animation or using the buttons.
The agent can move its limbs in a realistic range of motion, it can feel the position of its limbs, and its goal is to hold its head upright and move to the right. The dark outlines appear when the agent grips the floor, since I found walking was slippery otherwise. The balls provide obstacles.
- Reward: The agent is rewarded for moving to the right, keeping its head above its legs, and not bending its limbs too much
- Actions: The agent can power motors that rotate each limb within a certain range of motion
- State: The agent can "see" most things about itself: each limb's relative position, global position, rotation, linear velocity, angular velocity, and orientation, as well as each joint's angle, speed, and motor speed
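The reward signal above could be sketched as a single scoring function. This is an illustrative sketch only: the field names (`torsoVelocityX`, `headY`, `legY`, `jointAngles`) and the weights are assumptions, not the project's actual code.

```javascript
// Hypothetical reward function matching the description above.
// Field names and coefficients are illustrative assumptions.
function computeReward(state) {
  // Reward rightward motion of the torso.
  const progress = state.torsoVelocityX;
  // Reward keeping the head above the legs.
  const posture = state.headY > state.legY ? 0.5 : -0.5;
  // Penalize joints bent far from their neutral angle.
  const bendPenalty =
    state.jointAngles.reduce((sum, a) => sum + Math.abs(a), 0) * 0.05;
  return progress + posture - bendPenalty;
}
```

In practice the relative weights of these terms matter a lot: too much progress reward and the agent lunges and falls, too much posture reward and it stands still.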
How does it work?
This uses reinforcement learning to teach the agent to walk. This is a branch of machine learning targeted at controlling systems over time, such as a system of limbs or a self-driving car. The agent is defined in 2D with a certain strength and range of limb movement. It then explores moving its limbs and finds policies that maximise its reward, which corresponds to moving right, keeping its head up, and not bending its limbs without need.
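The explore-and-learn cycle described above follows the standard agent-environment loop. Here is a minimal sketch; the `env`, `agent`, and method names are illustrative assumptions, not the project's actual API.

```javascript
// Generic reinforcement-learning episode loop (names are illustrative).
function runEpisode(env, agent, maxSteps) {
  let state = env.reset();
  let totalReward = 0;
  for (let t = 0; t < maxSteps; t++) {
    const action = agent.act(state);      // e.g. motor speeds for each joint
    const { nextState, reward, done } = env.step(action);
    agent.remember(state, action, reward, nextState, done); // store for replay
    totalReward += reward;
    state = nextState;
    if (done) break;                      // e.g. the ragdoll fell over
  }
  return totalReward;
}
```

The stored transitions are what the agent later trains on, so exploration (trying imperfect actions) and learning (improving the policy from the stored results) can proceed at different paces.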
Training is done offline in TensorFlow.js. The algorithm is Deep Deterministic Policy Gradients (DDPG) with prioritized experience replay.
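Prioritized experience replay samples stored transitions in proportion to how surprising they were (their temporal-difference error), rather than uniformly. Below is a simplified sketch of proportional sampling, using a linear scan instead of the sum-tree usually used for efficiency; class and method names are assumptions for illustration.

```javascript
// Simplified proportional prioritized replay buffer.
// A real implementation would use a sum-tree and importance-sampling
// weights; this sketch only shows the sampling idea.
class PrioritizedReplay {
  constructor(alpha = 0.6) {
    this.alpha = alpha;     // how strongly priorities skew sampling
    this.items = [];        // stored transitions
    this.priorities = [];   // |TD error| per transition
  }
  add(transition, tdError) {
    this.items.push(transition);
    this.priorities.push(Math.abs(tdError) + 1e-6); // keep nonzero
  }
  sample() {
    // Sample index i with probability proportional to priority^alpha.
    const weights = this.priorities.map((p) => Math.pow(p, this.alpha));
    const total = weights.reduce((a, b) => a + b, 0);
    let r = Math.random() * total;
    for (let i = 0; i < weights.length; i++) {
      r -= weights[i];
      if (r <= 0) return { index: i, transition: this.items[i] };
    }
    return { index: weights.length - 1, transition: this.items[this.items.length - 1] };
  }
}
```

Transitions with a large TD error (where the critic's prediction was badly wrong) are replayed more often, which tends to speed up learning on rare but informative events like the ragdoll being hit by a ball.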