In many cases, an assembly task is simply the reverse execution of the corresponding disassembly task. During assembly, the object being assembled passes consecutively from state to state until it is complete, and the set of possible movements becomes progressively more constrained. Based on the observation that autonomous learning of physically constrained tasks can be advantageous, we exploit information obtained while learning disassembly to carry out assembly. For autonomous learning of a disassembly policy, we propose hierarchical reinforcement learning, where learning is decomposed into high-level decision-making and an underlying lower-level intelligent compliant controller that exploits the natural motion in a constrained environment. During the reverse execution of the learned disassembly policy, the motion is further optimized by means of an iterative learning controller. The proposed approach was verified on two challenging tasks: a maze learning problem and autonomous learning of inserting a car bulb into its casing.
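To illustrate the iterative-learning-control idea mentioned above, refining a feedforward motion over repeated executions of the same trajectory, here is a minimal P-type ILC sketch on a toy first-order plant. The plant model, learning gain, and reference trajectory are illustrative assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Toy first-order plant: x[t+1] = A*x[t] + B*u[t], output y[t] = x[t+1].
# Parameters are illustrative assumptions, not from the paper.
A, B = 0.3, 1.0

def run_trial(u):
    """Execute one motion trial with feedforward input u; return the output trajectory."""
    x, y = 0.0, np.empty_like(u)
    for t in range(len(u)):
        x = A * x + B * u[t]
        y[t] = x
    return y

def ilc(reference, trials=50, gain=0.5):
    """P-type iterative learning control: u_{k+1} = u_k + gain * e_k."""
    u = np.zeros_like(reference)      # start with no feedforward knowledge
    for _ in range(trials):
        e = reference - run_trial(u)  # error measured over the whole trial
        u = u + gain * e              # correct the input using last trial's error
    return u

ref = np.sin(np.linspace(0.0, np.pi, 20))  # desired trajectory for the motion
u_star = ilc(ref)
final_err = np.max(np.abs(ref - run_trial(u_star)))
print(final_err)  # tracking error shrinks over repeated trials
```

The update exploits the fact that the same motion is repeated: each trial's tracking error is fed back into the next trial's feedforward input, so the controller improves without an explicit plant model, which is why ILC suits the repeated reverse execution of a learned disassembly motion.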