Paper Reading AI Learner

Towards Learning Rubik's Cube with N-tuple-based Reinforcement Learning

2023-01-28 11:38:10
Wolfgang Konen

Abstract

This work describes in detail how to learn and solve the Rubik's cube game (or puzzle) in the General Board Game (GBG) learning and playing framework. We cover the cube sizes 2x2x2 and 3x3x3. We describe in detail the cube's state representation, how to transform it with twists, whole-cube rotations and color transformations, and explain the use of symmetries in Rubik's cube. Next, we discuss different n-tuple representations for the cube, how we train the agents by reinforcement learning, and how we improve the trained agents during evaluation by MCTS wrapping. We present results for agents that learn Rubik's cube from scratch, with and without MCTS wrapping, and with and without symmetries, and show that both MCTS wrapping and symmetries increase computational costs but at the same time lead to much better results. We can solve the 2x2x2 cube completely, and the 3x3x3 cube in the majority of cases for scrambled cubes up to p = 15 (QTM). We cannot yet reliably solve 3x3x3 cubes with more than 15 scrambling twists. Although our computational costs are higher with MCTS wrapping and with symmetries than without, they are still considerably lower than in the approaches of McAleer et al. (2018, 2019) and Agostinelli et al. (2019), who provide the best Rubik's cube learning agents so far.
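The abstract's idea of a cube state that is transformed by twists can be sketched with a minimal example: a 2x2x2 cube encoded as 24 sticker colors, with one twist realized as a permutation of sticker positions. This is purely an illustrative sketch under our own assumptions (sticker indexing, names like `U_TWIST`); GBG itself is Java-based and the paper's actual state encoding may differ.

```python
# Illustrative sketch only: a 2x2x2 cube as 24 sticker colors, one twist (U)
# as a permutation of sticker positions. Indexing and names are assumptions
# for illustration, not the paper's GBG encoding.

# Sticker indexing (4 per face): U=0-3, L=4-7, F=8-11, R=12-15, B=16-19, D=20-23.
SOLVED = tuple(i // 4 for i in range(24))  # color of each sticker = its face index

# src[i] = position whose sticker lands on position i after a clockwise U twist
U_TWIST = [2, 0, 3, 1,          # U face rotates in place
           8, 9, 6, 7,          # L top row <- F top row
           12, 13, 10, 11,      # F top row <- R top row
           16, 17, 14, 15,      # R top row <- B top row
           4, 5, 18, 19,        # B top row <- L top row
           20, 21, 22, 23]      # D face unchanged

def twist(state, perm):
    """Apply a twist given as a permutation of sticker positions."""
    return tuple(state[perm[i]] for i in range(24))

s = twist(SOLVED, U_TWIST)      # one quarter twist changes the side top rows
assert s != SOLVED
for _ in range(3):              # three more quarter twists restore the cube
    s = twist(s, U_TWIST)
assert s == SOLVED
```

Whole-cube rotations and color transformations mentioned in the abstract can be expressed in the same way, as permutations composed with a relabeling of colors, which is what makes exploiting symmetries cheap.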


URL

https://arxiv.org/abs/2301.12167

PDF

https://arxiv.org/pdf/2301.12167.pdf
