Resilient Operation Planning for CubeSat Using Reinforcement Learning


Published Sep 4, 2023
Shuntaro Kuroiwa, Nozomu Kogiso

Abstract

This study proposes an autonomous operation procedure for a CubeSat by applying reinforcement learning based on resilience engineering. Because of its limited communication performance and weak protection against the harsh space environment, the CubeSat requires rapid judgment during every visibility window, based on a sufficient understanding of the satellite's health condition inferred from limited telemetry data. This study first performs a risk analysis using System-Theoretic Process Analysis (STPA) to evaluate the risk scenarios of the CubeSat. To accomplish the missions while avoiding these risk scenarios, reinforcement learning is applied to learn adequate behaviors according to the satellite's situation, such as the temperature and voltage of the onboard battery, the sunlight and eclipse phases, and the mission progress and plan. The validity of the proposed method is illustrated through numerical examples.
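To make the formulation concrete, the sketch below frames the operation-planning problem described above as a small reinforcement-learning environment. It is a minimal, assumed illustration rather than the paper's implementation: the observation follows the abstract (battery temperature and voltage, sunlight/eclipse phase, mission progress), while the action set, battery/thermal dynamics, hazard thresholds, and reward shaping are hypothetical, and Gymnasium with Stable-Baselines3 is only one possible toolchain.

```python
# Minimal sketch of an RL formulation for CubeSat operation planning.
# NOT the paper's implementation: the action set, battery/thermal dynamics,
# hazard thresholds, and reward shaping are assumptions for illustration.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DQN


class CubeSatOpsEnv(gym.Env):
    """Toy operation-planning environment for a single CubeSat."""

    ACTIONS = ("standby", "charge_only", "run_mission", "downlink")  # hypothetical

    def __init__(self, steps_per_orbit: int = 60):
        super().__init__()
        self.steps_per_orbit = steps_per_orbit
        # Observation: [battery temperature (degC), battery voltage (V),
        #               sunlight flag (0/1), mission progress (0..1)]
        self.observation_space = spaces.Box(
            low=np.array([-30.0, 3.0, 0.0, 0.0], dtype=np.float32),
            high=np.array([60.0, 4.2, 1.0, 1.0], dtype=np.float32),
        )
        self.action_space = spaces.Discrete(len(self.ACTIONS))

    def _sunlit(self) -> bool:
        # Assume roughly two thirds of each orbit is in sunlight.
        return (self.t % self.steps_per_orbit) < 2 * self.steps_per_orbit // 3

    def _obs(self) -> np.ndarray:
        return np.array([self.temp, self.volt, float(self._sunlit()), self.progress],
                        dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.temp, self.volt, self.progress = 0, 20.0, 4.0, 0.0
        return self._obs(), {}

    def step(self, action):
        name = self.ACTIONS[int(action)]
        # Crude battery and thermal dynamics (illustrative only).
        self.volt += 0.02 if self._sunlit() else -0.01
        self.volt -= {"standby": 0.0, "charge_only": 0.0,
                      "run_mission": 0.03, "downlink": 0.02}[name]
        self.temp += 0.5 if self._sunlit() else -0.5

        reward = 0.0
        if name == "run_mission":
            gained = min(0.02, 1.0 - self.progress)
            self.progress += gained
            reward += 10.0 * gained
        # Penalise states that a risk analysis such as STPA would flag as hazardous.
        if self.volt < 3.4 or not (-10.0 <= self.temp <= 45.0):
            reward -= 1.0

        self.volt = float(np.clip(self.volt, 3.0, 4.2))
        self.temp = float(np.clip(self.temp, -30.0, 60.0))
        self.t += 1
        terminated = self.progress >= 1.0
        truncated = self.t >= 5 * self.steps_per_orbit
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    model = DQN("MlpPolicy", CubeSatOpsEnv(), verbose=0)
    model.learn(total_timesteps=20_000)
```

Under these assumptions, an agent trained on such an environment would tend to schedule mission execution during sunlight and avoid operations that push the battery past its voltage or temperature limits, which is the kind of risk-aware behavior the proposed procedure aims to learn.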



Keywords

CubeSat, Reinforcement Learning, Operation Planning, STPA, Risk Analysis

Section
Special Session Papers