
Python Learning Error #8

Open
sadlion opened this issue Mar 29, 2020 · 4 comments

Comments

@sadlion

sadlion commented Mar 29, 2020

I'm running the ML-Agents 3DBall example from Chapter 2 of the book. Unity launches, but nothing responds (the balls just roll), and a connection error occurs on the Python side when training starts.
[Environment]
Windows, Unity 2019.1.14f1, Python 3.6, mlagents 0.8.1, mlagents-envs 0.8.1
[Error]
env = UnityEnvironment(file_name=env_name)
==>
UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Academy's Broadcast Hub is configured correctly
The Agents are linked to the appropriate Brains
The environment and the Python interface have compatible versions.
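Since this timeout is often just the Unity environment being slow to start, one generic workaround is to retry the launch a few times before giving up. The sketch below is illustrative only: the helper `launch_with_retries` is not part of the ML-Agents API, and the `UnityEnvironment` call shown in the docstring is the piece you would plug in.

```python
import time

def launch_with_retries(factory, attempts=3, delay=2.0, retry_on=(Exception,)):
    """Call `factory` until it returns without raising, up to `attempts` times.

    `factory` would be something like:
        lambda: UnityEnvironment(file_name=env_name, worker_id=0)
    Retrying with a short delay helps when Unity is still starting up
    and has not yet opened its communication port.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return factory()
        except retry_on as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Catching only `UnityTimeOutException` (rather than all exceptions) via `retry_on` keeps genuine configuration errors from being silently retried.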


@Kyushik
Contributor

Kyushik commented Mar 30, 2020

UnityTimeOutException: The Unity environment took too long to respond.
This error can occur for a variety of reasons, so pinning down the cause can be a little tricky.
Is what you posted the entire error output? If there is more, it would help if you could attach it.
For now, please check the following:

  • Your Python-side ML-Agents installation (0.8.1) looks correct. Did you also download the Unity ML-Agents package from GitHub as version 0.8.1 to match?
  • Your Academy settings look correct. Are the agents also assigned to a Learning Brain?
  • The Project Settings could also be a cause, but yours appear to be configured correctly.
  • In the Academy, uncheck the Control box next to the brain, switch all of the agents' brains to a Player Brain, press Play inside Unity, and check the Console to confirm the game runs without errors.
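The first checklist item above, matching the Python-side and Unity-side ML-Agents versions, can be sketched as a small helper. This is only an illustration (the function `versions_compatible` is not part of ML-Agents), under the assumption that 0.x releases pair a Python package with same-series Unity assets:

```python
def versions_compatible(python_side: str, unity_side: str) -> bool:
    """Pragmatic check: require the same major.minor series on both sides,
    e.g. mlagents 0.8.1 (pip) paired with ML-Agents 0.8.x Unity assets."""
    def major_minor(version: str):
        # "0.8.1" -> ("0", "8"); ignore the patch component
        return tuple(version.split(".")[:2])
    return major_minor(python_side) == major_minor(unity_side)
```

For example, `versions_compatible("0.8.1", "0.8.1")` passes, while pairing a 0.8.1 Python install with 0.10.1 Unity assets does not.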

@sadlion
Author

sadlion commented Apr 1, 2020

Setting up the environment with virtualenv did not seem to work, but with a conda virtualenv from Anaconda, training by the agent seems to work even with Control checked. However, a convert error occurs when saving. I'll look into this part too and update if I can. (Tested with ML-Agents version 0.10.1, switching the model to GridWorld; version 0.8.1 not tested yet.)

Windows, Unity 2018.1, Anaconda Python 3.7, mlagents 0.10.1, mlagents-envs 0.10.1

Reward: 0.987. Training.
INFO:mlagents.trainers: first-grid: GridWorldLearning: Step: 14000. Time Elapsed: 130.803 s Mean Reward: -0.261. Std of Reward: 0.937. Training.
...
Converting ./models/first-grid-0/GridWorldLearning/frozen_graph_def.pb to ./models/first-grid-0/GridWorldLearning.nn
File "C:\anaconda3\envs\ml-agents-10\Scripts\mlagents-learn-script.py", line 11, in <module>
load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')()
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\learn.py", line 417, in main
run_training(0, run_seed, options, Queue())
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\learn.py", line 255, in run_training
tc.start_learning(env)
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\trainer_controller.py", line 219, in start_learning
self._export_graph()
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\trainer_controller.py", line 129, in _export_graph
self.trainers[brain_name].export_model()
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\trainer.py", line 152, in export_model
self.policy.export_model()
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\tf_policy.py", line 230, in export_model
tf2bc.convert(frozen_graph_def_path, self.model_path + ".nn")
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1552, in convert
i_model, args
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1397, in process_model
process_layer(node, o_context, args)
File "c:\project\unit\ml-agents-0.10.1\ml-agents\mlagents\trainers\tensorflow_to_barracuda.py", line 1220, in process_layer
assert all_elements_equal(input_ranks)

@sadlion
Author

sadlion commented Apr 4, 2020

Test Environment

  • Unity 2019.1.14f1
  • Python : 3.6 (anaconda)
  • Example : 3DBall
  • ml-agents-0.8.1

It works fine when I press Play without checking Control, but when I check Control and train after building, or press Play after running mlagents-learn, the following error occurs.
The brain code is C# and the training code is Python; does that matter?

-- Unity console
UnityAgentsException: The Communicator was unable to connect. Please make sure the External process is ready to accept communication with Unity.
MLAgents.Batcher.SendAcademyParameters (MLAgents.CommunicatorObjects.UnityRLInitializationOutput academyParameters) (at Assets/ML-Agents/Scripts/Batcher.cs:91)
MLAgents.Academy.InitializeEnvironment () (at Assets/ML-Agents/Scripts/Academy.cs:345)
MLAgents.Academy.Awake () (at Assets/ML-Agents/Scripts/Academy.cs:250)

-- Python code (run from Jupyter)
c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents\mlagents\trainers\learn.py:141: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
trainer_config = yaml.load(data_file)
INFO:mlagents.envs:Start training by pressing the Play button in the Unity Editor.
Process Process-1:
Traceback (most recent call last):
File "C:\Anaconda3\envs\ml-agents-8\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Anaconda3\envs\ml-agents-8\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\subprocess_environment.py", line 53, in worker
env = env_factory(worker_id)
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents\mlagents\trainers\learn.py", line 192, in create_unity_environment
base_port=start_port
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\environment.py", line 76, in __init__
aca_params = self.send_academy_parameters(rl_init_parameters_in)
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\environment.py", line 538, in send_academy_parameters
return self.communicator.initialize(inputs).rl_initialization_output
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\rpc_communicator.py", line 80, in initialize
"The Unity environment took too long to respond. Make sure that :\n"
mlagents.envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
The environment does not need user interaction to launch
The Academy's Broadcast Hub is configured correctly
The Agents are linked to the appropriate Brains
The environment and the Python interface have compatible versions.
Traceback (most recent call last):
File "C:\Anaconda3\envs\ml-agents-8\lib\multiprocessing\connection.py", line 312, in _recv_bytes
nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\subprocess_environment.py", line 38, in recv
response: EnvironmentResponse = self.conn.recv()
File "C:\Anaconda3\envs\ml-agents-8\lib\multiprocessing\connection.py", line 250, in recv
buf = self._recv_bytes()
File "C:\Anaconda3\envs\ml-agents-8\lib\multiprocessing\connection.py", line 321, in _recv_bytes
raise EOFError
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Anaconda3\envs\ml-agents-8\Scripts\mlagents-learn-script.py", line 11, in <module>
load_entry_point('mlagents', 'console_scripts', 'mlagents-learn')()
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents\mlagents\trainers\learn.py", line 262, in main
run_training(0, run_seed, options, Queue())
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents\mlagents\trainers\learn.py", line 88, in run_training
keep_checkpoints, lesson, env.external_brains,
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\subprocess_environment.py", line 173, in external_brains
return self.envs[0].recv().payload
File "c:\d_drive\30.project\dev_folder\dev_python36\rl\unity_ml_agents-master\ml-agents-0.8.1\ml-agents-envs\mlagents\envs\subprocess_environment.py", line 41, in recv
raise KeyboardInterrupt
KeyboardInterrupt
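(Editor's aside: the YAMLLoadWarning near the top of this log is unrelated to the timeout, but it is easy to silence. Newer PyYAML versions want an explicit loader; a minimal sketch, assuming PyYAML is installed, with an illustrative config string:)

```python
import yaml

# Example trainer-config fragment (illustrative, not from the book)
config_text = "batch_size: 64\nbuffer_size: 1024\n"

# Instead of yaml.load(config_text), which warns and is unsafe by default,
# pass an explicit safe loader:
trainer_config = yaml.load(config_text, Loader=yaml.SafeLoader)

# Equivalent shorthand:
trainer_config = yaml.safe_load(config_text)
```

`safe_load` only constructs plain Python objects (dicts, lists, scalars), which is all a trainer config needs.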

@Kyushik
Contributor

Kyushik commented Apr 4, 2020

Checking Control and then building the environment before training is indeed the correct process! The Unity code is C# and the training code is Python, but ML-Agents' role is precisely to let these two communicate, so it should work regardless. Note that when an error occurs in the Unity console, the environment opens but then just sits there without progressing.
Did you check the Unity-side error using a development build? It looks like something in the communication setup is wrong, so please check that the agents are assigned to a Learning Brain. Alternatively, if you send the Unity environment code you are testing to my email, I'll take a look myself: kyushikmin@gmail.com!
