
Collect data #29

Open

a1wj1 opened this issue Apr 16, 2024 · 13 comments

Comments

@a1wj1

a1wj1 commented Apr 16, 2024

Can the software collect visual image data for reinforcement learning?

@Fdarco
Collaborator

Fdarco commented Apr 16, 2024

Thank you for your interest in LimSim & LimSim++. Currently, LimSim++ saves the panoramic image data in the database; you can find the relevant data in the imageINFO table. However, these are compressed images of size 560×315. If you need the original 1600×900 images, you can obtain the CameraImages.ORI_CAM_FRONT series during the run and save them yourself.
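For what it's worth, saving those frames yourself could look roughly like this (untested sketch; it assumes the CameraImages object is available each step and that ORI_CAM_FRONT is a NumPy array in RGB order — check the actual types in the source):

import os
from PIL import Image

os.makedirs("ori_cam_front", exist_ok=True)

def save_original_front(cam_images, step):
    # ORI_CAM_FRONT is the original-resolution (1600x900) front view
    frame = cam_images.ORI_CAM_FRONT
    Image.fromarray(frame).save(f"ori_cam_front/{step:06d}.png")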

@a1wj1
Author

a1wj1 commented Apr 16, 2024

Glad to hear from you! I am using the LLM version of LimSim++. If GPT-4 is used, is the input to GPT-4 a text description of the BEV image scene? Also, how do I obtain the CameraImages.ORI_CAM_FRONT series?

@Fdarco
Collaborator

Fdarco commented Apr 17, 2024

Yes, if your LLM does not support image input, you can use the text description we provide. In lines 249 to 253 of ExampleVLMAgentCloseLoop.py, you can see how CameraImages are obtained and used. You can replace images[-1].CAM_FRONT with images[-1].ORI_CAM_FRONT. See the function model.getCARLAImage() for more information.
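Concretely, the swap is a one-line change (illustrative only; the surrounding pattern follows ExampleVLMAgentCloseLoop.py, and the exact call signature may differ):

images = model.getCARLAImage()        # returns the CameraImages series (signature assumed)
front_img = images[-1].CAM_FRONT      # compressed 560x315 front view
front_img = images[-1].ORI_CAM_FRONT  # original 1600x900 front view instead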

@a1wj1
Author

a1wj1 commented Apr 17, 2024

Thank you. My LLM just can't read images, so how does your code generate the text description? I saw this in the paper:
LimSim++ extracts road network and vehicle information around your vehicle. This scenario description and task description information is then packaged and passed in natural language to the driver agent.

But in the code, where does this process happen?

@Fdarco
Collaborator

Fdarco commented Apr 17, 2024

In lines 314 to 316 of ExampleLLMAgentCloseLoop.py, you can see how we get the navigation information, action information, and environment information:

navInfo = descriptor.getNavigationInfo(roadgraph, vehicles)
actionInfo = descriptor.getAvailableActionsInfo(roadgraph, vehicles)
envInfo = descriptor.getEnvPrompt(roadgraph, vehicles)

In fact, you can build your own driver agent by modifying ExampleLLMAgentCloseLoop.py directly.
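If it helps, a text-only agent can simply concatenate those three strings into one prompt. A minimal sketch (the descriptor calls are the ones shown above; ask_llm is a hypothetical stand-in for your own LLM client call):

navInfo = descriptor.getNavigationInfo(roadgraph, vehicles)
actionInfo = descriptor.getAvailableActionsInfo(roadgraph, vehicles)
envInfo = descriptor.getEnvPrompt(roadgraph, vehicles)

# One natural-language prompt from the three descriptions
prompt = "\n\n".join([envInfo, navInfo, actionInfo])
decision = ask_llm(prompt)  # hypothetical: replace with your LLM call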

@a1wj1
Author

a1wj1 commented Apr 17, 2024

[screenshot attached]
Thanks for your reply. Excuse me, how do I display the images from the three camera perspectives shown above when running ExampleLLMAgentCloseLoop.py?

@Fdarco
Collaborator

Fdarco commented Apr 17, 2024

ExampleLLMAgentCloseLoop.py does not provide round-view images; you can get camera images from ExampleVLMAgentCloseLoop.py.

@a1wj1
Author

a1wj1 commented Apr 17, 2024

So it should be possible to transfer the image-display code from ExampleVLMAgentCloseLoop.py to ExampleLLMAgentCloseLoop.py?

@Fdarco
Collaborator

Fdarco commented Apr 18, 2024

In fact, there is no big difference between the two in terms of interface calls: you can take the interfaces from the VLM example and use them in the LLM example to get the image information. Note, however, that the VLM example has different runtime requirements; refer to readme.md for how to run it.
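For instance, something along these lines inside the LLM example's main loop (untested sketch; the getCARLAImage call mirrors the VLM example, while the OpenCV display is just an illustrative choice and requires CARLA co-simulation to be enabled):

import cv2

images = model.getCARLAImage()  # same interface the VLM example uses
if images:
    front = images[-1].CAM_FRONT  # compressed 560x315 front view, RGB assumed
    cv2.imshow("CAM_FRONT", cv2.cvtColor(front, cv2.COLOR_RGB2BGR))
    cv2.waitKey(1)  # refresh the window each simulation step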

@a1wj1
Author

a1wj1 commented Apr 18, 2024

When I ran ExampleLLMAgentCloseLoop.py, I had already set the CARLA connection and started CARLA, but no image was displayed. Comparing ExampleLLMAgentCloseLoop.py and ExampleVLMAgentCloseLoop.py, apart from the LLM interface the other parts do not seem very different, but I could not find the key code that displays the image.

@Fdarco
Collaborator

Fdarco commented Apr 18, 2024

Did you set CARLACosim=True when you initialize the model?

# init simulation
model = Model(
    egoID=ego_id, netFile=sumo_net_file, rouFile=sumo_rou_file,
    cfgFile=sumo_cfg_file, dataBase=database, SUMOGUI=sumo_gui,
    CARLACosim=True, carla_host=carla_host, carla_port=carla_port
)

However, I still recommend that you use VLMExample if you want to work with image data.

@a1wj1
Author

a1wj1 commented Apr 18, 2024

Yes, I have set CARLACosim=True.

@Fdarco
Collaborator

Fdarco commented Apr 18, 2024

So, can you run the VLMExample successfully? Running it is a quick way to check that your environment is installed correctly and that the application runs properly, without the VLM having to make any decisions.
