
Suggested integration with VSCode #173

Open
udaya2899 opened this issue Feb 29, 2024 · 1 comment

Comments

@udaya2899

Hi there,

Thanks for the awesome work on this. It's super useful for us and saves us a lot of time. We just set this up today, and currently we ask each developer to run:
bazel run @hedron_compile_commands//:refresh_all

It takes a few minutes the first time it runs. We expect developers to use a VSCode Devcontainer setup or GitHub Codespaces, which we control with the devcontainer.json file; we install the clangd extension and set its arguments there.
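For context, here's a simplified sketch of that devcontainer.json (the extension ID is the standard clangd one; the container name and clangd arguments are just illustrative):

```jsonc
{
  // Illustrative Devcontainer config; the name and arguments are examples only.
  "name": "our-dev-env",
  "customizations": {
    "vscode": {
      "extensions": [
        "llvm-vs-code-extensions.vscode-clangd"
      ],
      "settings": {
        "clangd.arguments": ["--background-index", "--header-insertion=never"]
      }
    }
  },
  // Run the generator once when the container is created.
  "postCreateCommand": "bazel run @hedron_compile_commands//:refresh_all"
}
```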

How do you suggest we set up the generation of compile_commands.json? Off the top of my head, here are some ideas:

  1. Have a shared compile_commands.json that we store in the cloud and download on Devcontainer/Codespace startup. We can generate compile_commands.json when PRs are merged and upload it to GCS (see the sketch after this list). Would this approach work? In a reproducible devcontainer setup like ours, is compile_commands.json deterministic?

  2. What about the live changes developers make, like adding a new dependency or a new library while developing? Should the developer run //:refresh_all again, which is costly and slow? How can we make this seamless for the developer?
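For concreteness, idea 1 might look something like this (the bucket name is hypothetical, and this assumes gsutil is available in both CI and the container):

```sh
# Post-merge CI step: regenerate and upload the shared copy.
bazel run @hedron_compile_commands//:refresh_all
gsutil cp compile_commands.json gs://our-ci-artifacts/compile_commands.json

# Devcontainer/Codespace startup: download it into the workspace root.
gsutil cp gs://our-ci-artifacts/compile_commands.json .
```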

Can you help us understand how you've set this up so that there's no manual run by the developer?

@cpsauer
Contributor

cpsauer commented Mar 1, 2024

Hi, Udaya! Thanks for working through the initial setup. So glad to hear it's useful to you guys :)

We haven't been sharing the generation of compile_commands.json, nor have others that I know of, so I'm afraid I don't have a proven path for you, but here's what comes to mind:

  1. I think the first things to try are the performance tricks in the readme. You may well be willing to, e.g., ditch entries for external sources/headers, and that can really speed things up (sketched after this list).
  2. Reruns should be much faster than the initial run, with pretty much everything cached that can be. Also, if there's already been a build, we can hit lots of its internal caches for header finding (the slow part), so even the initial run can be super fast. If you have Bazel remote caching set up, it might be easier and better to just automatically (and asynchronously) kick off a build that pulls in the remote .d files and generated sources, and then run this tool over them (also sketched below). This path would also make incremental updates fast, which wouldn't be true if you just grafted in a compile_commands.json file. I think it has solved most of the live-development-change problems for other folks; the trickiest remaining part is having the tool gracefully handle code that's mid-edit and in a state that doesn't compile.
  3. I think syncing compile_commands.json directly might pose some additional challenges: the file has the project's absolute directory (which might vary with the user's home directory) baked into it. Perhaps the container avoids this, but if not, you could try replacing the "directory" entries with something relative (i.e., `.`) and seeing if things still work (a one-liner for that is sketched below). I'd happily take that change. The generation should be deterministic enough, but if generated files are missing, it may not find their headers, for example. (It issues a warning but makes a best effort to produce a useful compile_commands.json without them; people often want to run this tool when their code is in a state that doesn't compile.) That is, you might want to run a build first anyway. For synced, git-coupled storage, one additional option would be to put the file in Git Large File Storage (LFS), similar to the GCS approach you mentioned.
  4. For avoiding manual runs: I know some people have played around with this extension, but it has some limitations around new files (see the extension's description). I think most people just run it manually when needed.
  5. If clangd indexing itself gets slow, you might want to look at clangd's remote indexing feature. I haven't used it, but I'm guessing it's used in other Google projects; perhaps it was even added for them.
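On (1), here's a sketch of a project-specific refresh target using the readme's performance options; the target labels are placeholders, so check the readme for the exact attribute names:

```starlark
# BUILD -- sketch only; target labels are placeholders.
load("@hedron_compile_commands//:refresh_compile_commands.bzl", "refresh_compile_commands")

refresh_compile_commands(
    name = "refresh_compile_commands",
    # Restrict generation to the targets you actually edit.
    targets = {
        "//your/app:main": "",
    },
    # Readme performance options: skip entries for external code.
    exclude_headers = "external",
    exclude_external_sources = True,
)
```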
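On (2), the build-then-refresh flow could be kicked off automatically on container start, roughly like this (assuming remote caching is already configured in your .bazelrc):

```sh
# Sketch: warm Bazel's caches (pulling remote .d files and generated sources),
# then regenerate compile_commands.json, all in the background.
(bazel build //... && bazel run @hedron_compile_commands//:refresh_all) &
```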
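And on (3), if you do end up syncing the file, rewriting the baked-in absolute "directory" entries to something relative is a one-liner; this is an untested sketch assuming jq is available:

```sh
# Replace each entry's absolute "directory" with "." to make the file portable.
jq 'map(.directory = ".")' compile_commands.json > compile_commands.json.tmp \
  && mv compile_commands.json.tmp compile_commands.json
```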

Sorry not to have a well-blazed path for you here, but I'm hoping that's enough to get things started! Please do let me know how it goes so we can keep making this better for people.

Cheers! Happy coding!
Chris

P.S. I saw you work at Intrinsic, and smiled! I'd enjoyed reading about them a while back, and I have an interest in robotics and manufacturing, too :)
