
WSL2 support in Cortex-Debug. Discussion and Strategy #467

Open
haneefdm opened this issue Aug 16, 2021 · 69 comments

Comments

@haneefdm
Collaborator

WSL2 is next on my list. I am trying to set up a Windows machine (my previous WSL2 setup seems to have gotten corrupted). I would like people to subscribe to this issue, comment, and help test. There are already several issues and workarounds related to this, and I would like to consolidate the discussion here.

First, comments are welcome on how it should work.

@haneefdm haneefdm pinned this issue Aug 16, 2021
@haneefdm
Collaborator Author

haneefdm commented Aug 16, 2021

References: Issues #451, #402, #361, #66 and PR #328. There may be more.

@s13n

s13n commented Aug 19, 2021

Just my $0.02:

I would want to be able to work in either of those two setups:

  1. Building takes place in the WSL2 subsystem directly. Here, the cross tools (arm-none-eabi-gdb etc.) are installed in the WSL2 subsystem. Since there doesn't seem to be any USB passthrough into WSL2, the debugger, which runs inside the WSL2, needs to establish contact with the probe that is connected to the Windows host using networking.

  2. Building takes place in a docker container running within the WSL2 subsystem. You would typically be running Docker Desktop, which on Windows offers support for WSL2. In a nutshell, a Linux container with the cross tools runs within the WSL2 subsystem, and VScode runs on Windows in Dev Container mode. Again, since gdb runs inside the container, the network must be used to contact the debug probe.

The second approach may seem more complex, but it offers the advantage that the tool environment for a project can be shrink-wrapped in a ready-made container that is described concisely with a Dockerfile that can be maintained and versioned in its own git repository. Multiple such containers can be supported in parallel for different projects, without them getting in each other's way.
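Such a shrink-wrapped environment can be quite small. A minimal Dockerfile sketch for the cross tools (the package names assume a Debian/Ubuntu base image and the distro-packaged Arm toolchain; adjust to taste):

```
FROM ubuntu:22.04

# Cross compiler, debugger and build tools for Arm Cortex-M targets.
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc-arm-none-eabi \
        gdb-multiarch \
        make cmake git \
    && rm -rf /var/lib/apt/lists/*
```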

The problem of how GDB talks to the probe is a multifaceted one, that depends a lot on the actual probe. It will have to be a network connection of some sort. Some example scenarios are:

  1. You have a probe with a network port, for example a SEGGER JLink PRO. In this case, there's probably no difference in the communication setup compared to the non-WSL2 case. With SEGGER, you install the Linux version of the SEGGER JLink software in the Dev Container (second case above) or WSL2 (first case above). No special software is needed on the Windows host.

  2. You have a SEGGER JLink connected to the Windows host via USB. In this case you can use the JLink remote server included in the SEGGER JLink software installation. Thus, you have to install the Windows version on your host. You only need to run the remote server on the host. Within WSL2 or within the Dev Container, you need the Linux version of the SEGGER software. The JLink GDB server is run in WSL2 or the Dev Container, respectively, and it is configured to connect to the remote server via IP. See the SEGGER docs for that. This scenario can also be used with a SEGGER probe connected to a different computer on the network. If your 3rd party probe can be reflashed to act as a JLink probe, which includes numerous probes integrated on an evaluation board, this should work, too.

  3. You run the GDB server on the Windows host, and have GDB inside WSL2 or the Dev Container contact this GDB server using IP. You might have to run the GDB server manually, unless a way can be found to start it from within the Dev Container. This should work even without a JLink based probe. In WSL2 or the Dev Container, no special probe-related software should be needed.
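Scenario 3 can already be approximated with Cortex-Debug's existing external server type, where the gdb-server is started on the Windows host by hand and gdb inside WSL2 or the container connects out to it. A sketch, with placeholder IP address, port, and ELF path:

```
{
    "type": "cortex-debug",
    "request": "launch",
    "name": "Attach via external gdb-server on the host",
    "cwd": "${workspaceFolder}",
    "executable": "./build/firmware.elf",
    "servertype": "external",
    "gdbTarget": "192.168.1.10:2331"
}
```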

The first step would be to offer information on how to configure those scenarios properly, i.e. to describe what should work or what shouldn't. In this step it would be acceptable having to start some server manually, instead of having everything happening automatically when the debug button is pressed. It would also be acceptable to enter IP addresses or other settings manually depending on the local setup.

The second step would be to automate this stuff as much as possible.

Does this help you, @haneefdm ?

@haneefdm
Collaborator Author

Does this help you, @haneefdm ?

@s13n Oh, helps a LOT, and thank you for such detail. And, a lot to think about over the weekend.

Would you agree with the following?

  1. Hard constraint: gdb should run wherever the source is compiled, and ideally this would be in the WSL2/Linux environment. Otherwise, path names embedded in the ELF file will not match. Messy, but it can be corrected with gdb source paths.
  2. Hard constraint: the gdb-server should run wherever the HW/probe is. This is to accommodate all (most) gdb-servers. The J-Link PRO's remote-server is an exception, but in truth, the real probe SW is running where the probe is, right?
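For reference, the source-path correction mentioned in constraint 1 can be done with gdb's substitute-path mechanism; a sketch with made-up paths (the first path stands in for whatever prefix is embedded in your ELF):

```
# Remap the compile-time source prefix to the local checkout
set substitute-path /build/ci/firmware /home/user/firmware
# Alternatively, add a directory to gdb's source search path
directory /home/user/firmware/src
```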

@s13n

s13n commented Aug 19, 2021

I would be able to live with constraint 1, but perhaps others will disagree.

I think constraint 2 is a bit too restrictive. I would say GDB server should run either where the probe is, or where GDB is. Supporting a third location would be overkill. This also supports the JLink remote server. One instance where it matters is when you use a JLink with network interface, for example the JLink Pro. You want to run the JLink GDB server where the GDB is, otherwise you end up installing the SEGGER software on both the host and in the dev container.

Of course, with a USB connected probe, you have to install the respective software on the host.

@haneefdm
Collaborator Author

I added Constraint 2 because of the physical interface (USB) the HW connects to. This is more of a constraint to me as I have to find a solution that works in that way. And, I am not saying supporting a 3rd location at all -- sorry, if my words implied that. To summarize, the gdb-server and the HW under debug are always on the same machine, attached at the hip, so to say.

It is also a generalization, beyond Windows+WSL2.

@s13n

s13n commented Aug 20, 2021

Well, USB isn't the only physical interface that is supported by probes, it is only the most common one (by far). Why would you restrict yourself like that? What do you gain?

@haneefdm
Collaborator Author

I am not restricting myself to it. It is a reality that I am stating. Once the server is not local, then it is remote. There is no in-between and I don't see WSL as an in-between thing. Maybe this is where I am wrong. I really don't care how the gdb-server connects to the HW.

We have the following scenarios:

  1. gdb-server runs where VSCode/gdb runs. Great, we already do that
  2. gdb-server runs on another machine. Okay, not great. TCP port selection and launching of the server are not automatic, and until recently, SWO was not supported. This works 100%, but people are not happy with it.
  3. gdb-server cannot run locally in some cases which is where the WSL situation comes in but it degenerates to scenario 2 above.

It is scenarios 2 and 3 that we are trying to address, to make them look almost like scenario 1. Note that we NEVER talk to the gdb-server, which is why we don't care where it lives. 90+% of the time TCP ports are used, and sometimes serial ports. But someone has to launch it. We never had to worry about whether the gdb-server uses USB or some other communication mechanism.

The reason GDB is run where the compiling is done is because of pathnames embedded in the elf file. If these are not right, then breakpoints don't work, stack traces will not have references back to source code, etc.

One thing we did not talk about is where is VSCode running? In my head, gdb and VSCode are running on the same machine. One reason is that all communication with GDB happens over stdio, while gdb itself may talk to the gdb-server over some connection (local or remote). Another is that this is the model GDB has chosen and has worked for over 3 decades.

Btw, the motherboard failed on the PC where I used to have WSL installed. It was my only PC. I ordered a new one and am waiting. I normally do my testing in a VM, but here that gets convoluted: a Mac running Windows in a VM, which in turn hosts WSL2. So, no experiments until next week.

@s13n

s13n commented Aug 20, 2021

Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.

We are talking about a scenario where VS Code runs on Windows in remote mode. The remote OS is either the WSL2 subsystem directly, or a docker container running within WSL2. The VS Code Server runs there, the source code resides there, and the cross tools including GDB run there.

The WSL2 case has the feature that you can launch Windows software from within the WSL2 subsystem. AFAIK you can't do this from within a docker container, but maybe the fact that VS Code runs on Windows provides a way to start some process there, even though the actual debugging takes place in the remote OS under control of the VS Code Server. I have no idea if VS Code helps you with that.

I am relatively new with the dev container way of working with VS Code, but I already prefer working in this way, due to the simplicity of maintaining a build environment that is specific to a project. You can also run the same container on a remote machine, for example your build server, rather than locally within WSL2, and you shouldn't notice much of a difference.

@haneefdm
Collaborator Author

Have a look at https://code.visualstudio.com/docs/remote/remote-overview for some overview of how VS Code is used in remote mode, which includes WSL2.

Okay, that is a very different model. I was aware of that. I will look into it while I wait, but if you look at the repo for it, it has 802 issues and no commits since 5/11/2021. It is a package containing 3 extensions; the last commit in those repos was 4 months ago.

Some of it is the inverse model of what I was thinking.

One thing that is important to me (selfishly) is how to debug this extension itself. Without that it would be horrible.

I worked a bit on the MS C++ debug adapter, and it was very difficult to do any cross-platform debugging of the extension. It was like a one-man circus show, juggling multiple VSCodes running on different machines. Both VSCode and Visual Studio were needed. I can't even explain.

@jdswensen

@haneefdm, I think @s13n is leading you down the correct path here. I'm not familiar enough with VS Code's internal workings to tell you what is technically possible to solve the problem, but I can offer my use case and viewpoint.

I'm currently working on setting up a development environment at work for using Zephyr. Our build pipeline will be all Linux based tools, but our corporate issued computers are all Windows machines. The setup differences for Zephyr between Windows and Unix based systems is painful. Ideally, I could just define a Docker image that has all the necessary build and dev tools in it and use it everywhere instead of depending on other devs to properly set up a bunch of prerequisite software. If I need to have them install a couple things like probe drivers, I can live with that. It's better than maintaining documentation on installing an entire dev environment and manually setting path variables in a locked down Windows machine.

Something to consider is that even though WSL2 and Windows are running on the same physical machine, for the purpose of this issue they are effectively two different systems. WSL2 is a lightweight VM and if it properly supported USB passthrough the technical implementation might not be so complex. However, WSL2 does not currently support USB passthrough.

Microsoft's docs on Remote Development and Codespaces might help explain remote workspaces better.

@wbober

wbober commented Aug 27, 2021

Good input here!

Indeed, the configuration I'm interested in is:

  • the host is a Windows machine
  • the whole build system including the compiler, source code, and gdb is in the WSL2 container. Like @jdswensen, we have a Zephyr-based build system.
  • the VSC runs in remote mode, i.e., the VSC UI is on the host but all calls that involve system operations are done in the container. What is nice about this mode is that you shouldn't need any changes to the extension code. We're developing an extension and all functions work fine in remote mode.
  • the SEGGER tools are installed on the host;

As @haneefdm mentioned you can debug from WSL2 even today but the limitations are:

  • you need to start the JLink GDB server manually
  • you need to figure out the IP number of the host manually

This could be easily solved since, as @s13n mentions, you can start any Windows process from within WSL2. From what I have checked, this is a modification in the WSL2 kernel: when you try to execute a binary ending in .exe, the relevant syscall is captured and passed to the host. This means that we can launch the gdb server on the host and configure gdb to connect to the correct remote (host) port.

The whole thing is essentially a workaround for the lack of USB passthrough, which won't come soon, if ever. Since this can't be done with Docker, perhaps we should tackle these issues separately?
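The interop trick above can be sketched in shell. Treat this as an illustration, not a tested recipe: JLinkGDBServerCL.exe is assumed to be on the Windows PATH, and the WSL detection simply looks for "microsoft" in the kernel string:

```shell
#!/bin/sh

# Return success if the kernel string looks like WSL. Reads /proc/version
# by default; accepts a file argument so the check can be exercised on
# sample data.
is_wsl() {
    grep -qi microsoft "${1:-/proc/version}"
}

# Start the SEGGER gdb-server on the Windows host via WSL interop:
# invoking a *.exe from inside WSL runs it on the Windows side.
start_windows_gdb_server() {
    if is_wsl; then
        JLinkGDBServerCL.exe -port 2331 -nolocalhostonly &
    else
        echo "not running under WSL; start the gdb-server manually" >&2
        return 1
    fi
}
```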

@haneefdm
Collaborator Author

haneefdm commented Aug 27, 2021

Thank you all. My new PC is finally here. Setting it up right now and then I will be able to try stuff out myself on what works well.

I am sure I can figure it out, but do you know how the client (docker or WSL) env can know what the host IP is? I was told to look in /etc/resolv.conf, but that doesn't look right, at least for WSL2 -- especially if the client is in bridged mode or the host is using a VPN.

@s13n

s13n commented Aug 28, 2021

When using Docker Desktop for Windows, you can have the host's IP address resolved from within the container by using host.docker.internal as the DNS name.

See https://docs.docker.com/desktop/windows/networking/

@haneefdm
Collaborator Author

Oh, that is super nice with Docker. Thanks @s13n

@wbober

wbober commented Aug 30, 2021

@haneefdm I think looking into /etc/resolv.conf is the correct way. It does work for me:

[screenshot showing the contents of /etc/resolv.conf]

@haneefdm
Collaborator Author

@wbober, Thanks. Does that work if the Windows host is using a VPN? Don't you see many entries in /etc/resolv.conf? The solution I am thinking of may not need to know the host IP at all. VSCode may help in this regard.

@wbober

wbober commented Aug 31, 2021

@wbober, Thanks. Does that work if the Windows host is using a VPN? Don't you see many entries in /etc/resolv.conf? The solution I am thinking of may not need to know the host IP at all. VSCode may help in this regard.

I don't see much of a difference, i.e., I have the same contents in the resolv.conf file.
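For what it's worth, the lookup can be scripted. The helper name below is mine, and it assumes the default NAT-mode WSL2 networking where the first nameserver entry is the Windows host; a VPN or a custom-generated resolv.conf can invalidate that assumption:

```shell
# Print the first nameserver entry. Defaults to /etc/resolv.conf, but
# accepts a file argument so the function can be tested on sample data.
get_host_ip() {
    awk '/^nameserver/ { print $2; exit }' "${1:-/etc/resolv.conf}"
}
```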

@GitMensch
Contributor

GitMensch commented Sep 17, 2021

1. Hard constraint: gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment. Or else path-names in the ELF file will be messed up. Messy, but can be corrected with [gdb source-paths](https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html).

I "vote" against that constraint. Just allow setting source-path directly in the launch configuration and everything is fine.

@haneefdm
Collaborator Author

Technically it is not a hard constraint for me. Of course, you can use source paths just like you can today. It has to do with the client-server VSCode architecture. This extension and the GDB are attached at the hip. That is the true hard constraint for me.

[architecture diagram from the VS Code remote development docs]

https://code.visualstudio.com/docs/remote/remote-overview.

As things stand, in one incarnation, Cortex-Debug would be classified in the "Remote OS" box (WSL, Docker, etc.) while the GUI itself would be in the "Local OS" box. That picture is not totally applicable to what we are doing, btw. Especially for where the 'Application is running'. You can also see where the Source code box is.

That little green box on the bottom right of the picture above is GDB.

I don't even know if that arch. is feasible for me but it is a start as a lot of groundwork has already been laid out.

@GitMensch
Contributor

That only applies if you use VS Code Server, and as those binaries are non-free, I don't use that. That's likely the reason why I commonly think of "Local OS with source and GDB", attached to "GDB Server with the process". This "simple" scenario has also worked quite well for years for most setups.
Note: when running on Windows, MSYS2 provides a gdb-multiarch.exe, so the "Debugger" part is solved (objdump is not).

@haneefdm
Collaborator Author

and as those binaries are non-free I don't use that.

Which binaries? The VSCode server(s)? And free as in Open Source or some other meaning (costs)?

In VSCode's mind, 'Local OS' means the host running the UI. To me, 'Local OS' also meant "Local OS with source and GDB", but I had to teach myself a different way of thinking. VSCode is my host, so I play by the host's rules and terminology. I have to go back and edit all my comments to make sure.

@GitMensch
Contributor

Yes, the vscode servers. It is all about freedom, not price. This actually leads to the vscode server not running everywhere I'd like it to for remote debugging (I'm not sure if it actually works in every distro one can install in WSL).
... actually, the client part in vscode is also closed source, and the "gratis" extensions needed for that to work are only licensed to be used on "Visual Studio Code binaries" (= the ones provided by MS, where you are not even allowed to distribute your own copies). And even from a practical view: those binaries are not available for all GNU/Linux distros where vscode actually runs, which is the reason that I only use binaries that are as free as vscode [the main source], nowadays mainly VSCodium.

@GitMensch
Contributor

Note: in VSCode's mind, "Local OS" is also something that runs vscode in a browser: it is actually just where the UI runs.

@TorBlackbeard
Contributor

TorBlackbeard commented Oct 8, 2021

My team and I have been using the mentioned setup* for ½ year now, and it's great.
(* vscode running in windows + 'remote' plugin. compiler + jlinkgdbserver running on linux side)
We use a Docker setup that uses the exact docker container that the build server uses (test what you fly, fly what you test).
We also use an Ubuntu-based WSL2 setup, with compilers manually installed.
(Choosing between the two is mainly a question of ergonomics: mapped drives, network firewalling by McAfee, etc.)

I'm on Segger JLink tools, so I can run the Segger JLink gdb-server in Linux, and I specify an IP (of my Windows host) where I have a JLink remote server running. This is NOT the same as running the gdb-server on Windows.

        "type": "cortex-debug",
        "servertype": "jlink",
        "serverpath": "JLinkGDBServerCLExe", <--- this *is* the linux executable
        "ipAddress": "172.20.15.135", <-- connects to "jlink-remote-server" on this machine

Annoyances:
I have to write the IP directly. I can't write "host.docker.internal". I tried some ways to get it indirectly via something like "dig +short host.docker.internal", but so far no luck. ${env:HOST_IP} works, and I don't mind doing
export HOST_IP=$(dig +short host.docker.internal) in a terminal when my IP changes on the Windows side, but that environment does not affect the VSCode environment, so it does not work. Any ideas greatly appreciated.

This is really minor:
On Ubuntu-based WSL2 (the easiest to install, because it's in the official Windows Store), the gdb debugger is called arm-none-eabi-gdb, as expected by cortex-debug.
On Fedora (the company default for docker containers for some reason) one installs gdb-multiarch, and the command is just 'gdb', so I need a "gdbPath": "gdb". I hope Fedora and Debian converge at some point, so I can get rid of this difference. (Some team members make a symlink for arm-none-eabi-gdb; others live with a dirty file in git.)

/T

@haneefdm
Collaborator Author

@andyinno Can you try the tools from the command line, and use gdb from the command line as well? You can see the exact command-line options used in the Debug Console.

Btw, I have to remove your comment as this thread is not for issue submissions or asking for help. Please re-open a new issue and someone might come along and help you. Once you submit a new issue, I will remove your comment from here.

If you want to tell us how to implement the remote/WSL debug then this is the right place. You are doing something this tool was not designed for -- if it works, great.

@lagerholm

Maybe this:
https://www.xda-developers.com/wsl-connect-usb-devices-windows-11/
https://www.elevenforum.com/t/connecting-usb-devices-to-wsl.2514/
will change some of the scope.
Not tested yet so I haven't verified that it actually works to have the probes connected directly to WSL2 with USB.

@haneefdm
Collaborator Author

@lagerholm Thank you so much for the info. This is great.

@DanieleNardi

DanieleNardi commented Nov 15, 2021

Hello,
I tried to use Cortex-Debug in WSL2, using the usbipd-win tool to connect a host-connected J-Link to the WSL2 environment. The J-Link connection looks fine (at first look, it seems the J-Link drivers are required on both the host and WSL2 sides, aligned to the same version), but the extension doesn't stop at main.
launch.json looks like the following:

"name": "DVC TopRow Emerald Inventory",
"type": "cortex-debug",
"request": "launch",
"cwd": "${workspaceFolder}",
"armToolchainPath": "/opt/gcc-arm-none-eabi/bin",
"executable": "path/to/elf",
"serverpath": "/opt/SEGGER/JLink/JLinkGDBServerCLExe",
"servertype": "jlink",
"device": "MK10DX256xxx7",
"interface": "jtag",
"serialNumber": "proper serial number",
"runToMain": true,
"stopAtEntry": true,
"svdFile": "path/to/svd"

@haneefdm
Collaborator Author

haneefdm commented Mar 9, 2022

People are using their own gdb-init scripts to add source maps. No help from the extension needed. The bigger problem is how breakpoints are set and how stack traces are interpreted. Disassembly is affected as well. How we look for static variables is also affected -- as in, we have to implement what gdb does. VSCode manages breakpoints with the paths (file/line) that it sees. It wasn't clear to me how gdb handles file/line breakpoints and whether clients (us) have to do the reverse mapping. Yes, a separate issue would be good, but also with details of what it all means. In fact, it is more useful in a non-WSL setting.

cpptools/cppdbg added support for source maps. We can check to see what all they do with that info.

@AaronFontaine-DojoFive

AaronFontaine-DojoFive commented Oct 3, 2022

I've recently been trying to solve the same for a client project. I read this entire thread and did some experimentation and here's what I came up with.

In my .bashrc file in WSL, I added the following line:

export HOST_IP=`ip route|awk '/^default/{print $3}'`

In the launch.json file for VS Code, I added the following to serverArgs:

"-select", "ip=${env:HOST_IP}"

Then, I just need to make sure the JLink Remote Server is started in Windows and running in LAN mode before launching a debug session. This could probably also be triggered from launch.json. I need to look more into that. The JLink Remote Server does not let me select the network interface when running in LAN mode and for some reason always picks a VMWare network interface (192.168.253.1). Somehow this works, even though the JLink GDB Server running in Ubuntu is connecting to 172.28.208.1, which is nice because I don't have to manually copy any IP addresses. It implies the whole thing may be automatable.

Note that HOST_IP here refers to the Windows host (what VS Code calls the "Local OS" when running a remote connection).
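Triggering the remote server from VS Code might look roughly like the tasks.json sketch below, referenced from a preLaunchTask in launch.json. The command name JLinkRemoteServerCL.exe and its bare invocation are assumptions about the SEGGER installation, and isBackground generally needs a matching background problemMatcher so VS Code doesn't wait on the task indefinitely:

```
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "start-jlink-remote-server",
            "type": "shell",
            "command": "JLinkRemoteServerCL.exe",
            "isBackground": true,
            "problemMatcher": []
        }
    ]
}
```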

@AaronFontaine-DojoFive

Regarding hard constraint 1 above:

gdb should run wherever the source is compiled and ideally, this would be in WSL2/Linux environment

I haven't seen any discussion yet about -fdebug-prefix-map. I have had good success with it when moving ELF files between separate build and debug environments. It may not be a good all-around solution, though, as it is gcc-specific.

@lzptr

lzptr commented Oct 3, 2022

@aaronf-at-d5
I also tried to use it through both the Remote Server and the USB-IP approach. You can add a preLaunchTask to your launch.json configuration to start the JLink server.
Here's my example of it:

@sullivanmj

It may be possible to attach USB-based debuggers to WSL via cortex-debug in a way that does not require any of the following:

  1. Use of USB-IP
  2. Pre-launch/post-launch tasks to start remote GDB server
  3. Running of other remote debugging assistant applications such as J-Link Remote Server

My thought is to leverage the fact that Windows executables, when invoked from WSL, execute within the context of Windows.

This means that we can invoke a GDB server on the Windows host using the cortex-debug serverPath and perhaps serverArgs (to allow remote connections, etc) debug attributes.

Then the GDB client could be invoked from within WSL and attached per usual, but with some of the tricks listed above to use the correct target IP address and port - either using something like host.docker.internal:50000 or hostname.mshome.net:50000, or something like ${env:HOST_IP}:50000.

Debugging would commence as usual, and any other clients of the GDB server that you wish to attach from Windows could also operate normally. Then at the end of the debug session, the GDB connection would be cleanly closed as usual from the GDB client.

To achieve all of this, I was hoping to find a way to use both the serverPath and gdbTarget debug attributes, but it seems that gdbTarget is only used when the servertype is external, in which case, the serverPath is not used. This is an understandable behavior given the context of normal GDB client/server connections, but it prevents the approach I'm describing from being used.

I'm thinking that the simplest way to work around that limitation would be the creation of an overrideInitCommands debug attribute. This attribute would allow manual specification of how the connection from the GDB client to the server should be initialized from within launch.json. I have a branch that attempts this functionality located here. Presumably, this approach could be leveraged in a variety of other use-cases, too. It would be used in launch.json like so:

"overrideInitCommands":"target remote ${env:HOST_IP}:50000"

Am I off my rocker? Is there a better way of achieving what I've described above with the existing functionality of cortex-debug, or should I open a PR?

@AaronFontaine-DojoFive

AaronFontaine-DojoFive commented Jun 8, 2023

@sullivanmj, I like where you're going with this line of thinking. The reason using preLaunchTask to forward to tasks.json works is because a lot of the Windows-native paths are automatically mapped into WSL with the path renaming necessary to point into /mnt/c/. This allows us to just invoke JLinkGDBServerCL.exe and let Windows hooks take care of it. The process is clunky though and fails more easily. It would be nice to tell the Cortex Debug extension directly in launch.json, that we're using the Windows executable. You would still have to provide the -nolocalhostonly in serverArgs and use the ${env:HOST_IP}:2331 trick.

@sullivanmj

@aaronf-at-d5 Good point about the path being mapped into WSL. I agree, it would be great to have this way of doing it natively supported by Cortex Debug. Now that I've written that post, it appears I didn't actually discover this first; @askariz appears to have done this with openocd. But it looks like they were invoking that separately from launch.json.

@mpekurin

mpekurin commented Jun 13, 2023

@sullivanmj

To achieve all of this, I was hoping to find a way to use both the serverPath and gdbTarget debug attributes, but it seems that gdbTarget is only used when the servertype is external, in which case, the serverPath is not used. This is an understandable behavior given the context of normal GDB client/server connections, but it prevents the approach I'm describing from being used.

That's what preLaunchTask is used for: to bypass that behavior and basically do what serverPath should do. So maybe a simpler alternative to your solution would be just adding support for using both serverPath and gdbTarget. Then a config like this should do the trick:

{
  "type": "cortex-debug",
  "name": "Launch",
  "request": "launch",
  "servertype": "theServerOfYourChoice",
  "serverPath": "/mnt/c/path/to/theServerOfYourChoice.exe",
  "serverArgs": ["--port", "61234"],
  "gdbTarget": "${env:HOSTNAME}.mshome.net:61234"
}

Additionally, if this is (or will be) required, the extension can check whether serverPath points to an .exe to understand where the server is run.
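That check is trivial to sketch; a hypothetical helper (in shell here, though the extension itself would implement it differently) that treats a serverPath ending in .exe as a Windows-side server:

```shell
# Hypothetical check: does a configured serverPath refer to a Windows
# executable (and thus, under WSL interop, run on the Windows host)?
is_windows_server_path() {
    case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
        *.exe) return 0 ;;
        *) return 1 ;;
    esac
}
```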

Is there a better way of achieving what I've described above with the existing functionality of cortex-debug, or should I open a PR?

Without using USB-IP or tasks, most likely no.

@romancardenas

Hi, folks!

I'm setting up WSL2 with Cortex-Debug for a course, and I was wondering what approach you recommend at this point. Should I...?

  • Use USB-IP and expose my board in WSL
  • or try what @mpekurin mentioned: somehow run OpenOCD on Windows, with GDB running in WSL2 connecting to it?

I'm no expert in VSCode plugins nor WSL, but I can try to open a PR with this change.

@GitMensch
Contributor

I'm no expert in any of those, but I would suggest connecting to WSL2 first (with either the SSH or WSL2 extensions; either from https://github.com/jeanp413 or, if you prefer non-free "in preview" extensions with telemetry, the matching ones from Microsoft) and fully debugging there, with the possible need to expose your board to WSL first.

@haneefdm
Collaborator Author

haneefdm commented Oct 5, 2023

I am getting reports that WSL2 with USB-IP works very well. In this case, you would install OpenOCD/Jlink/whatever right on the Linux side and use the Linux versions.

But, make sure that your USB is working first by using various command-line tools (lsusb). You may run into firewall issues but there are workarounds for those too. Let Google be your friend. Make sure you can launch the gdb-server (OpenOCD, JLink, etc.) from the command-line first and that it recognizes your adapter/probe.

@romancardenas

It does work pretty well! The only issue is that students need to handle the USB attachment etc., which might be a bit confusing for them.

So far I'll use the usbipd approach. If I have time, I'll try to do a PR with the required changes to skip all this process and let OpenOCD run on Windows while GDB is on WSL.

@haneefdm
Collaborator Author

haneefdm commented Oct 5, 2023

@romancardenas About making the PR, you also have to handle allocating/finding the TCP ports if you are running on the Windows side. I think running the server on the Linux side is still a better option. We want things like SWO, RTT, etc. to also work. Some instructions for your students may be beneficial.

@szszoke

szszoke commented Oct 7, 2023

I managed to work around the problem of the J-Link software not being able to resolve the hostname host.docker.internal. I needed this to be able to connect to a J-Link remote server that was running on the same host as my Dev Container. Hard-coding an IP like 172.17.0.1 did not work because different host platforms have different IP addresses. This method was tested on a Linux host and on a Windows host running WSL2.

In order for this to work, you need to run docker with the following argument: --add-host=host.docker.internal:host-gateway. This argument is technically redundant on Windows and Mac, but it is needed on Linux. You can do this by adding the following to devcontainer.json:

{
  "runArgs": ["--add-host=host.docker.internal:host-gateway"]
}

This will ensure that you have a host.docker.internal entry in /etc/hosts which points to the IP of the host that is running the Dev Container.

The second step is to resolve this hostname to an IP so that we can pass it to JLinkGDBServer.

This can be done with the following command:

$ getent hosts host.docker.internal
172.17.0.1      host.docker.internal

We can further clean this command up with the help of awk:

$ getent hosts host.docker.internal | awk '{ print $1 }'
172.17.0.1

Great, now we have the IP address and it is not hard-coded. Unfortunately there is no obvious way to provide this IP to the debug configuration without hard-coding it, so we need a different solution.

I noticed that there is a serverpath parameter which allows us to provide a custom path for JLinkGDBServer. This is what finally allowed me to make things work. I created a shell script with the following content:

#!/bin/sh

# Resolve the host's IP from the host.docker.internal entry in /etc/hosts
REMOTE_JLINK_IP=$(getent hosts host.docker.internal | awk '{ print $1 }')

# Point JLinkGDBServer at the remote J-Link, forwarding all other arguments
JLinkGDBServer -select ip="$REMOTE_JLINK_IP" "$@"

What this does is get the IP address behind the host.docker.internal entry in /etc/hosts and store it in REMOTE_JLINK_IP. I then invoke JLinkGDBServer, pass the IP via -select ip=$REMOTE_JLINK_IP, and pass any other arguments that were given to the script via "$@".

I saved this script in my .devcontainer directory as remote-gdb-server.sh and then updated my debug configuration to use it by adding "serverpath": "./.devcontainer/remote-gdb-server.sh".

Don't forget to run chmod +x remote-gdb-server.sh.
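Putting it together, the relevant part of the launch configuration might look roughly like this (a sketch; the executable path and device name are placeholders for your own project):

```json
{
  "name": "Debug via remote J-Link",
  "type": "cortex-debug",
  "request": "launch",
  "servertype": "jlink",
  "serverpath": "./.devcontainer/remote-gdb-server.sh",
  "executable": "./build/firmware.elf",
  "device": "STM32F407VG"
}
```

Cortex-Debug launches the script exactly as it would launch JLinkGDBServer, so all the usual J-Link options still apply.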

This exact pattern can also be used in a Makefile:

REMOTE_JLINK_IP := $(shell getent hosts host.docker.internal | awk '{ print $$1 }')

One word of caution, however: inside a Makefile the $ needs to be escaped, which is why it becomes $$.
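As an aside, the hostname lookup can be made a little more defensive. This variant (my sketch, not part of the original setup) falls back to Docker's default bridge gateway 172.17.0.1 when host.docker.internal does not resolve, and degrades gracefully where the J-Link tools are not installed:

```shell
#!/bin/sh

# Try to resolve host.docker.internal; the result is empty if the entry is missing
REMOTE_JLINK_IP=$(getent hosts host.docker.internal | awk '{ print $1 }')

# Fall back to Docker's default bridge gateway if resolution failed
REMOTE_JLINK_IP=${REMOTE_JLINK_IP:-172.17.0.1}
echo "Using J-Link remote server at $REMOTE_JLINK_IP"

# Hand off to the real server, forwarding any extra arguments
if command -v JLinkGDBServer >/dev/null 2>&1; then
    exec JLinkGDBServer -select ip="$REMOTE_JLINK_IP" "$@"
fi
```

The fallback is only a convenience for the common default bridge; a custom Docker network may use a different gateway address.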

@haneefdm
Copy link
Collaborator Author

haneefdm commented Oct 7, 2023

I wish someone would volunteer to document what works in our Wiki. We could have sections for WSL, Docker, etc. I can start a Wiki Page if you want but I am not good for the content. You all know way more than me.

@szszoke
Copy link

szszoke commented Oct 8, 2023

I can start a Wiki Page if you want but I am not good for the content.

If you start a Wiki page, I could contribute my previous comment as an article about using J-Link from a Docker container.

@haneefdm
Copy link
Collaborator Author

haneefdm commented Oct 8, 2023

https://github.com/Marus/cortex-debug/wiki/Working-with-WSL2-or-Docker

Feel free to re-organize the page. I just started something. Maybe we want two separate pages, we can decide later.

Long time ago, I was playing with Codespaces and here was my Docker configuration

https://github.com/haneefdm/psoc6hello/tree/master/.devcontainer

Of course not everything I have is applicable to all.

@szszoke
Copy link

szszoke commented Oct 8, 2023

I think my comment would fit in better with the things in J Link Specific Configuration, since it is highly specific to that programmer and the software that comes with it.

@haneefdm
Copy link
Collaborator Author

haneefdm commented Oct 8, 2023

Make sections and sub-sections as you please. I would expect some setup differences for the various gdb-servers.

And edit whichever page you like.

@szszoke
Copy link

szszoke commented Oct 24, 2023

I think I will be using my setup for a bit to iron out some kinks. Until then, if somebody needs help, hopefully they see my previous comment.

@rhempel
Copy link

rhempel commented Nov 10, 2023

I have had some success with the following scenario:

  • Locked down corporate Windows laptop - but with wsl and docker allowed
  • VSCode on Windows side
  • Generic project with .devcontainer and Dockerfile to set up an Ubuntu 22 container suitable for development
  • Project specific Docker volume for the actual project - works on Windows and Linux hosts and is fast, rebuilding machine doesn't kill work in progress
  • STLink detected with USB-IP and passed to WSL first, then start Docker instance in --privileged mode
  • VSCode as editor and debugger works

It works, but single-stepping is painful, and I'm not sure I like having to mess with USB-IP, so I will check some of the comments above and instead start st-util on the host machine and connect to the gdb-server running on the host from the container where gdb is running.

Is anyone interested in my progress on this?

@jnz86
Copy link

jnz86 commented Nov 10, 2023

Is anyone interested in my progress on this?

I'm following. I'd be more interested had I not given up and just switched to a Linux desktop, at LEAST until this is all solved for Windows, which may be never.

@rhempel
Copy link

rhempel commented Nov 13, 2023

I have a potential solution here:

https://github.com/rhempel/docker-devcontainers/tree/main/adaptabuild-example

The difference from my original scenario is that we now run st-util on the Windows host, and get the correct IP address into the container's /etc/hosts file using this line in the devcontainer.json file:

// This line is the key to accessing the GDB server on the host machine
// when running Docker on a Windows machine.
//
// See .vscode/launch.json for details

  "runArgs": [
    "--add-host=host.gdb.gateway:host-gateway",
  ],

In .vscode/launch.json the key is:

      "gdbTarget": "host.gdb.gateway:4242",

Note: the container is based on Ubuntu 22.04 and pulls in everything needed to build a simple application, plus Sphinx for docs and gcov. The launch.json file assumes a project called adaptabuild-example which you can build in ~/projects.

Note: you might need to change ownership on ~/projects to adaptabuild:adaptabuild to allow cloning there.
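For reference, the matching launch configuration might look roughly like this (a sketch; the executable path is a placeholder, and "servertype": "external" tells Cortex-Debug to connect to an already-running gdb-server instead of starting one — st-util listens on port 4242 by default):

```json
{
  "name": "Debug via st-util on the Windows host",
  "type": "cortex-debug",
  "request": "launch",
  "servertype": "external",
  "gdbTarget": "host.gdb.gateway:4242",
  "executable": "./build/app.elf"
}
```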

The adaptabuild-example project is here:

https://github.com/rhempel/adaptabuild-example

I'm still busy getting it polished and writing a tutorial, but please let me know if this works for you :-)

@CaptainKAZ
Copy link

https://devblogs.microsoft.com/commandline/windows-subsystem-for-linux-september-2023-update/
It seems that we can use the new "mirrored" networking mode to connect a gdb server running on Windows with a gdb client on WSL2? I tried, but openocd claims that it fails to bind the tcl port.

openocd.exe -c "gdb_port 50000" -c "tcl_port 50001" -c "telnet_port 50002" -s /home/neo/Robocon2024-Embedded-BoardC -f /home/neo/.vscode-server/extensions/marus25.cortex-debug-1.12.1/support/openocd-helpers.tcl -f openocd.cfg -c "CDRTOSConfigure auto" -c CDLiveWatchSetup
xPack Open On-Chip Debugger 0.12.0+dev-01312-g18281b0c4-dirty (2023-09-04-22:32)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
CDLiveWatchSetup
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
/home/neo/.vscode-server/extensions/marus25.cortex-debug-1.12.1/support/openocd-helpers.tcl: stm32f4x.cpu configure -rtos auto
/home/neo/.vscode-server/extensions/marus25.cortex-debug-1.12.1/support/openocd-helpers.tcl: Info: Setting gdb-max-connections for target 'stm32f4x.cpu' to 2
Error: couldn't bind tcl to socket on port 50001: No error
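For anyone else trying this: mirrored networking is enabled via the .wslconfig file in your Windows user profile (%UserProfile%\.wslconfig), roughly like so. Note that in mirrored mode Windows and WSL2 share the port space, so a port already bound on one side (as in the error above) cannot be bound again from the other:

```ini
[wsl2]
networkingMode=mirrored
```

Run `wsl --shutdown` afterwards so the setting takes effect on the next start.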

@xpz24
Copy link

xpz24 commented Apr 2, 2024

I believe WSL, Cortex-Debug and OpenOCD work well together now, at least for Nucleo-32 boards over USB. I tried it on Windows 11, WSL2, Ubuntu 23.10, a 6.2.x kernel, and the usbipd nightly version. Before that I updated the ST-Link firmware to the latest version.
https://steelph0enix.github.io/posts/vscode-cubemx-setup/ this guide helped a lot.

I used the customised OpenOCD version from STMicroelectronics; the one in the Ubuntu repos would probably work too.
STM32CubeMX (Linux version, with the latest JRE; most likely unnecessary, the bundled JRE should be fine) to generate the code.
stm32-for-vscode by dingen automated most of the stuff in the guide.
I used the SVD file from the STM website.
Programming and flashing work well. Debugging also works well: I can see the variables, peripherals, registers and memory.

All possible from the WSL side; the only Windows program I had to install was usbipd.
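For anyone replicating this, the usbipd flow is roughly the following (run from an elevated PowerShell; the bus ID 2-5 is a placeholder for whatever usbipd list reports for your ST-Link):

```
PS> usbipd list                       # find the ST-Link's bus ID
PS> usbipd bind --busid 2-5           # share the device (one-time, needs admin)
PS> usbipd attach --wsl --busid 2-5   # attach it to the running WSL2 distro
```

After attaching, `lsusb` inside WSL should list the ST-Link, and OpenOCD can open it directly.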

@AaronFontaine-DojoFive
Copy link

AaronFontaine-DojoFive commented Apr 2, 2024

@xpz24, I think that much of the discussion here has been concerned with how to bridge out of WSL for the debugger. A lot of that has focused on getting the gdb server launched on the Windows side and then getting a WSL-side gdb client connected to it. I now think the better approach is, as you say, to use usbipd. There is now even a GUI for managing this. Golioth has a good article about it.
