
stty command not executed #90

Closed
matttbe opened this issue Mar 14, 2024 · 6 comments
@matttbe
Contributor

matttbe commented Mar 14, 2024

Hello,

First, thank you for continuing the development of virtme. The new version is great, and the migration to it is not too difficult because the virtme-* commands are still there! (I hope it is OK to use them directly.)

I'm using vng 1.22 installed from pip, in a Docker container with Ubuntu 23.10 as a base.

I used virtme-configkernel to generate the config. I built the kernel myself (because I want to do that in the .virtme/build dir, and build other stuff).

Then I launched it (I replaced the current dir with ${PWD} below to make it more readable):

# virtme-run --arch x86_64 --name mptcpdev --memory 2048M --kdir ${PWD}/.virtme/build --mods=auto --rwdir . --pwd --verbose --show-command --show-boot-console --kopt mitigations=off --cpus 2
virtme: waiting for virtiofsd to start
virtme: use 'microvm' QEMU architecture
/usr/bin/qemu-system-x86_64 -name mptcpdev -m 2048M -chardev socket,id=charvirtfs5,path=/tmp/virtmedv4v92fa -device vhost-user-fs-device,chardev=charvirtfs5,tag=ROOTFS -object memory-backend-memfd,id=mem,size=2048M,share=on -numa node,memdev=mem -fsdev local,id=virtfs13,path=.,security_model=none,multidevs=remap -device virtio-9p-device,fsdev=virtfs13,mount_tag=virtme.initmount0 -machine accel=kvm:tcg -M microvm,accel=kvm,pcie=on -cpu host -parallel none -net none -echr 1 -serial none -chardev file,path=/proc/self/fd/2,id=dmesg -device virtio-serial-device -device virtconsole,chardev=dmesg -chardev stdio,id=console,signal=off,mux=on -serial chardev:console -mon chardev=console -vga none -display none -smp 2 -kernel ${PWD}/.virtme/build/arch/x86/boot/bzImage -append 'virtme_hostname=mptcpdev nr_open=1048576 virtme_link_mods=${PWD}/.virtme/build/.virtme_mods/lib/modules/0.0.0 virtme_initmount0=${PWD/\//} console=hvc0 earlyprintk=serial,ttyS0,115200 virtme_console=ttyS0 psmouse.proto=exps "virtme_stty_con=rows 61 cols 123 iutf8" TERM=xterm virtme_chdir=${PWD/\//} virtme_root_user=1 rootfstype=virtiofs root=ROOTFS raid=noautodetect ro mitigations=off init=/usr/local/lib/python3.11/dist-packages/virtme/guest/bin/virtme-ng-init'

(...)

[    2.431226] Run /usr/local/lib/python3.11/dist-packages/virtme/guest/bin/virtme-ng-init as init process
[    2.431372]   with arguments:
[    2.431501]     /usr/local/lib/python3.11/dist-packages/virtme/guest/bin/virtme-ng-init
[    2.431657]   with environment:
[    2.431742]     HOME=/
[    2.431800]     TERM=xterm
[    2.431862]     virtme_hostname=mptcpdev
[    2.431940]     nr_open=1048576
[    2.432027]     virtme_link_mods=${PWD}/.virtme/build/.virtme_mods/lib/modules/0.0.0
[    2.432130]     virtme_initmount0=${PWD/\//}
[    2.432208]     virtme_console=ttyS0
[    2.432255]     virtme_stty_con=rows 61 cols 123 iutf8
[    2.432308]     virtme_chdir=${PWD/\//}
[    2.432386]     virtme_root_user=1
[    2.440293] virtme-ng-init: mount devtmpfs -> /dev: EBUSY: Device or resource busy
[    2.472653] virtme-ng-init (48) used greatest stack depth: 14088 bytes left
[    2.472844] virtme-ng-init: WARNING: failed to run: "systemd-tmpfiles" --create --boot --exclude-prefix=/dev --exclude-prefix=/root
[    2.473603] virtme-ng-init: unable to find udevd, skip udev.
[    2.478890] ip (49) used greatest stack depth: 12320 bytes left
[    2.487662] virtme-ng-init: initialization done

(...)

   kernel version: 6.8.0+ x86_64
   (CTRL+d to exit)

root@mptcpdev:${PWD}# 

But the serial console is difficult to use: max 80 chars, and then it wraps back to the same line. It looks like the init program didn't do its job (even though 'initialization done' was printed):

root@mptcpdev:${PWD}# PS1="~ "  ## because otherwise it's hard to type stuff :)
~ echo $COLUMNS
80
~ stty ${virtme_stty_con} <"/dev/${virtme_console}"
~ echo $COLUMNS
123
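
(For context: virtme passes the host terminal geometry on the kernel command line as virtme_stty_con=..., and the kernel hands unrecognized name=value parameters to the init process as environment variables, as the boot log above shows. The manual command here is exactly the step the guest init is expected to perform at boot. A minimal Rust sketch of that step, assuming an init along the lines of virtme-ng-init; the function name and error handling are illustrative, not the actual virtme-ng-init code:)

use std::fs::File;
use std::process::{Command, Stdio};

// Apply terminal line settings to the guest console, mirroring the
// shell workaround: stty ${virtme_stty_con} <"/dev/${virtme_console}"
fn apply_stty(console: &str, stty_args: &str) -> std::io::Result<()> {
    // Open the console device so stty operates on that terminal,
    // not on whatever stdin the init process happens to have.
    let tty = File::open(format!("/dev/{}", console))?;
    let status = Command::new("stty")
        .args(stty_args.split_whitespace()) // e.g. "rows 61 cols 123 iutf8"
        .stdin(Stdio::from(tty))
        .status()?;
    if !status.success() {
        eprintln!("stty exited with {}", status);
    }
    Ok(())
}

(Redirecting stdin from the console device matters because, at that point in boot, the init's own stdin may not be attached to the terminal the user actually sees.)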

Also, while at it: it looks like XDG_RUNTIME_DIR is set, but the directory doesn't exist:

~ ls $XDG_RUNTIME_DIR
ls: cannot access '/run/user/0': No such file or directory

If I launch virtme-run with --no-virtme-ng-init, it works better:

root@mptcpdev:${PWD}# echo $COLUMNS
123
root@mptcpdev:${PWD}# echo $XDG_RUNTIME_DIR
/run/user/0
root@mptcpdev:${PWD}# ls $XDG_RUNTIME_DIR 

Am I launching virtme-run the wrong way?

@arighi
Owner

arighi commented Mar 14, 2024

Hi, thanks for reporting all of this.

You can definitely use virtme-run, but usually it's much easier to use vng. 😄 They should work in the same way: vng actually calls virtme-run under the hood.

About stty: both virtme-init and virtme-ng-init should call the same command, so it's weird that you see different behaviors. I'm wondering if there's an issue with the path. Can you check where the stty binary is in your environment?

About XDG_RUNTIME_DIR: it seems to be created in my case:

$ vng -r
          _      _
   __   _(_)_ __| |_ _ __ ___   ___       _ __   __ _
   \ \ / / |  __| __|  _   _ \ / _ \_____|  _ \ / _  |
    \ V /| | |  | |_| | | | | |  __/_____| | | | (_| |
     \_/ |_|_|   \__|_| |_| |_|\___|     |_| |_|\__  |
                                                |___/
   kernel version: 6.8.0-13-generic x86_64
   (CTRL+d to exit)

6.8.0-13-generic ~
$ stat $XDG_RUNTIME_DIR 
  File: /run/user/1000
  Size: 40        	Blocks: 0          IO Block: 4096   directory
Device: 0,26	Inode: 324         Links: 2
Access: (0755/drwxr-xr-x)  Uid: ( 1000/  arighi)   Gid: (    0/    root)
Access: 2024-03-14 18:27:53.949976892 +0100
Modify: 2024-03-14 18:27:53.949976892 +0100
Change: 2024-03-14 18:27:53.949976892 +0100
 Birth: 2024-03-14 18:27:53.949976892 +0100

Do you have /run/ on your host? (actually, inside your Docker container)

matttbe added a commit to multipath-tcp/mptcp-upstream-virtme-docker that referenced this issue Mar 14, 2024
'virtme' is no longer maintained; we had to fork it to fix issues [1].

The new 'virtme-ng' brings fixes, new features, and performance
improvements, especially for disk IO [2].

Note that three workarounds are currently needed:
 - The new init in Rust cannot be used [3].
 - The .config needs to be removed [4].
 - The whole rootfs is mounted rw instead of only the workdir, to be
   able to use virtiofs (faster) instead of 9p. But that's OK: our
   scripts are launched from a Docker container where only the workdir
   is mounted. Maybe it is even easier like that.

There are new dependencies, but it is still light. 'udev' has been added
to avoid a warning, but I don't think we need it.

Link: https://github.com/matttbe/virtme [1]
Link: https://lwn.net/Articles/951313/ [2]
Link: arighi/virtme-ng#90 [3]
Link: https://github.com/arighi/virtme-ng/pulls/91 [4]
Closes: multipath-tcp/mptcp_net-next#472
Signed-off-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
@matttbe
Contributor Author

matttbe commented Mar 14, 2024

Thank you for your quick reply!

> You can definitely use virtme-run, but usually it's much easier to use vng. 😄 They should work in the same way: vng actually calls virtme-run under the hood.

I had scripts using the old virtme, directly using virtme-run. It was easier for me to keep using these commands and not change the options :)
(Not many modifications were needed to switch to virtme-ng: multipath-tcp/mptcp-upstream-virtme-docker@0c54a94)

> About stty: both virtme-init and virtme-ng-init should call the same command, so it's weird that you see different behaviors. I'm wondering if there's an issue with the path. Can you check where the stty binary is in your environment?

From the docker, I have:

# which stty
/usr/bin/stty

I guess it is correct, no?
Also, the same command works when I launch stty from the serial console.

Please also note that if I run vng -r from my system (Ubuntu Noble, kernel 6.8.0-11, not the same as yours), I have the same issue:

$ vng -r
          _      _
   __   _(_)_ __| |_ _ __ ___   ___       _ __   __ _
   \ \ / / |  __| __|  _   _ \ / _ \_____|  _ \ / _  |
    \ V /| | |  | |_| | | | | |  __/_____| | | | (_| |
     \_/ |_|_|   \__|_| |_| |_|\___|     |_| |_|\__  |
                                                |___/
   kernel version: 6.8.0-11-generic x86_64
   (CTRL+d to exit)

$ echo $COLUMNS
80

(or is it disturbed somehow because zsh is used?)

> Do you have /run/ on your host? (actually, inside your Docker container)

Good point, no I don't:

# ls /run/
adduser  lock/    mount/   systemd/

No worries then!

@arighi
Owner

arighi commented Mar 14, 2024

> I had scripts using the old virtme, directly using virtme-run. It was easier for me to keep using these commands and not change the options :) (Not many modifications were needed to switch to virtme-ng: multipath-tcp/mptcp-upstream-virtme-docker@0c54a94)

Ah OK, if you have scripts that are using virtme-run, it makes sense to keep using that. I want to maintain everything 100% backward compatible, so virtme-run will keep working as before (if it doesn't, then it's a bug).

> About stty: both virtme-init and virtme-ng-init should call the same command, so it's weird that you see different behaviors. I'm wondering if there's an issue with the path. Can you check where the stty binary is in your environment?
>
> From the docker, I have:
>
> # which stty
> /usr/bin/stty
>
> I guess it is correct, no? Also, the same command works when I launch stty from the serial console.

Yep, that seems correct. I'll investigate more.

> Please also note that if I run vng -r from my system (Ubuntu Noble, kernel 6.8.0-11, not the same as yours), I have the same issue:
>
> $ vng -r
>           _      _
>    __   _(_)_ __| |_ _ __ ___   ___       _ __   __ _
>    \ \ / / |  __| __|  _   _ \ / _ \_____|  _ \ / _  |
>     \ V /| | |  | |_| | | | | |  __/_____| | | | (_| |
>      \_/ |_|_|   \__|_| |_| |_|\___|     |_| |_|\__  |
>                                                 |___/
>    kernel version: 6.8.0-11-generic x86_64
>    (CTRL+d to exit)
>
> $ echo $COLUMNS
> 80
>
> (or is it disturbed somehow because zsh is used?)

It shouldn't be a problem, I'll investigate a bit.

> Do you have /run/ on your host? (actually, inside your Docker container)
>
> Good point, no I don't:
>
> # ls /run/
> adduser  lock/    mount/   systemd/
>
> No worries then!

Maybe we should just create /run if it doesn't exist? Or redirect it to a different path? I'll think about it.
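
(A minimal sketch of the first option, assuming the conventional /run/user/<uid> layout; this only illustrates the idea, not the code virtme-ng-init eventually shipped:)

use std::fs;
use std::os::unix::fs::PermissionsExt;

// Create XDG_RUNTIME_DIR (e.g. /run/user/0) if it is missing,
// including any missing parents such as /run itself.
fn ensure_runtime_dir(uid: u32) -> std::io::Result<String> {
    let dir = format!("/run/user/{}", uid);
    fs::create_dir_all(&dir)?; // no-op if it already exists
    // Runtime dirs are conventionally private to their owner.
    fs::set_permissions(&dir, fs::Permissions::from_mode(0o700))?;
    Ok(dir)
}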

@arighi
Owner

arighi commented Mar 14, 2024

> I had scripts using the old virtme, directly using virtme-run. It was easier for me to keep using these commands and not change the options :) (Not many modifications were needed to switch to virtme-ng: multipath-tcp/mptcp-upstream-virtme-docker@0c54a94)

And it's pretty cool to see that other projects are moving to virtme-ng, thanks for sharing this! 😄

arighi added a commit to arighi/virtme-ng-init that referenced this issue Mar 15, 2024
Make sure to redirect stdout/stderr to the right console before
applying the proper terminal settings via stty.

Link: arighi/virtme-ng#90
Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
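
(The gist of that fix, as a rough Rust sketch: point the init's own stdout/stderr at the chosen console device before invoking stty, so the settings land on the terminal the user actually sees. This assumes the libc crate; the names are illustrative, not the actual virtme-ng-init code:)

use std::fs::OpenOptions;
use std::os::unix::io::AsRawFd;

// Redirect the init's stdout/stderr to the chosen console device
// (e.g. /dev/ttyS0) before applying terminal settings via stty.
// Assumes the `libc` crate for dup2().
fn redirect_to_console(console: &str) -> std::io::Result<()> {
    let tty = OpenOptions::new()
        .read(true)
        .write(true)
        .open(format!("/dev/{}", console))?;
    // dup2() the console over file descriptors 1 and 2.
    unsafe {
        libc::dup2(tty.as_raw_fd(), libc::STDOUT_FILENO);
        libc::dup2(tty.as_raw_fd(), libc::STDERR_FILENO);
    }
    Ok(())
}

(With fds 1 and 2 duped to the console, any subsequently spawned stty inherits them and is tied to the right terminal.)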
arighi added a commit that referenced this issue Mar 15, 2024
Update the virtme-ng-init submodule to include the following fixes:
 - virtme-ng-init: properly configure terminal line settings
 - init: set the HOME env var if root

This fixes issue #90.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
@arighi
Owner

arighi commented Mar 15, 2024

@matttbe can you check if f9c3692 fixes the stty issue? Thanks!

@matttbe
Contributor Author

matttbe commented Mar 15, 2024

@arighi yes, it fixes my issue in both the container and on my system! Thank you!

matttbe closed this as completed Mar 15, 2024