Data "D2-Net preprocessed images" is unavailable #265

WallofWonder opened this issue Apr 25, 2023 · 10 comments

@WallofWonder

A 404 is reported when downloading the data.

[Screenshots, 2023-04-25: the 404 responses]

@trand2k commented May 9, 2023

I have the same issue.

@nonathecoda

How did you solve the issue? @trand2k @WallofWonder

@WallofWonder (Author)

Still waiting for a solution.

@nonathecoda

They actually updated the README; in the FAQ they now recommend leaving the D2-Net dataset out.

@QiuhangLiu

> They actually updated the README; in the FAQ they now recommend leaving the D2-Net dataset out.

Do you know how to train this with only MegaDepth? How do we build the symlinks then?

@nonathecoda

I think you can just leave out the D2-Net part and symlink only the MegaDepth dataset, roughly like the sketch below. I haven't tried training it yet, though. By the way, which cloud GPU service will you use to train it?
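
A minimal sketch of what I mean (all paths are placeholders; it assumes your MegaDepth download contains the phoenix and Undistorted_SfM folders):

    # Link only the MegaDepth folders LoFTR expects, skipping the
    # unavailable D2-Net preprocessed images.
    import os

    megadepth_root = "/path/to/megadepth"         # holds phoenix/ and Undistorted_SfM/
    loftr_data = "/path/to/LoFTR/data/megadepth"  # LoFTR's expected data directory

    for split in ("train", "test"):
        dst_dir = os.path.join(loftr_data, split)
        os.makedirs(dst_dir, exist_ok=True)
        for name in ("phoenix", "Undistorted_SfM"):
            dst = os.path.join(dst_dir, name)
            if not os.path.islink(dst):
                os.symlink(os.path.join(megadepth_root, name), dst)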

@QiuhangLiu

Thank you for your answer. I just linked MegaDepth and it still got an error. For the GPUs, I followed the authors' guidance (they said 4 GPUs can run it), with 4 A5000s, 24 GB of memory each.

@nonathecoda

What error do you get? I started training a few minutes ago, but haven't gotten an error yet.

@QiuhangLiu

> What error do you get? I started training a few minutes ago, but haven't gotten an error yet.

Hi, thanks so much for replying. Did you train on ScanNet or MegaDepth? How did you set up the training? I used MegaDepth, but the undistorted data from D2-Net is unavailable, so I downloaded the original MegaDepth SfM dataset (600+ GB) to replace it.

My error is:

    File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\dataloader.py", line 1225, in _process_data
      data.reraise()
    File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\_utils.py", line 429, in reraise
      raise self.exc_type(msg)
    AttributeError: Caught AttributeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\worker.py", line 202, in _worker_loop
        data = fetcher.fetch(index)
      File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "D:\soft\Anaconda3\envs\loftr2\lib\site-packages\torch\utils\data\dataset.py", line 219, in __getitem__
        return self.datasets[dataset_idx][sample_idx]
      File "D:\LoFTR-master\src\datasets\megadepth.py", line 75, in __getitem__
        image0, mask0, scale0 = read_megadepth_gray(
      File "D:\LoFTR-master\src\utils\dataset.py", line 109, in read_megadepth_gray
        w, h = image.shape[1], image.shape[0]
    AttributeError: 'NoneType' object has no attribute 'shape'
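
From the last frame, cv2.imread is apparently returning None, i.e. an image path does not resolve through my symlinks. A quick check I could run (a sketch; the index file name is a placeholder, and it assumes the indices store an image_paths array the way src/datasets/megadepth.py reads them):

    import os
    import cv2
    import numpy as np

    root = "data/megadepth/train"
    index = "data/megadepth/index/scene_info_0.1_0.7/0000_0.1_0.3.npz"  # placeholder name

    # Flag every image referenced by this scene index that cv2 cannot read;
    # an unreadable path is exactly what produces the .shape AttributeError.
    scene_info = np.load(index, allow_pickle=True)
    for rel in scene_info["image_paths"]:
        if rel is None:  # some index entries are empty
            continue
        path = os.path.join(root, rel)
        if cv2.imread(path) is None:
            print("unreadable:", path)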

My whole process is like this:

  1. Environment setup (exactly following the guidance)
  2. Download MegaDepth v1 (200 GB) and MegaDepth SfM (600 GB)
  3. Build symlinks (I renamed the MegaDepth SfM dataset to "Undistorted_SfM"):
    ln -sv /path/to/megadepth/phoenix /path/to/megadepth_d2net/Undistorted_SfM /path/to/LoFTR/data/megadepth/train
    ln -sv /path/to/megadepth/phoenix /path/to/megadepth_d2net/Undistorted_SfM /path/to/LoFTR/data/megadepth/test
    ln -s /path/to/megadepth_indices/* /path/to/LoFTR/data/megadepth/index
  4. Run
    bash scripts/reproduce_train/outdoor_ds.sh
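
To double-check step 3, something like this (a sketch; run from the LoFTR root, paths as above) should confirm the links resolve before training:

    import os

    # Every directory LoFTR reads from should exist through the symlinks.
    expected = [
        "data/megadepth/train/phoenix",
        "data/megadepth/train/Undistorted_SfM",
        "data/megadepth/test/phoenix",
        "data/megadepth/test/Undistorted_SfM",
        "data/megadepth/index",
    ]
    for p in expected:
        status = "OK" if os.path.isdir(p) else "MISSING"
        print(status, p, "->", os.path.realpath(p))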

Have you done any other steps besides these?

Btw, may I ask what GPUs you used? I use 4 A5000s with 24 GB of memory each. Could the error be caused by the hardware?

PS: Could you please add me on WhatsApp (+65 93512175) for further contact? We could discuss some more details.

I really appreciate your kind reply. (No one in my lab researches the same topic 😭)

@nonathecoda
