
make lists -j32 doesn't seem to be honoring the thread count. (Also happens when calling make training -j32) #382

Open
ipaqmaster opened this issue Apr 1, 2024 · 3 comments

Comments

@ipaqmaster

Hi team,

I'm training a model on a font with START_MODEL=eng, and while the resulting .traineddata correctly recognizes a lot of text set in that font, some samples still trip it up. It was only trained on a couple of thousand lines.

To solve this problem lazily, I'm trying again with far more training lines than before (160k; very much overkill, and probably wasted cycles).

During make training I've noticed that many preparation steps run in parallel, but the lists step appears to call tesseract data/font-ground-truth/abc_00001.tif data/font-ground-truth/abc_00001 --psm 13 lstm.train on one .tif at a time.

In my limited experience with this software, this looks like a step that could run concurrently; that would speed up the initial data preparation and get to the actual training part of the process sooner, without having to resort to scripting.

Is it possible to make this training preparation step run in parallel with multiple -jxx jobs?

@ipaqmaster
Author

Worked around this with the bash scripting below to speed things up:

cd tesstrain

# Generate a .box for every .tif that does not have one yet
find data/*ground-truth/ -type f -name '*.tif' | while read -r line ; do base="${line%.tif}" ; [ ! -f "${base}.box" ] && echo "PYTHONIOENCODING=utf-8 python3 generate_line_box.py -i \"${line}\" -t \"${base}.gt.txt\" > \"${base}.box\"" ; done | parallel -j$(nproc)


# Generate an .lstmf for every .tif that has a .box but no .lstmf yet
find data/*ground-truth/ -type f -name '*.tif' | while read -r line ; do base="${line%.tif}" ; [ -f "${base}.box" ] && [ ! -f "${base}.lstmf" ] && echo "tesseract \"${line}\" \"${base}\" --psm 13 lstm.train" ; done | parallel -j$(nproc)
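Both loops only echo a command for files whose outputs are missing, so they are safe to re-run after an interruption. The echoed command lines are executed by GNU parallel (which has to be installed separately), with -j$(nproc) keeping one job per CPU core. Note that ${line%.tif} strips only the extension, so the scripts also work if a directory name happens to contain a dot.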

@stweil
Collaborator

stweil commented Apr 10, 2024

I have always used make -j for parallel builds of the box and lstmf files, and it worked fine (with PNG images instead of TIFF, but that should not matter). Meanwhile I have an even better alternative which no longer requires box and lstmf files at all.

@yaofuzhou

Hi - not necessarily the answer you were looking for, but Tesstrain is essentially a wrapper that helps you run a sequence of Tesseract binaries with (hopefully) the correct parameters. Here is my way of significantly speeding up the development process:

  1. Use GPT/Claude to decompose the Tesstrain Makefile into a series of components:
  • A master Makefile
  • A config.mk to store all parameters, which can be included by the various components
  • unicharset.mk for make unicharset
  • lists.mk for make lists
  • training.mk for make training
  • and perhaps a misc.mk for the rest
  2. Understand what each component does. Ask GPT/Claude to explain it to you if needed.
  3. Translate the core task of each .mk component to Python, which GPT/Claude handles much better. Have, say, unicharset.mk call a unicharset.py that executes the same tasks.
  4. Identify which tasks in each .py are parallelizable, and ask GPT/Claude to modify the code to use multithreading or multiprocessing (a sketch follows this list).
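For step 4, a minimal sketch of what such a Python translation might look like for the lstmf generation, assuming the same data/*ground-truth/*.tif layout as in the workaround above; the file name parallel_lstmf.py and the function make_lstmf are made up for illustration and are not part of Tesstrain:

# parallel_lstmf.py - hypothetical sketch: build one .lstmf per .tif with
# as many concurrent tesseract processes as there are CPU cores.
import subprocess
from multiprocessing import Pool
from pathlib import Path

def make_lstmf(tif: Path) -> None:
    base = tif.with_suffix("")               # data/.../abc_00001
    if base.with_suffix(".lstmf").exists():  # skip files that are already done
        return
    subprocess.run(
        ["tesseract", str(tif), str(base), "--psm", "13", "lstm.train"],
        check=True,
    )

if __name__ == "__main__":
    tifs = sorted(Path("data").glob("*ground-truth/*.tif"))
    with Pool() as pool:  # Pool() defaults to os.cpu_count() workers
        pool.map(make_lstmf, tifs)

Each worker only waits on an external tesseract process, so a thread pool would work equally well; multiprocessing.Pool is used here simply because it matches the suggestion above.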
