
Releases: lightvector/KataGo

Minor fixes, restore support for TensorRT 8.5

10 Mar 21:32

If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here. Also, download the latest neural nets to use with this engine release at https://katagotraining.org/.

KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!

Summary and Notes

This is primarily a bugfix release. If you're contributing to distributed training for KataGo, this release also includes a minor adjustment to the bonuses that incentivize KataGo to finish the game cleanly, which might slightly improve robustness of training.

Both this and the prior release support an upcoming larger and stronger "b28" neural net that is currently being trained and will likely be ready soon!

As a reminder, for 9x9 boards, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.

Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.

The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered issues with libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux, see the "TLDR" instructions for Linux here.

Changes in v1.14.1

  • Restores support for TensorRT 8.5. Although the precompiled executables are still built for TensorRT 8.6 and CUDA 12.1, if you are building from source, TensorRT 8.5 along with a suitable CUDA version such as 11.8 should work as well (see the build sketch after this list). Thanks to @hyln9 - #879
  • Changes the ending score bonus to not discourage capture moves, encouraging selfplay to more frequently sample mild resistance and refute bad endgame cleanup.
  • Python neural net training code now randomizes history masking, instead of using a static mask that is generated at data generation time. This should very slightly improve data diversity when reusing data rows.
  • Python neural net training code now clears out NaNs from the running training statistics, so that the stats remain useful if a neural net experiences an exploded gradient during training but still manages to recover from it.
  • Various minor cleanups to code and documentation, including a new document about graph search.
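
For reference, here is a minimal sketch of building the TensorRT backend from source on Linux; the -DUSE_BACKEND flag is KataGo's usual cmake backend selector, but this assumes TensorRT 8.5 (or 8.6) and a matching CUDA toolkit are already installed and discoverable, so consult the compile instructions in the README if your setup needs extra cmake hints.

    cd KataGo/cpp
    cmake . -DUSE_BACKEND=TENSORRT
    make -j4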

Support upcoming larger "b28" nets and lots of bugfixes

28 Dec 05:12

If you're a new user, this section has tips for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), see here. Also, download the latest neural nets to use with this engine release at https://katagotraining.org/.

KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!


Note for CUDA and TensorRT: starting with this release newer versions are required!

  • The CUDA version requires CUDA 12.1.x and CUDNN 8.9.7. CUDA 12.1.1 in particular was used for compiling and testing. For CUDA, using a more recent version should work as well. Older versions might work too, but even if they do work, upgrading from a much older version might give a small performance improvement.
  • The TensorRT version requires precisely CUDA 12.1.x and TensorRT 8.6.1 ("TensorRT 8.6 GA"). CUDA 12.1.1 in particular was used for compiling and testing.
  • Note that CUDA 12.1.x is used even though it is not the latest CUDA version because TensorRT does not yet support CUDA 12.2 or later! So for TensorRT, the CUDA version must not be upgraded beyond that.

Summary and Notes

This release adds upcoming support for a larger and stronger "b28" neural net that is currently being trained and will likely be ready within the next couple of months! This release also fixes a lot of minor bugs and makes a lot of minor improvements.

As a reminder, see here for a special neural net better than any other net on 9x9, which was used to generate the 9x9 opening books at katagobooks.org.

Available below are both the standard and "bs29" versions of KataGo. The "bs29" versions are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so use them only when you really want to try large boards.

The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered issues with libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux, see the "TLDR" instructions for Linux here.

Changes in v1.14.0

New features

  • Added support for a new "v15" model format that adds a nonlinearity to the pass policy head. This change is required for the new larger b28c512nbt neural net that should be ready in the next few months and might become the strongest neural net to use for top-tier GPUs.

Engine improvements

  • KataGo analysis mode now ignores history prior to the root (except still obeying ko/superko)! This means analysis will no longer be biased by placing stones in an unrealistic ordering when setting up an initial position, or by exploring game variations where both players play very bad moves. Pre-root history is still used when KataGo is playing rather than analyzing, because it is presumed that KataGo played the whole game as the current player and chose the moves it wanted - if this is not true, see the analysisIgnorePreRootHistory and ignorePreRootHistory config parameters, sketched after this list.
  • Eigen version of KataGo now shares the neural net weights across all threads instead of copying them - this should greatly reduce memory usage when running with multiple threads/cores.
  • TensorRT version of KataGo now has a cmake option USE_CACHE_TENSORRT_PLAN for custom compiling that can give faster startup times for the TensorRT backend at the cost of some disk space (thanks to kinfkong). Do NOT use this for self-play or training; it will use excessive disk space over time and increase the cost of each new neural net. The ideal use case is using only one or a few nets for analysis/play over and over (see the cmake sketch below).
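
As a rough illustration of the pre-root history options above: both parameters appear to be booleans, but exactly which of the two belongs in the GTP config versus the analysis engine config is an assumption here, so check gtp_example.cfg and analysis_example.cfg for the authoritative placement and defaults.

    # GTP config: applies to analysis commands such as kata-analyze
    analysisIgnorePreRootHistory = true
    # Analysis engine config: applies to analysis queries
    ignorePreRootHistory = true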
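
And a minimal sketch of enabling the TensorRT plan cache when compiling from source; the exact value the option accepts (1/ON) is an assumption, and as noted above it should not be used for selfplay or training.

    cd KataGo/cpp
    cmake . -DUSE_BACKEND=TENSORRT -DUSE_CACHE_TENSORRT_PLAN=1
    make -j4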

Main engine bugfixes

  • Fixed bug where KataGo would not try to claim a win under strict scoring rules when forced to analyze a position past when the game should have already ended, and would assume the opponent would not either.
  • Fixed a bad memory access that might cause a mild bias in dame-filling behavior under Japanese rules.
  • Fixed an issue where, when contributing selfplay games to distributed training, if the first web query to katagotraining.org failed the entire program would fail, instead of retrying the query as it would for any later web queries.
  • Fixed some multithreading races - avoid any copying of child nodes between arrays during search.
  • Fixed bug in parsing certain malformed configs with multiple GPUs specified.
  • Fixed bug in determining the implicit player to move on the first turn of an SGF with setup stones.
  • Fixed some bugs in recomputing root policy optimism when differing from tree policy optimism in various cases, or when softmax temperature or other parameters differ after pondering.
  • Fixed some inconsistencies in how Eigen backend number of threads was determined.
  • Shrank the default batch size on the Eigen backend since batching doesn't help CPUs much; this should make more efficient use of cores when running with fewer threads.
  • Minor internal code cleanups involving turn numbers, search nodes, and other details. (thanks nerai)

Expert/dev tool improvements

  • Tools
    • Added bSizesXY option to control exact board size distribution including rectangles for selfplay or match commands, instead of only an edge length distribution. See match_example.cfg.
    • Improved many aspects of the book generation code and added more parameters to it; these were used for the 9x9 books at katagobooks.org.
    • The python summarize_sgfs.py tool now outputs stats that can identify rock-paper-scissors situations in the Elos.
    • Added experimental support for dynamic komi in internal test matches.
    • Various additional arguments and minor changes and bugfixes to startpos/hintpos commands.
  • Selfplay and training
    • By default, training models will now use a cheaper version of repvgg-linear architecture that doesn't actually instantiate the inner 1x1 convolution, but instead adjusts weights and increases the LR on the central square of a 3x3 conv. This change only applies to newly initialized models - existing models will keep the old and slower-training architecture.
    • Modernized all the various outdated selfplay config parameters and added a README for them.
    • Minor (backwards-compatible) adjustments to training data NPZ format, made to better support experimental conversion of human games to NPZ training data.
    • Improved shuffle.py and training.py defaults and -help documentation. E.g. cd python; python shuffle.py -help.
    • Various other minor updates to documentation.
    • Improved and slightly rearranged the synchronous loop logic.

Expert/dev tool bugfixes

  • Fixed wouldBeKoCapture bug in python board implementation.
  • Fixed bug where trainingWeight would be ignored on local selfplay hintposes.
  • Now clears export cycle counter when migrating a pytorch model checkpoint to newer versions.
  • Fixed minor bugs updating selfplay file summarize and shuffle script args.
  • Various other minor bugfixes to dev commands and python scripts for training.

Finetuned 9x9 Neural Net

26 Oct 01:35
Pre-release

Marking and leaving this as a 'prerelease' since this is NOT intended to be a release of a new version of KataGo's source code or binaries, but is a release of a new neural net for KataGo!
For the latest binaries and code, see v1.14.0: https://github.com/lightvector/KataGo/releases/tag/v1.14.0

This is a release of a neural net specially trained for 9x9! On 9x9 boards specifically, this neural net is overall much stronger than KataGo's main distributed training nets on katagotraining.org.

Training

It was finetuned from KataGo's main run nets using data generated over several months on 3 strong personal GPUs, from 9x9 games starting in many diverse positions. A large number of 9x9 positions were sampled from various datasets to provide these starting positions:

  • The move tree of a failed attempt at generating a 9x9 opening book earlier this year, which, despite not having good evaluations, extensively covered a wide variety of 9x9 openings.
  • Manually-identified blind spot positions.
  • Top-level bot games from CGOS.
  • Human professional and amateur game collections on 9x9.
  • Collections of match games won or lost (but not drawn) by KataGo on 9x9 against other versions of itself, to focus more learning on decisive positions.
  • 9x9 games played between versions of KataGo where one side was heavily computationally advantaged against the other.
  • Various manually specified openings and handicap stone patterns.
  • A tiny number of 7x7 through 11x11 games so that the net didn't entirely forget a basic sense of scaling between board sizes.

Otherwise, the training proceeded mostly the same as KataGo's main run, with essentially the same settings.

Strength

In some fast-game tests with a few hundred playouts per move, this net has sometimes rated as much as 200 Elo stronger on 9x9 boards than KataGo's main run neural nets when using a diverse set of forced openings.

However, it's not easy to be precise because the exact amount can depend a lot on settings and the particular forced opening mixture used for games. For any particular opening or family of openings on 9x9, at top levels you can often get major nonlinearities or nontransitivities between various bots depending on what openings they just so happen to play optimally or not. This is especially the case when having bots just play from the empty board position rather than using a predetermined book or opening mixture, since the games will often severely lack opening diversity.

Also, on 9x9, because bots are strong enough that the game is highly drawish (at fair komi), the Elo difference can depend heavily on the number of visits used: both sides approach optimal play with more visits and draw increasingly often, leaving fewer decisive games.
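
To make the draw effect concrete, here is a small sketch using the standard Elo expected-score formula (counting a draw as half a point); it is purely illustrative and is not how KataGo's own test matches compute ratings.

    import math

    def elo_diff(wins, losses, draws):
        # Elo gap implied by a match record, counting each draw as half a point.
        games = wins + losses + draws
        expected = (wins + 0.5 * draws) / games
        return 400 * math.log10(expected / (1 - expected))

    # The same 60-40 edge in decisive games implies a much smaller Elo gap
    # once the match is diluted by a large number of draws:
    print(round(elo_diff(60, 40, 0)))    # ~70 Elo
    print(round(elo_diff(60, 40, 400)))  # ~14 Elo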

Overall though, the net generally seems more accurate and efficient at judging common 9x9 positions.

Other Notes and Caveats

  • Don't use this net on board sizes other than 9x9! At least, don't do so while expecting it to be good; you could still do so for fun. It will in fact run on 19x19, but its 19x19 evaluations have degraded and drifted noticeably away from fair, since months of training repurposed its capacity for 9x9 while it forgot about 19x19. It also seems to have forgotten some important joseki lines, and it has probably gotten worse at large-scale fights and big dragons as well.

  • Since it is a different net with randomly different blind spots and quirks, even on size 9x9, this finetuned net probably also has a small proportion of variations that it evaluates or plays worse than KataGo's main run nets. On average it should be much better, but of course it will not always be better.

  • One fun feature is that this net also has a little bit of training for 9x9 handicap games, including the "game" where white has a 78.5 komi** while black has 4 or 5 handicap stones, such that white wins if they live basically anywhere. This training did not reach convergence, but got far enough that if you try searching with a few million playouts, the results are pretty suggestive that white can live if black starts with all four 3-3 points, but not if black gets a fifth stone anywhere reasonable.

(**Area scoring, with 0 bonus for handicap stones as in New Zealand or Tromp-Taylor rules. If you use Chinese rules, you'll need a lower komi due to the extra compensation of N points for N handicap stones, and if you use Japanese rules you'll need a lower komi since the black stones themselves occupy space and reduce the territory. Also, leaving a buffer of a few points below 9x9 = 81, like choosing 78.5 instead of 80 or 80.5, is a good idea so that the net is solidly in the "if I make a living group I win" regime and well separated from "actually I always win even if I lose the whole board".)
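
A minimal sketch of the area-scoring arithmetic behind that buffer, assuming black and white area sum to the full 81 points of a 9x9 board (ignoring dame and seki); this is not taken from KataGo's code.

    def white_wins(white_area, komi=78.5, board_points=81):
        # Area scoring with no per-stone handicap compensation: white scores
        # white_area + komi, black scores the rest of the board.
        black_area = board_points - white_area
        return white_area + komi > black_area

    print(white_wins(0))  # False: white must still live somewhere to win
    print(white_wins(2))  # True: even a tiny living group is enough
    # With komi 80.5 the threshold drops to a single point of area, uncomfortably
    # close to "white wins without living at all", hence the buffer below 81.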

Minor test/build fixes

25 May 19:17

This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!

This release v1.13.2 fixes some automated tests for Homebrew or other automated builds. It doesn't involve relevant changes for ordinary users, so please see release v1.13.0 for the older release, for getting started with KataGo, and for info on the many changes and improvements in v1.13.x! TensorRT users should also see v1.13.1.

Although there are no new builds offered for download on this page, if you're building from source this tag v1.13.2 is still a fine version to build from.

TensorRT bugfix

24 May 14:45

This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!

This is a quick bugfix release specific to the TensorRT version of KataGo: it fixes the plan cache to avoid naming conflicts with older versions and improves error checking, which may affect some users who build the TensorRT version from source.

Better models and search and training, many improvements

23 May 04:11

This release is outdated, see https://github.com/lightvector/KataGo/releases/tag/v1.14.0 for a newer release!

For the TensorRT version, download it from v1.13.1 which is a quick bugfix release specific to the TensorRT version and which should only matter for users doing custom builds, but for clarity has been minted as a new release.

You can find the latest neural nets at https://katagotraining.org/. This release also features a somewhat outdated but stronger net using the new "optimistic policy" head introduced in v1.13.0, attached below; the latest nets at katagotraining.org will also start including this improvement soon.

Attached here are "bs29" versions of KataGo. These are just for fun, and don't support distributed training but DO support board sizes up to 29x29. They may also be slower and will use much more memory, even when only playing on 19x19, so you should use them only when you really want to try large boards.

The Linux executables were compiled on a 20.04 Ubuntu machine. Some users have encountered issues with libzip or other library compatibility issues in the past. If you have this issue, you may be able to work around it by compiling from source, which is usually not so hard on Linux, see the "TLDR" instructions for Linux here.

Changes in v1.13.0

Modeling improvements

  • Optimistic policy - improved policy head that is biased to look more for unexpectedly good moves. A one-off neural net using this policy head is attached below, KataGo's main nets at https://katagotraining.org/ will begin including the new head soon as well.

  • Softplus error scaling - supports new squared softplus activations for value and score error predictions, as well as adjusted scaling of gradients and post-activation for those predictions, which should fix some rare outliers in overconfidence in these predictions as well as large prediction magnitudes that might result in less-stable training.

Search improvements

  • Fixed a bug with determining the baseline top move at low playouts for policy target pruning, which could cause KataGo at low playouts on small boards to sometimes play extremely bad moves (e.g. the 1-1 point).

  • For GTP and analysis, KataGo will automatically cap the number of threads at about 1/8th the number of playouts being performed, to prevent the worst cases where accidentally configuring too many threads destroys search quality when testing KataGo with low settings. To override this, you can set the config parameter minPlayoutsPerThread (see the sketch below).
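
A rough sketch of the described capping behavior (not the actual C++ implementation; the exact default and rounding are assumptions), mainly to show how minPlayoutsPerThread interacts with the configured thread count:

    def effective_threads(num_search_threads, max_playouts, min_playouts_per_thread=8):
        # Cap threads so each thread gets at least min_playouts_per_thread playouts.
        cap = max(1, max_playouts // min_playouts_per_thread)
        return min(num_search_threads, cap)

    print(effective_threads(64, 200))     # 25: many threads at low playouts get capped
    print(effective_threads(64, 200, 1))  # 64: lowering minPlayoutsPerThread relaxes the cap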

KataGo GTP/match changes

These are changes relevant to users running bots online or katago internal test matches.

  • Added support for automatically biasing KataGo to avoid moves it played recently in earlier games, giving more move variety for online bots. See the "Automatic avoid patterns" section in cpp/configs/gtp_example.cfg.

  • Updated the behavior of ogsChatToStderr=true for gtp2ogs version 8.x.x (https://github.com/online-go/gtp2ogs), for running KataGo on OGS.

  • Added a new config parameter gtpForceMaxNNSize that may reduce performance on small boards, but avoids a lengthy initialization time when changing board sizes, which is necessary for clients that may toggle the board size on every turn, such as gtp2ogs 8.x.x's pooling manager (see the config sketch after this list).

  • Fixed a segfault with extraPairs when using katago match to run round-robin matches (#777), removed support for blackPriority pairing logic, and added extraPairsAreOneSidedBW to allow one-sided colors for matches.
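
For reference, a one-line config sketch for the board-size option above; the parameter is assumed here to be a simple boolean, so confirm the exact form in cpp/configs/gtp_example.cfg.

    # GTP config: useful for clients like gtp2ogs 8.x.x that may change board size every turn
    gtpForceMaxNNSize = true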

Analysis engine

Python code and training script changes

These are relevant to users running training/selfplay. There are many minor changes to some of the python training scripts and bash scripts this release. Please make backups and test carefully if upgrading your training process in case anything breaks your use case!

  • Model configs now support version "12" (corresponding to optimistic policy above) and "13" and "14" (corresponding to softplus error scaling above). Experimental scripts migrate_optimistic_policy.py and migrate_softplus_fix.py and migrate_squared_softplus.py are provided in python/ for upgrading an old version "11" model. You will also need to train more if you upgrade, to get the model to re-converge.

  • The training python code python/train.py now defaults to using a lot of parameters that KataGo's main run was using and that were tested to be effective, but that were NOT default before. Be advised that upgrading to v1.13.0 with an existing training run may change various parameters due to using the new defaults, possibly improving them, but nonetheless changing them.

  • Altered the format of the summary json file output by python/summarize_old_selfplay_files.py and which is called by the shuffler script in python/selfplay/shuffle_loop.sh to cache data and avoid searching every directory on every shuffle. The new format now tracks directory mtimes, avoiding some cases where it might miss new data. For existing training runs, the new scripts should seamlessly load the old format and upgrade it to the new format, however, after having done so, pre-v1.13.0 training code will no longer be able to read that new format if you then try to downgrade again.

  • Rewrote python/selfplay/synchronous_loop.sh to copy and run everything out of a dated directory to avoid concurrent changes to the git repo checkout affecting an ongoing run, and also improved it to use a flag -max-train-bucket-per-new-data and other flags to better prevent overfitting without having to so carefully balance games/training epochs size.

  • Overhauled documentation on selfplay training to be current with the new pytorch training introduced earlier in releases v1.12.x and to also recommend use of -max-train-bucket-per-new-data and related parameters that were not previously highlighted, which give much easier control over the relative selfplay vs training speed.

  • Removed confusing logic in the C++ code to split out part of its data as validation data (maxRowsPerValFile and validationProp parameters in selfplay cfg files no longer exist). This was not actually used by the training scripts. Instead, the shuffle script python/selfplay/shuffle.sh continues to do this with a random 5% of files, at the level of whole npz data files. This can be a bit chunky if you have too few files, to disable this behavior and just train on all of the data, pass the environment variable SKIP_VALIDATE=1 to shuffle.sh.

  • Removed support for self-distillation in python/train.py.

  • Significantly optimized shuffling performance for large numbers of files in python/shuffle.py.

  • Fixed a bug in the shuffler's internal file naming that prevented it from shuffling .npz files that were themselves produced by another shuffling.

  • Fixed a bug in python/train.py where -no-repeat-files didn't always prevent repeats.

  • Selfplay process now accepts hintpos files that end in .bookposes.txt and .startposes.txt rather than only .hintposes.txt.

  • Removed unnecessary/unused and outdated copy of sgfmill from this repo. Install it via pip again if you need it.

  • Standardized python indentation to 4 spaces.

  • Various other flags and minor cleanups for various scripts.

Training logic changes

  • KataGo now clamps komi less aggressively when initializing the rules training, allowing for more games to teach the net about extreme komi.

  • Added a few more bounds on recorded scores for training.

Book generation changes

These are relevant to users using katago genbook to build opening books or tsumego variation books. See cpp/configs/book/genbook7jp.cfg for an example config.

  • Added some new config parameters bonusPerUnexpandedBestWinLoss and earlyBookCostReductionFactor and earlyBookCostReductionLambda for exploring high-value unexplored moves, and for expanding more bad early moves for exploring optimal play after deliberately bad openings.

  • Added support for expanding multiple book nodes per search, which should be more efficient for generating large books. See new parameters minTreeVisitsToRecord etc. in the example config.

  • Added some other minor book-specific search parameters.

  • Fixed a bug where the book would report nonsensica...


Various bugfixes for search and training

18 Feb 05:10

This release is not the latest release, see newer release v1.13.0!

New Neural Net Architecture Support (release series v1.12.x)

As with prior releases in the v1.12.x series, this release of KataGo supports a recently-added new neural net architecture! See the release notes for v1.12.0 for details! The new neural net, "b18c384nbt", is also attached in this release for convenience; for general analysis use it should be similar in quality to recent 60-block models, but run significantly faster due to being a smaller net. For other recent trained nets, download them from https://katagotraining.org/.

What's Changed in v1.12.4

This specific release, v1.12.4, addresses a variety of small bugs or behavioral oddities in KataGo that should improve some rare issues for analysis, and improve the training data:

  • Added a crude hack to mitigate an issue where, in positions with a large misevaluation by the raw net that search could normally fix, if the search happened to try the unlikely move of passing, the opponent passing in response could prevent or greatly delay the search from converging to the right evaluation. This is controlled by the new config parameter enablePassingHacks, which defaults to true for GTP and analysis and false elsewhere (see the config sketch after this list).
  • Changed the search to be more aware, when determining whether the game has ended within its own search variations, of the difference between computer-like rulesets that require capturing dead stones before the game ends and human-like rulesets that don't. This and the above passing hack are intended to address a rare behavioral oddity newly discovered in recent KataGo versions in the last week or two prior to this release.
  • Fixed a bug where komi was accidentally initialized to be inverted when generating training data from existing board positions where White moved first rather than Black.
  • Fixed a bug where when hiding the history for the input to the neural net, the historical ladder status of stones would not get hidden, leaking information about past history.
  • Fixed a bug in parsing the komi on certain rules strings (thanks @hzyhhzy).
  • Updated the genconfig command's produced configs to match the new formatting and inline documentation for GTP configs introduced in an earlier release.
  • Minor fixes and features for the tools for generating and handling hint positions for custom training.
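
A one-line config sketch of the passing-hack switch described above, based only on the stated defaults (a boolean that is true for GTP and analysis, false elsewhere); check the example configs before relying on it.

    # GTP or analysis config; defaults to true there, so set it explicitly only to change it
    enablePassingHacks = true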

OpenCL and TensorRT Bugfixes

22 Jan 14:26

This release is not the latest release, see newer release v1.12.4!

New Neural Net Architecture Support (release series v1.12.x)

As with prior releases in the v1.12.x series, this release of KataGo supports a recently-added new neural net architecture! See the release notes for v1.12.0 for details! The new neural net, "b18c384nbt", is also attached in this release for convenience; for general analysis use it should be similar in quality to recent 60-block models, but run significantly faster due to being a smaller net.

What's Changed in v1.12.3

This specific release, v1.12.3, fixes a few additional bugs in KataGo:

  • Fixes a performance regression for some GPUs on TensorRT that was introduced along with v1.12.x (thanks @hyln9!) (#741)
  • Mitigates a long-standing performance bug on OpenCL where, on GPUs using dynamic boost or dynamic clock speeds, the GPU tuner would not get accurate timings due to the variable clock speed, most notably causing the tuner on a few users' machines to fail to select FP16 tensor cores even when the GPU supported them and they would give much better performance. Most users will not see an improvement, but a few may see a large one. The fix adds some additional computation to the GPU during tuning so that it is less likely to reduce its clock speed. (#743)
  • Fixes an issue where, depending on settings, in GTP or analysis KataGo might fail to treat two consecutive passes as ending the game within its search tree.
  • Fixes an issue in the pytorch training code that prevented models from being easily trained on variable tensor sizes (i.e. max board sizes) in the data.
  • Contribute command in OpenCL will now also pretune for the new b18c384nbt architecture the same way it pretunes for all other models.

New Neural Net Architecture! (a few more followup bugfixes)

11 Jan 05:09

This release is not the latest release, see newer release v1.12.3 for further bugfixes!

This is a bugfix release following a release of KataGo that supports a new neural net architecture, v1.12.0!
If you want to know more about the improvements and/or other API changes, check the release notes there!

Users of the TensorRT version upgrading to this version of KataGo will also need to upgrade from TensorRT 8.2 to TensorRT 8.5

If you're a new user, don't forget to check out this section for getting started and basic usage! If you don't know which version to choose (OpenCL, CUDA, TensorRT, Eigen, Eigen AVX2), read this: https://github.com/lightvector/KataGo#opencl-vs-cuda-vs-tensorrt-vs-eigen

Also, KataGo is continuing to improve at https://katagotraining.org/ and if you'd like to donate your spare GPU cycles and support it, it could use your help there!

Changes

In addition to the bugfix to TensorRT computing incorrect values in v1.12.1, this release:

  • Fixes some major issues in OpenCL (not just TensorRT) where the OpenCL tuner could sometimes select extremely poorly performing or even outright failing parameters.
  • Upgrades TensorRT from 8.2 to 8.5, substantially improves loading and timing-cache initialization times for multi-GPU machines, removes the TensorRT backend's dependency on CUDNN, and supports newer GPUs. Thanks to @hyln9 for all of this work!
  • Adds some support in config parsing to be able to specify file paths, passwords, or other strings with hash signs or trailing spaces.
  • Adds some better internal tests and error checking for contributing data to the public run.

New Neural Net Architecture! (and bugfix for TensorRT)

08 Jan 16:31

Particularly for OpenCL users, see v1.12.2 for a newer release that fixes various performance bugs.

This is a quick followup bugfix for a release of KataGo that supports a new neural net architecture, v1.12.0!
If you're a new user, or want to know more about the improvements and/or other API changes, check the release notes there!

The bug fixed in this release, v1.12.1, is specific to the TensorRT backend. The prior version v1.12.0, when using the new net, would compute incorrect evaluations in TensorRT and/or potentially play bad moves in some positions.