
Releases: lightvector/KataGo

Bigger board sizes (just for fun)

16 Apr 01:52
Pre-release

This is a just-for-fun side release of KataGo with a simple change to support board sizes up to 29x29. It is not otherwise part of the "main" branch of development, because supporting larger board sizes currently has a major cost (which is why the normal KataGo releases don't support them by default).

See the releases page for more recent versions and newer, stronger neural nets.

Please keep these notes in mind if using this side release:

  • You should only use this version if you specifically want to play larger sizes.
    • Memory usage will be significantly increased, and performance may be slightly decreased, in this version.
    • This effect on memory and performance applies even when using this version on board sizes 19x19 and smaller!
  • KataGo's neural nets are NOT trained for sizes above 19x19 and might behave poorly in some cases.
    • However, they have had lots of training on sizes 9x9 through 19x19, so even though there is no guarantee, most of the released nets can probably extrapolate somewhat beyond 19x19 and still usually play well.
    • How well different nets extrapolate might vary; there is even some chance that larger nets extrapolate worse than smaller nets. In practice, they all mostly still seem very, very strong.
    • The smaller the extrapolation (e.g. 21x21 or 23x23, rather than 27x27 or 29x29), the better and stronger the net is likely to remain.
    • Board evaluation may be somewhat more biased or noisy on the larger boards, particularly the precise estimation of the score in points, and the net could possibly fail to perceive and understand life/death/capture races for extremely large dragons.
  • The GTP protocol, which is the universal language that engines like KataGo use to talk to GUIs, does NOT normally support sizes larger than 25x25, so many GUIs might not work.
    • This is because the protocol was designed to use alphabetic coordinates, with no specification for going beyond "Z".
    • KataGo will continue with "AA", "AB", etc. (still skipping any use of the letter "I", for consistency), but it is quite likely that many GUIs will not have this implemented and therefore will not work with sizes past 25x25. (See the sketch after this list.)
    • For example, Lizzie doesn't appear to work beyond 25x25 currently.
  • On lower-end GPUs, there is some chance that the biggest nets fail, for example by running your GPU out of memory on such a huge board.
    • You might have to switch to a smaller net if you run into such issues.
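
To make the coordinate extension concrete, here is a minimal sketch of the column-naming scheme described above. This is illustrative Python, not KataGo's actual code, and it assumes that two-letter names also skip the letter "I":

```python
# Column labels in GTP skip "I", leaving 25 single letters.
LETTERS = "ABCDEFGHJKLMNOPQRSTUVWXYZ"

def column_name(col):
    """Return the GTP column label for a 0-indexed column."""
    if col < len(LETTERS):
        return LETTERS[col]
    # Beyond "Z": continue with two-letter names "AA", "AB", ...
    col -= len(LETTERS)
    return LETTERS[col // len(LETTERS)] + LETTERS[col % len(LETTERS)]

# On a 29x29 board, columns run A..Z, then AA, AB, AC, AD:
print([column_name(c) for c in range(29)])
```

A GUI that parses only single-letter columns will fail on a coordinate like "AA4", which is why many GUIs break past 25x25.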

And of course, there is some chance of a bug that makes KataGo itself not work on this branch (separately from any mistakes by the net), since large board sizes aren't tested heavily. If so, you can open an issue or otherwise let me know and I'll fix it. For OpenCL, this version might also re-run its tuning upon first startup.

Keeping the above details in mind though, KataGo should be able to provide some strong play on these larger board sizes even with no training for them, so have fun!

New Neural Nets

16 Apr 01:09
Pre-release

This is not intended to be a release of the main KataGo program, but rather just an update to the neural nets available as the run is ongoing. Also, more recent releases can be found here.

More nets!

These are the new strongest neural nets of each size so far. The 30-block and 40-block nets each gained perhaps 65 Elo according to some partially-underway tests, while the 20-block net gained perhaps 55 Elo, since the last release. These differences do have some uncertainty, though, on the order of +/- 25 Elo (95% confidence), due to measurement noise.
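
For reference, here is a rough sketch of how an Elo difference and a 95% interval like this can be derived from a head-to-head test; the win counts below are invented purely for illustration:

```python
import math

def elo_from_winrate(p):
    """Standard logistic Elo model: 400 * log10(p / (1 - p))."""
    return 400 * math.log10(p / (1 - p))

wins, games = 530, 1000                  # hypothetical test match
p = wins / games
se = math.sqrt(p * (1 - p) / games)      # standard error of the win fraction
print(f"{elo_from_winrate(p):.0f} Elo "
      f"(95% CI roughly {elo_from_winrate(p - 1.96 * se):.0f} "
      f"to {elo_from_winrate(p + 1.96 * se):.0f})")
```

With on the order of a thousand games, the interval comes out to roughly +/- 20-25 Elo, consistent with the uncertainty quoted above.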

You may notice from the filename that the 20-block net being released here is slightly older than the very latest 20-block net. Preliminary testing showed that this one may have gotten lucky with a random fluctuation, as sometimes happens between net versions, and may actually be a little stronger than the very latest, so I packaged up this one instead.

  • g170-b30c320x2-s2846858752-d829865719 - The latest 30-block net.
  • g170-b40c256x2-s2990766336-d830712531 - The latest 40-block net.
  • g170e-b20c256x2-s3761649408-d809581368 - The latest released 20-block net (continuing extended training on games from the bigger nets).

Enjoy!

Bugfix, Analysis Engine Priority

31 Mar 02:01

This is a quick bugfix release following v1.3.4. See the releases page for more recent versions, including the most recent neural nets!

Note if upgrading from v1.3.3 or earlier: the KataGo OpenCL version on Windows will now keep its tuning data in the directory containing the executable, instead of the directory it is run from. This might mean that it will need to re-tune once more if you've been running it from a GUI that invokes it from elsewhere. If so, then as usual, you can run the benchmark to let it tune, as described in: https://github.com/lightvector/KataGo#how-to-use

Changes

  • Fixed a bug in the printsgf GTP command that would cause it to fail for some possible rules configurations, and made rules parsing generally a little more lenient about certain aliases for rulesets.
  • The JSON analysis engine (./katago analysis) now supports an optional priority for queries, documentation here. (See the sketch below.)
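
As a rough sketch of how a client might use the new field (file paths are placeholders; the other field names follow the analysis engine's JSON protocol):

```python
import json
import subprocess

# Launch the analysis engine (placeholder model/config paths).
proc = subprocess.Popen(
    ["./katago", "analysis", "-model", "model.bin.gz", "-config", "analysis.cfg"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

query = {
    "id": "urgent1",
    "moves": [["B", "Q16"], ["W", "D4"]],
    "rules": "tromp-taylor",
    "komi": 7.5,
    "boardXSize": 19,
    "boardYSize": 19,
    "analyzeTurns": [2],
    "priority": 10,  # optional: higher-priority queries are analyzed first
}
proc.stdin.write(json.dumps(query) + "\n")
proc.stdin.flush()
print(proc.stdout.readline())  # one JSON response per analyzed turn
```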

New Nets, Negative PDA, Improved defaults

29 Mar 20:21

If you're a new user, don't forget to check out this section for getting started and basic usage!

Note if upgrading from v1.3.3 or earlier: the KataGo OpenCL version on Windows will now keep its tuning data in the directory containing the executable, instead of the directory it is run from. This might mean that it will need to re-tune once more if you've been running it from a GUI that invokes it from elsewhere. If so, then as usual, you can run the benchmark to let it tune, as described in: https://github.com/lightvector/KataGo#how-to-use

New Neural Nets!

Each of these might be around 40 Elo stronger than those in the previous release of neural nets. Training still continues. :)

  • g170-b30c320x2-s2271129088-d716970897 - ("g170 30 block s2.27G") - The latest 30-block net.
  • g170-b40c256x2-s2383550464-d716628997 - ("g170 40 block s2.38G") - The latest 40-block net.
  • g170e-b20c256x2-s3354994176-d716845198 - ("g170e 20 block s3.35G") - The latest 20-block net (continuing extended training on games from the bigger nets).

KataGo Code Changes this Release

UI and Configs

  • Default configs and models!

    • For commands that need a GTP config file, if there is a file called default_gtp.cfg located in the same directory as the executable, KataGo will load that file by default if -config is not specified.
    • For commands that need a neural net model file, if there is a file called default_model.bin.gz or default_model.txt.gz located in the same directory as the executable, KataGo will load that file by default if -model is not specified.
    • So, if these files are provided, then KataGo's main engine can now be invoked purely via ./katago gtp with no further arguments. (A sketch of this lookup appears after this list.)
  • Adjusted a variety of minor default settings for GTP configs. KataGo should resign slightly more aggressively, display a higher maximum number of moves in its PV, use a better number of visits for genconfig, etc. NOTE: some of these may not take effect without switching to the new GTP configs included in the zip files for this release.
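
Here is a small sketch of the default-file lookup described in the first bullet above (illustrative Python pseudologic, not KataGo's actual C++ implementation):

```python
import os

def resolve(cmdline_value, exe_dir, default_names):
    """Prefer an explicit command-line value, else fall back to a default
    file sitting in the same directory as the executable."""
    if cmdline_value is not None:
        return cmdline_value
    for name in default_names:
        path = os.path.join(exe_dir, name)
        if os.path.exists(path):
            return path
    raise SystemExit("no value on the command line and no default file found")

exe_dir = "."  # stands in for the directory containing the katago executable
config = resolve(None, exe_dir, ["default_gtp.cfg"])
model = resolve(None, exe_dir, ["default_model.bin.gz", "default_model.txt.gz"])
```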

Play Improvements

  • "PDA" scaling should be much more reasonable now on small board sizes than it used to be, hopefully improving handicap game strength on small boards.

  • Negative "PDA" is used by default now for games when KataGo receives handicap stones. It can also be set explicitly in the config for testing non-standard unbalanced openings (but unlike handicap games, for arbitrary unbalanced positions it won't be detected by default). This should greatly improve KataGo's strength in handicap games as Black by causing it to play a little more solidly and safely. Thanks to Friday9i and others in the Leela Zero Discord chat for testing this. Note that the implemented scaling is probably still far from optimal - one can probably obtain better results by tuning PDA to a specific opponent + time control.

  • Added the option avoidMYTDaggerHack = true which, if enabled in the GTP config, will have KataGo avoid a specific opening joseki that current released networks may play poorly. Against certain bots (such as Leela Zero) this also tends to lead to much greater opening variety. (See the sketch below for one way to experiment with these options.)
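
As one way to experiment with the two options above without editing your config file, here is a sketch using the -override-config flag (added in v1.3.3); the paths and the PDA value of -1.0 are purely illustrative, not recommendations:

```python
import subprocess

# Launch KataGo's GTP engine with explicit overrides for the options above.
subprocess.run([
    "./katago", "gtp",
    "-config", "default_gtp.cfg",        # placeholder config path
    "-model", "default_model.bin.gz",    # placeholder model path
    "-override-config",
    "playoutDoublingAdvantage=-1.0,avoidMYTDaggerHack=true",
])
```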

Dev-oriented Improvements

  • Implemented the printsgf GTP command.

  • Implemented a couple of new GTP extensions that involve a mild hack specifically to help support the Sabaki GUI.

  • Removed a lot of old unused or deprecated code, made significant internal refactors, and added a bit of high-level documentation about which parts of the source code implement what.

  • Removed dependence on dirent.h.

Selfplay-training Changes

  • Added a new option to upweight training on positions with surprising evaluations, and a new option to help increase the proportion of "fairly" initialized games during selfplay.

New Neural Nets

14 Mar 16:37
Pre-release

(This is not intended to be a release of the main KataGo program, but rather just an update to the neural nets available as the run is ongoing. A new software release may be out some time later this month or early next month.)

More nets!

These are the new strongest neural nets of each size so far. The 30-block and 40-block nets have gained probably a bit more than 100 Elo since the last release of nets, and the 20-block extended-training net has gained maybe 50 Elo since then.

  • g170-b30c320x2-s1840604672-d633482024 - The latest 30-block net.
  • g170-b40c256x2-s1929311744-d633132024 - The latest 40-block net.
  • g170e-b20c256x2-s2971705856-d633407024 - The latest 20-block net (continuing extended training on games from the bigger nets).

Enjoy!

New Nets, Friendlier Configuration, Faster Model Loading, KGS Support

28 Feb 03:17

If you're a new user, don't forget to check out this section for getting started and basic usage!

More Neural Nets!

After an unfortunately-long pause for a large chunk of February in which KataGo was not able to continue training due to hardware/logistical issues, KataGo's run has resumed!

  • g170-b20c256x2-s2107843328-d468617949 ("g170 20 block s2.11G") - This is the final 20-block net that was used in self-play for KataGo's current run, prior to switching to larger nets. It might be very slightly stronger than the 20 block net in the prior release, "g170 20 block s1.91G".

  • g170-b30c320x2-s1287828224-d525929064 ("g170 30 block s1.29G") - A bigger 30-block neural net! This is one of the larger sizes that KataGo is now attempting to train. Per playout, this net should be noticeably stronger than prior nets, perhaps as much as 140 Elo stronger than "s1.91G". However, at least at low thousands of playouts, it is not yet as strong per equal compute time. But the run is still ongoing. We'll see how things develop in the coming weeks/months!

  • g170-b40c256x2-s1349368064-d524332537 ("g170 40 block s1.35G") - A 40-block neural net, but with fewer channels than the 30-block net! This is the other of the larger sizes that KataGo is now attempting to train. The same applies to this one - it should be stronger at equal playouts, but weaker at equal compute for modest amounts of compute.

  • g170e-b20c256x2-s2430231552-d525879064 ("g170e 20 block s2.43G") - We're continuing extended training of the 20-block net on the games generated by the larger nets, even though it is no longer being used for self-play. This net might be somewhere around 70 Elo stronger than "s1.91G" by some rough tests.

Per playout, and in terms of raw judgment, either the 30-block or 40-block net should be the strongest KataGo net so far, but per compute time, the 20-block extended-training "s2.43G" is likely the strongest net. Extensive testing and comparison has not been done yet though.

The latter three nets are attached below. If you want the first one, or any other currently-released g170 nets, take a look here: https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html

New Model Format

Starting with this release, KataGo is moving to a new model format which is a bit smaller on disk and faster to load, indicated by a new file extension ".bin.gz" instead of ".txt.gz". The new format will NOT work with earlier KataGo versions. However, version 1.3.3 in this release will still be able to load all older models.

If you are using some of the older/smaller nets from this run (for example, the much faster 10 or 15-block extended-training nets) and would like to get ".bin.gz" versions of prior nets, they are also available at: https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html

Other Changes this Release

Configuration and user-friendliness

  • There is a new top-level subcommand that can be used to automatically tune and generate a GTP config, editing the rules, thread settings, and memory usage settings within the config for you, based on your preferences: ./katago genconfig -model <NEURALNET>.gz -output <NAME_OF_NEW_GTP_CONFIG>.cfg. Hopefully this helps newer users, or people trying to set up things on behalf of newer users!

  • All the rules-related options in the GTP config can now be replaced with just a single line rules=chinese or rules=japanese or rules=tromp-taylor or other possible values if desired! As demonstrated in gtp_example.cfg. See the documentation for kata-set-rules here for what rules are possible besides those, and see here for a formal description of KataGo's full ruleset.

  • katago gtp now has a new argument -override-config KEY=VALUE,KEY=VALUE,... that can be used to specify or override arbitrary values in the GTP config on the command line.

  • The OpenCL version will now detect CPU-based OpenCL devices, and might now run on some pure-CPU machines with no GPU.

GTP extensions

  • KataGo now supports KGS's GTP extension commands kgs-rules and kgs-time_settings. They can be used to set KataGo's rules to the settings necessary for the possible rules that KGS games can be played under, as well as traditional Japanese-style byo-yomi, which is very popular on many online servers. See here for some documentation on KataGo's implementation of these commands, and see the sketch after this list for example usage.

  • Added kata-raw-nn GTP extension to dump raw evaluations of KataGo's neural net, documentation in the usual place.
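
Here is a brief sketch of exercising the KGS commands over a pipe. The byo-yomi arguments are main time in seconds, seconds per period, and period count, following the KGS convention; paths are placeholders:

```python
import subprocess

proc = subprocess.Popen(["./katago", "gtp", "-config", "default_gtp.cfg"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def gtp(cmd):
    """Send one GTP command; a GTP response ends with a blank line."""
    proc.stdin.write(cmd + "\n")
    proc.stdin.flush()
    lines = []
    while (line := proc.stdout.readline().strip()) != "":
        lines.append(line)
    return "\n".join(lines)

print(gtp("kgs-rules japanese"))
print(gtp("kgs-time_settings byoyomi 600 30 5"))  # 10 min main, 5 x 30s periods
print(gtp("genmove b"))
```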

Misc

  • Added a mild hack to fix some instability in some neural nets involving passing near the very end of the game, which could cause the reported value to erroneously fluctuate by a percent or two.

  • For those who run self-play training: a new first argument is required for shuffle_and_export_loop.sh and/or export_model_for_selfplay.sh - you should provide a globally unique prefix to distinguish the models in any given run from those of any other run, ideally including runs by other users. This prefix gets displayed in logs, so that if you share your models with others, users can know which model came from where.

  • Various other minor changes and cleanups.

Edit (2020-02-28) - fixed a bug where if for some reason you tried to ungzip the .bin.gz model files instead of loading them directly, the raw .bin could not be loaded. Bumped the release tag and updated the executables.

OpenCL Major Speedup, Defaults, GTP and Other Fixes

02 Feb 05:28

This release should be a significant OpenCL performance improvement for users without NVIDIA tensor core GPUs - namely anything less top-end than an RTX 20xx card or similar. For NVIDIA tensor-core-supporting GPUs, the CUDA version is likely to still be faster though. Also, many other fixes and a few missing features have been added.

NOTE: The new OpenCL implementation will need to re-tune itself again the first time you start this new version, so be patient on the first new startup and/or run it in the console the first time.

If you're a new user, don't forget to check out this section for getting started and basic usage.

New Neural Nets! Yay!

  • g170-b20c256x2-s1913382912-d435450331 ("g170 20 block s1.91G") - A new 20 block net that is yet another 115 Elo (+/- 30) stronger than the previous net. This should be the new strongest KataGo net!

  • g170e-b15c192-s1672170752-d466197061 ("g170e 15 block s1.67G") - This 15-block net is probably the last extended-training 15-block net that KataGo will be producing. It is probably about 20-50 Elo stronger than the previous one, which might put it about on par with ELF OpenGo v2 at equal playouts, for high hundreds or low thousands of playouts. That makes it a very strong net given that it is only 15 x 192 in size, and hopefully ideal for weak-to-moderate hardware.

These are attached below. For all other currently-released g170 nets, they are here: https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html

Notable Changes in This Release

  • Much improved xgemm implementation for OpenCL version - overall OpenCL performance should be improved by 10%-50%, depending on your hardware and threads!

  • All options in the GPU-related sections of the GTP config are also now optional and have better defaults. KataGo will automatically choose a batch size, and on the CUDA version it will automatically detect what flavor of GPU you have and enable or disable FP16 accordingly. Multiple GPUs will not be used automatically, however - if you want to let KataGo use a larger cache to be a little faster, want it to use multiple GPUs, or run into problems with the automatic FP16 choice, you can still override the defaults.

  • The benchmark's thread suggestion is greatly improved (./katago benchmark -config GTP_CONFIG.cfg -model MODEL.txt.gz), based on some new test data. The old version was too conservative, particularly on very strong machines - the new one will be a bit more aggressive about recommending larger numbers of threads.

  • The GTP commands final_status_list and final_score will now use KataGo's neural net to guess an evaluation of the position if invoked when the game is not over or not fully cleaned up. For Japanese-rules games, such as if you're running it on KGS, this should make KataGo able to score and mark dead stones in all common cases (I think)! The heuristics here may still be a bit rough, however, and could possibly behave weirdly in certain sekis, or there may be more basic issues since I haven't specifically gotten set up to test on KGS, so let me know if you run into issues.

  • GTP command fixed_handicap is now supported in KataGo.

  • A few new options for users running selfplay training - can now terminate train.py after a fixed number of epochs, can now terminate gatekeeper once it's done passing a net, can now disable autoreject of old nets.

EDIT: Reverted the automatic use of FP16 on the CUDA version for Pascal-architecture NVIDIA GPUs when cudaUseFP16=auto. FP16 is a mild performance boost for many of these GPUs, but on some setups it might just be a precision loss for little gain, and it could also cause issues for some users. If you have a recentish but non-tensor-core GPU, you can try setting cudaUseFP16=true instead of cudaUseFP16=auto in your GTP config and benchmark it.
EDIT: Also fixed some additional bugs in the GTP protocol regarding races between pondering and other commands, and some long-standing issues with handicap stone handling. Additionally, KataGo will now tolerate handicap stones being placed via alternating black moves and white passes at the start of a game.

Some More Neural Nets

25 Jan 01:20
Pre-release

This is an upload of some stronger neural nets for KataGo... but they have been obsoleted by v1.3.2.
For the latest released version of the code or engine and stronger neural nets, see: https://github.com/lightvector/KataGo/releases/tag/v1.3.2

Available are (click "Assets" below):

  • g170-b20c256x2-s1420141824-d350969033.txt.gz ("g170 20 block s1.42G") - A new 20 block net that is possibly another 100 Elo (+/- 30) stronger than the net released with v1.3.1. This should be the strongest released KataGo net so far!

  • g170e-b15c192-s1305382144-d335919935.txt.gz ("g170e 15 block s1.31G") - A very strong extended-training 15-block net, learning from games played by later 20-block nets. This net is perhaps 350 Elo stronger than the selfplay 15-block net bundled with the v1.3 release, which might put it almost as strong as LZ-ELF OpenGo v2 at equal visits or playouts for small numbers (e.g. around 1000), despite being only 15x192 rather than 20x256. If you're on weaker hardware and prefer a somewhat faster search speed, this might be a nice net to use. Like all v1.3+ nets, it supports all the usual features - board sizes, komi, JP rules, handicap play, etc.

This "release" may be modified directly as the run progresses and more neural nets are available. If you would like to see the full set of publicly released KataGo nets or data so far, see https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html or more broadly https://d3dndmfyhecmj0.cloudfront.net/.

Enjoy!

Better Parameters and Defaults, Bugfixes, Minor GTP stuff

18 Jan 06:45

For stronger neural nets and newer code, see this later release!

This is a quick followup release to the major changes in v1.3. It fixes some bugs and improves a few things.

If you are upgrading from a much older version of KataGo, you probably want to skim over the release notes for v1.3 as well!

Changes

  • You can now delete or comment out the entire bottom half of your GTP config! You can remove everything below Root move selection and biases including all the Internal params. All of these parameters will now use good defaults if not specified. Deleting or commenting them out is recommended so that you pick up any future improvements to these parameters automatically, rather than manually having to update your config. (Except for any specific values that you are deliberately adjusting or experimenting with, of course).

  • Along with the above note, cpuctExplorationLog has been adjusted to 0.4. Based on a few thousand test games, the old value of 0.6 was likely a little too large; the newer value should be slightly stronger at a variety of numbers of playouts. If you are using a v1.3 or older GTP config that hardcodes 0.6 or another value, this change will be picked up automatically if you delete the bottom half of the config as recommended above, or you can manually adjust the value yourself; otherwise you will NOT get this change.

  • Implements lz-genmove_analyze and kata-genmove_analyze GTP extensions.

  • Improves FP16 performance on Pascal-architecture NVIDIA GPUs (hopefully).

  • Fixes a bug in lead estimation training that could cause KataGo to sometimes greatly underestimate the lead if the lead is more extreme than about 70 points, and sometimes cap out around there (such as still reporting 70 when the true lead is actually 100+ points). Earlier g170 neural nets may continue to exhibit this bug in some cases (possibly not consistently), but ongoing nets such as the newer one attached here should be good.

  • Some other very minor fixes and improvements.

New Net

For stronger neural nets than this, see this later release!
Attached here is a newer net from the ongoing run ("g170 20 block s1.04G"). It should be about 120 Elo stronger than the strongest previous net. This net will also work with the prior v1.3 release - the changes in this release are independent and have no effect on the compatibility of KataGo nets.

See the v1.3 release for some more neural nets, or here for all nets from this run released so far.

Have fun!

New Run, JP Rules, Handicap play, Lead Estimation...

13 Jan 03:54

Note: newer version v1.3.2 is out, improving and fixing many things on top of this release!

For stronger neural nets, see also the same later release!

New Run!

KataGo is on a fresh new major run ("g170"). Due to some more improvements, it is training faster and better. It fairly recently caught up to the strength of the run from June, so it is not actually much stronger yet, but catching up seems like a good time for a release, because there are a lot of new features. The main page notes have also been updated.

  • Japanese-like rules, stone-counting rules, and many other combinations of rules. These can be configured in the GTP config as usual, and for developers, they can also be set dynamically using a few new GTP extensions. Note regarding Japanese-like rules: KataGo will usually fill the dame, but might not do so 100% of the time, since under its ruleset this is not strictly required - but for game analysis purposes, it should not matter much. There's some chance it also occasionally gives up a point by playing an unnecessary defense, if it is winning by enough and wants to be safe. Lastly, some kinds of seki positions or double-ko may still confuse it a little - getting all the details right is quite hard! But hopefully in the common case KataGo performs well.
  • Handicap game "PDA" - ("playoutDoublingAdvantage") - a new option to configure KataGo to play much more aggressively (or even overplay a little) in handicap games, for hopefully much stronger handicap play. This is the result of some special training using unbalanced playouts, so that KataGo learns how to fight weaker versions of itself (while the weaker versions simultaneously learn how to play safely and resist).
  • Much better score estimation - KataGo used to report an estimate of the selfplay average score - the average score that it would achieve if it self-played from a position. So, for example, if black were to get an extra stone for free, then older KataGo nets might report +20, even though the stone should be worth only fair_komi * 2 ~= 14 points. Why? Because KataGo correctly predicted that in self-play, the side that was behind might give up 6 more points on average over the rest of the game by taking additional risks to try to swing the game back. The new neural nets are now also trained to estimate the lead - the number of points needed to make the game fair, in this case 14. By default, KataGo with the latest neural nets will now show the estimated lead rather than the estimated selfplay average score, which should be much less confusing.
  • Log-scaling cPUCT - KataGo used to explore too little at large numbers of visits. This change should make KataGo quite a bit stronger at high visits. The differences mostly start to kick in around 3000+ playouts, or 6000+ visits. At around 20000 visits this change might be worth close to +100 Elo, even for older nets. Note: old GTP configs will NOT get this by default; see the notes about upgrading below. (An illustrative sketch follows this list.)
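
As a purely illustrative sketch of the log-scaling idea (not KataGo's exact formula; the 0.4 coefficient echoes the cpuctExplorationLog value discussed in the followup release above, while the base term here is made up), the PUCT exploration coefficient grows with the visit count rather than staying constant:

```python
import math

def cpuct(parent_visits, base=1.0, log_coeff=0.4):
    # Hypothetical form: a constant term plus a logarithmic term, so that
    # exploration keeps scaling up instead of flattening out at high visits.
    return base + log_coeff * math.log10(max(parent_visits, 1))

for n in (100, 1000, 10000, 100000):
    print(n, round(cpuct(n), 2))  # grows by +0.4 per factor-of-10 visits
```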

Some new things for developers:

  • Analysis engine - ./katago analysis -model <NEURALNET>.txt.gz -config <ANALYSIS_CONFIG>.cfg -analysis-threads N - a new batched analysis engine using a JSON protocol that might be much more efficient for many use cases than GTP. Documentation here. (NOTE: if you were already using this engine pre-official-release, the API has changed a little in a non-compatible way; please check the docs for the protocol.)

  • Tensorflow 1.15 and Multi-GPU training - KataGo has upgraded the training scripts to use TF 1.15 and now supports multiple GPUs. Specify something like -multi-gpus 0,1 to train.sh or to train.py. There are also some minor changes to one or two of the training-related scripts - see the main page readme for details.

  • Other Training Changes - By default, KataGo does not use batch norm any more, but you can turn it back on if you like (in python/modelconfigs.py). There are also many changes to training targets, the training data format, and other details that probably break compatibility. So if you already had a training run going, unfortunately you will NOT be able to directly continue training with this new KataGo version, due to all these changes. However, any nets you train on the older version will still work with v1.3. Therefore it should still be possible to bootstrap a new run if you want, by using the older version nets to selfplay enough games with v1.3, and then training an entirely fresh new neural net on that v1.3 selfplay data with the new targets and upgraded TF 1.5 code.

Upgrading

Some notes about upgrading to this v1.3:

  • Old GTP configs should still work. However, there have been quite a few new parameter additions and improvements to the default parameters, and some features like the log-scaling cPUCT require these changes. So it's recommended that you start fresh with the new provided gtp_example.cfg (included with the executables below, or available here), and simply copy over the changes you had made to numSearchThreads, cudaUseFP16, and other settings.

  • The benchmark command may be helpful in testing settings: ./katago benchmark -model <NEURALNET>.txt.gz -config <GTP_CONFIG>.cfg. And if you are changing settings or seemingly running into any problems, it's highly recommended you run this command directly in a terminal/console/command-line window first, rather than running KataGo with a GUI program, to test and see what's going on.

  • If you are using the OpenCL version with a new neural net, it will need to re-tune the first time you run it - which may take a while. In a GUI like Lizzie or Sabaki it may look like a hang - if you want to see the progress of tuning, try running the benchmark command above directly (which will tune first if needed).

New Neural Nets

Two neural nets are attached!

  • g170-b20c256x2-s668214784-d222255714.txt.gz ("g170 20 block s668M") - Supports all the features above, and should be mildly stronger than g104's old 20 block net (unless my tests got very unlucky with measurement noise), making this the strongest KataGo net now, by a little bit.
  • g170e-b10c128-s1141046784-d204142634.txt.gz ("g170e 10 block s1.14G") - A very strong 10 block net. It should be almost equal in strength to g104's old 15 block net, despite being a much smaller size. Since it is smaller, it will run waaay faster, particularly on weaker hardware.

These two above are probably the main ones you might want, but see here for a few additional neural nets if you like. More and stronger nets may be uploaded in the future some time after this run progresses further.

Please feel free to create a Github issue, or to report problems or ask for help in the LZ Discord, if you encounter any issues with the attached executables. If problems are found, I'll try to fix them, recompile, and bump the tag of this release over the next few days.