
Releases: lightvector/KataGo

Igo Hatsuyoron 120 Special Net

06 Dec 15:07
Pre-release

This is not exactly a release of KataGo itself, but rather an upload of a specially trained neural net that should understand Igo Hatsuyoron problem 120 and be able to provide strong analysis of it. This is the same neural net that was behind the analysis here: https://lifein19x19.com/viewtopic.php?f=18&t=16995

You should be able to run it yourself with existing versions of KataGo (https://github.com/lightvector/KataGo/releases/tag/v1.2) by using it in place of the usual KataGo neural net file.
Note: if you use it, don't trust KataGo's score prediction; on this problem it will not be good. Instead, adjust komi and watch the winrate, as in the sketch below.
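
For instance, a minimal sketch of a session (the model filename here is illustrative, and the analysis step assumes your build supports the Lizzie-style lz-analyze GTP extension):

    ./katago gtp -model igo120-net.txt.gz -config gtp_example.cfg
    # at the GTP prompt: raise komi until the winrate hovers near 50%
    komi 150
    # watch the reported winrate, not the score estimate
    lz-analyze 100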

It should also still play normal games at a superhuman level, although I have not directly tested its normal-game strength. Due to its focused training on this one tsumego, it is probably weaker than the standard 20-block net (here) that has been out for a while. But if it somehow turns out to be stronger, that wouldn't entirely surprise me - feel free to try it!

Enjoy!

OpenCL, Windows Support, other features and fixes

20 Jul 18:59

As of this version, OpenCL is now working! Compiling for Windows with MSVC is now supported, with instructions in the main README, and pre-compiled Windows binaries are attached to this release.

If you have questions or run into issues, feel free to open an issue on this GitHub repo, or, for possibly more interactive feedback, check out the Leela Zero Discord (which is also home to a fair amount of KataGo-related discussion).

Notes about OpenCL vs CUDA:

  • Upon first startup, the OpenCL version will do a bunch of tuning to optimize parameters for your GPU. This might take a while and is completely normal.
  • You should probably take a look at the notes at the top of gtp_example.cfg and play around with some of the parameters for best performance. If you have a strong GPU, it is quite possible that you could gain a factor of 2x-4x in performance by adjusting the number of threads or other parameters to the optimal values for your system (see the sketch after this list). Unlike the OpenCL-specific GPU parameters, these values are not automatically tuned; future versions of KataGo will probably include better support or automation for tuning them as well.
  • If you have more than one OpenCL device (e.g. some Intel CPU and also a proper GPU) and KataGo is choosing the wrong one, edit openclDeviceToUse in gtp_example.cfg.
  • The OpenCL version has in all likelihood NOT been optimized as heavily as LZ has been, so you should not expect it to be nearly as fast: it's new, and KataGo is the first time I (lightvector) have ever written GPU code in any form, so I am not particularly expert yet. :)
  • If you have an NVIDIA GPU, the CUDA version could be faster (or not! it varies). For the CUDA version, you will need to install CUDA 10.1 (https://developer.nvidia.com/cuda-toolkit) and CUDNN 7.6.1 (https://developer.nvidia.com/cudnn) on your own. After installing CUDA, you will also need to restart your computer for the installation to take effect (CUDA does not tell you this).
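
For reference, here is a sketch of the kind of lines you might adjust in gtp_example.cfg (the values below are purely illustrative; tune them for your own hardware):

    # Number of search threads; often the single biggest performance lever.
    numSearchThreads = 6
    # Which OpenCL device to use, if KataGo picks the wrong one by default.
    openclDeviceToUse = 0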

Enjoy!

Where to get the neural nets

For pre-trained neural net models, see the prior release.

Changes

Aside from OpenCL and Windows support, minor changes include:

  • Use a smaller NN buffer for boards smaller than 19x19, improving performance.
  • Fixed a bug that was preventing ".gz" files from loading properly on Windows.
  • analysisPVLen can now configure the maximum length of analysis variations.
  • dynamicScoreZeroWeight can experimentally configure KataGo's score utility, intended for handicap games (see the sketch after this list).
  • Resignation is enabled by default, along with minor changes to the example GTP config params.
  • KataGo now builds as katago or katago.exe by default instead of main.
  • KataGo now supports the loadsgf GTP command.
  • GTP extension commands are now documented.
  • A wide variety of bugfixes and internal cleanups.
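
As a sketch, the two new config options above might look like this in your GTP config (values purely illustrative; see the comments in gtp_example.cfg for guidance):

    # Maximum number of moves shown in each analysis variation.
    analysisPVLen = 15
    # Experimental score-utility adjustment, intended for handicap games.
    dynamicScoreZeroWeight = 0.2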

Update (2019-07-19) - a few more changes:

  • Unless explicitly configured otherwise (in gtp.cfg), KataGo now shares OpenCL tuning across all board sizes, since tuning can be a bit slow to redo for every board size.
  • Added precompiled Linux binaries. These binaries are mostly dynamically linked, so you will still have to ensure you have the appropriate shared libraries (and CUDA and CUDNN for the CUDA version); see the check below.
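
On Linux, a quick way to check that the needed shared libraries are present (a standard system tool, nothing KataGo-specific):

    ldd ./katago    # any line printing "not found" names a library you still need to install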

Update (2019-07-20) - out of beta now! A few more changes:

  • Added some logic to select a reasonable device for OpenCL by default.
  • Fixed bug where evaluation sign was flipped in log files.
  • Fixed bug where trying to make a move past the end of the game would return an illegal move.
  • Fixed some handling of how handicap games were treated with respect to komi updates and board clearing.

OpenCL, Windows Support, other features and fixes (beta)

18 Jul 06:35

See updated release at https://github.com/lightvector/KataGo/releases/tag/v1.2

This was a beta version put up for testing first; it has since been superseded (and also I screwed up some editing on GitHub, so I lost the beta version of the release message and files here).

Strong Neural Net, LCB, and many bugfixes

18 Jun 04:05

KataGo has a new run ("g104") that has reached a level as strong as or slightly stronger than LZ-ELFv2! As a result of improved hyperparameters and other training improvements, starting from scratch this run surpassed the old 1-week run from the previous release in only 3.5 days, and reached perhaps slightly stronger than ELF and near LZ200 after a total of 19 days on an average of fewer than 27 Nvidia V100 GPUs.

g104 models are trained on square boards from 9x9 to 19x19. They will probably generalize somewhat to other sizes, including rectangular boards, but are not guaranteed to play well on boards outside that range. They are trained on area scoring, positional and situational superko, and suicide both on and off; they will behave erratically under other scoring rules or ko rules.
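
For concreteness, a sketch of rules settings matching what these nets saw in training, using the rules options from the example GTP config:

    koRule = POSITIONAL            # or SITUATIONAL; SIMPLE was not used in training
    scoringRule = AREA             # TERRITORY will give erratic play with these nets
    multiStoneSuicideLegal = true  # both true and false were seen in training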

For match games, this release also features an implementation of LCB (leela-zero/leela-zero#2282), giving a noticeable boost in match strength, as well as a variety of bugfixes and improvements. Self-play training data written by this release is not compatible with training data produced by the prior release, due to a slight format change in the .npz data files.
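
Roughly, the LCB idea is to pick the move maximizing a lower confidence bound on its winrate rather than raw visit count. Schematically (a sketch of the general idea, not KataGo's exact code):

    LCB(move) = winrate(move) - z * stddev(move) / sqrt(visits(move))

where z is a confidence quantile, so lightly-visited moves are penalized until their winrate estimates firm up.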

For the full history of models and training data (rather than only the strongest of each size), see here. If you are curious to look at SGFs of the self-play games, they are included as well.

Also attached for convenience is a precompiled executable for Linux, the main OS I've tested on; for now it still requires CUDA 10 and CUDNN to be installed. Compiling it yourself may be a bit more flexible though - I've gotten a working compile on a system with CUDA 9, for example. Support for other OSes, including Windows, and elimination of the dependence on CUDA is being worked on; it is not ready yet, but should not be too much longer.

Enjoy!

NOTE: If you're on Windows and want to try a pre-compiled version that another user has managed to get working there, you can check out #2. Among other things, you will need to explicitly unzip the neural net ".txt.gz" file to get the neural net to load properly in that version (see the sketch below), and you should also ensure you have an up-to-date version of your GPU drivers and the right version of cudnn64_7.dll. More official Windows support is still in progress.
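
A sketch of that unzip step on a Unix-like shell (the filename is illustrative; on Windows, a tool such as 7-Zip can extract ".gz" files):

    gzip -d my-katago-net.txt.gz    # produces my-katago-net.txt for the engine to load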

(edit: 2019-06-18, 2019-06-19 - bumped the tag for this release with some GTP bugfixes/improvements)

Initial release

27 Feb 00:22

Initial release of KataGo as of the completion of its initial main run ("g65"), which reached roughly the strength of LZ130 in a week on a few dozen GPUs.

Included is the strongest neural net of each size from the run; see README_models.txt for details. These neural nets should be ready to use with the code once compiled. Note that although the KataGo code and the self-play training theoretically support territory scoring (i.e. Japanese-style scoring) and simple ko rules (as opposed to just superko), these neural nets were not trained with such rules and will likely only work properly with area scoring and superko.

For the full history of models and training data (rather than only the strongest of each size), see here.

(edit April 30: Adjusted directory structure of zipped models slightly to be a bit friendlier to latest python code)