Releases: google/visqol

v3.3.3

07 Apr 18:05
c3aa2e4

This version is a major update from v3.1.0.

  • Introduces a deep lattice network model for speech mode, which improves predictions over the exponential model by using a multidimensional, partially monotonic mapping from similarity to quality. It is enabled by default (and configurable with --use_lattice_model). There is no lattice model support for audio mode yet, but we will be looking at that for the next version.
  • Adds support for Python via pybind11 (see the test in the python subdirectory for an example, and the sketch after this list). This is supported on Linux and Mac. (Windows only supports C++; Windows developers are welcome to help.)
  • Various bug fixes and code improvements, including DSP fixes that will slightly change the predictions of all models compared to the previous audio and exponential speech models.
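
As an illustration of the items above, here is a minimal sketch of scoring a reference/degraded pair in speech mode through the new pybind11 bindings, where the lattice model mapping is the default. It is modeled on the test in the python subdirectory; the module names (visqol_lib_py, visqol_config_pb2), the VisqolApi methods, and the model path handling are assumptions to be checked against that test.

```python
# Sketch only: score a reference/degraded pair in speech mode via the
# pybind11 bindings. Names follow the python test in this repository and
# should be treated as assumptions.
import numpy as np

from visqol import visqol_lib_py
from visqol.pb2 import visqol_config_pb2

config = visqol_config_pb2.VisqolConfig()
config.audio.sample_rate = 16000          # speech mode operates on 16 kHz input
config.options.use_speech_scoring = True  # speech mode; lattice mapping is on by default
# The similarity-to-quality model ships with ViSQOL; the exact file name and
# install location depend on your build, so this path is a placeholder.
config.options.svr_model_path = "/path/to/visqol/model/<speech_model_file>"

api = visqol_lib_py.VisqolApi()
api.Create(config)

# Replace the zero buffers with real audio: float64 sample arrays in [-1, 1]
# at 16 kHz for both the reference and the degraded signal.
reference = np.zeros(5 * 16000, dtype=np.float64)
degraded = np.zeros(5 * 16000, dtype=np.float64)

result = api.Measure(reference, degraded)
print(result.moslqo)  # predicted MOS-LQO
```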

v3.1.0

15 Oct 21:07
5d7ae5f

This is the first major update since v3.0.0.

  • Introduces an exponential fit for speech mode. This means the scores will differ when using speech mode, but not audio mode. As a result, the conformance version was bumped to 310, and we will apply a v3.1.0 tag.
  • Merges internal Status{,Or} changes from @gjasny's PR #23. In many cases I kept the internal version even though #23 had better formatting, because those changes would be overwritten by the next internal export due to things like alphabetical ordering or different internal naming.
  • Merges the pffft Windows and Linux repositories.