```rust
// Accuracy of the dot product depends on the size of the components
// of the vectors.
// Imagine that each `x_i` can vary by `є * |x_i|`. Similarly for `y_i`.
// (Basically, it's accurate to ±(1 + є) * |x_i|.)
// Error for a sum `x + y` is `є_x + є_y`. Error for a product is `є_x * y + є_y * x`.
// See: https://www.geol.lsu.edu/jlorenzo/geophysics/uncertainties/Uncertaintiespart2.html
// The multiplication of `x_i` and `y_i` can vary by `(є * |x_i|) * |y_i| + (є * |y_i|) * |x_i|`.
// This simplifies to `2 * є * |x_i| * |y_i|`.
// So the error for the sum of all the multiplications is `2 * є * sum(|x_i| * |y_i|)`.
fn max_error<T: Float + AsPrimitive<f64>>(x: &[f64], y: &[f64]) -> f32 {
    let dot = x
        .iter()
        .cloned()
        .zip(y.iter().cloned())
        .map(|(x, y)| x.abs() * y.abs())
        .sum::<f64>();
    (2.0 * T::epsilon().as_() * dot) as f32
}
```
However, in IEEE 754 the spacing between representable floating-point numbers is not constant, and it is not linear in the value either. For values close to 1 the spacing is small (i.e., they have high precision), while for very large values the gap between consecutive representable numbers can be quite large (i.e., the precision is lower). The reason is that the granularity of the fraction part is amplified by the exponent part, and larger values have larger exponents.
So `T::epsilon().as_() * dot` might not be appropriate here.
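The magnitude dependence described above can be checked with a small std-only sketch (the `ulp` helper below is hypothetical, not part of the code under discussion): the gap between consecutive `f32` values grows with the value's magnitude, tracking `f32::EPSILON` scaled by that magnitude.

```rust
// Sketch: the gap (ULP) between consecutive f32 values grows with
// magnitude, roughly as EPSILON * value.
fn ulp(x: f32) -> f32 {
    // Next representable f32 above a finite, positive `x`, obtained
    // by incrementing the bit pattern, minus `x` itself.
    f32::from_bits(x.to_bits() + 1) - x
}

fn main() {
    for &v in &[1.0f32, 1024.0, 1.0e6, 1.0e12] {
        // The printed ULP stays within a factor of ~2 of EPSILON * v.
        println!(
            "value = {v:e}, ulp = {:e}, EPSILON * value = {:e}",
            ulp(v),
            f32::EPSILON * v
        );
    }
}
```

Note that `є * |x_i|` in the comments is already a magnitude-relative bound, which is exactly how this per-value spacing behaves.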
> // The multiplication of `x_i` and `y_i` can vary by `(є * |x_i|) * |y_i| + (є * |y_i|) * |x_i|`.
This may also have implications, since the `є * є` terms (dropped in this first-order approximation) can accumulate over large vector dimensions.
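A rough way to see accumulation with dimension (a sketch with hypothetical helper names and deterministic fake data, not the actual lance code) is to compare a naive `f32` dot product against an `f64` reference, alongside the per-multiplication bound `2 * є * sum(|x_i| * |y_i|)` from the comment:

```rust
// Naive f32 dot product, summed in f32.
fn dot_f32(x: &[f32], y: &[f32]) -> f32 {
    x.iter().zip(y).map(|(a, b)| a * b).sum()
}

// Reference: same inputs, accumulated in f64.
fn dot_f64(x: &[f32], y: &[f32]) -> f64 {
    x.iter().zip(y).map(|(&a, &b)| a as f64 * b as f64).sum()
}

fn main() {
    for &n in &[8usize, 1 << 10, 1 << 16] {
        // Deterministic pseudo-data in [0, 1); no RNG crates needed.
        let x: Vec<f32> = (0..n).map(|i| ((i * 131) % 1000) as f32 / 1000.0).collect();
        let y: Vec<f32> = (0..n).map(|i| ((i * 37 + 7) % 1000) as f32 / 1000.0).collect();
        let err = (dot_f32(&x, &y) as f64 - dot_f64(&x, &y)).abs();
        // Per-multiplication bound: 2 * є * sum(|x_i| * |y_i|). It ignores
        // the rounding of the n - 1 additions, which also grows with n.
        let bound = 2.0 * f32::EPSILON as f64
            * x.iter().zip(&y).map(|(&a, &b)| (a.abs() * b.abs()) as f64).sum::<f64>();
        println!("n = {n:6}: |err| = {err:.3e}, bound = {bound:.3e}");
    }
}
```

For small `n` the observed error sits comfortably under the bound; as `n` grows, the summation rounding that the bound does not model becomes a larger share of the total.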
https://github.com/lancedb/lance/actions/runs/8794212752/job/24133303595