Inconsistencies between mean and mean! when using 0-Arrays. #906

Open

kellertuer opened this issue Dec 18, 2023 · 0 comments

I just stumbled upon an inconsistency between mean and mean!, and I am wondering whether this is a bug, since we use both methods in a setting where I would expect them to behave consistently.
Here is a small example.

using StatsBase
x = [fill(1.0), fill(2.0), fill(3.0), fill(4.0)]  # a vector of four 0-dimensional arrays
w = pweights(ones(length(x)) / length(x))         # uniform probability weights
y = mean(x,w)

which returns a

0-dimensional Array{Float64, 0}:
2.5
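
For reference, y is an ordinary mutable 0-dimensional array holding the scalar result:

y isa Array{Float64, 0}  # true
y[]                      # 2.5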

For consistency, I would have expected mean! to do the same when I pass y as the output, i.e.

mean!(y,x,w)

but it reports

ERROR: ArgumentError: dims argument must be provided
Stacktrace:
 [1] _mean!(R::Array{Float64, 0}, A::Vector{Array{Float64, 0}}, w::ProbabilityWeights{Float64, Float64, Vector{Float64}}, dims::Nothing)
   @ StatsBase ~/.julia/packages/StatsBase/WLz8A/src/weights.jl:657

But even if I provide dims, the following call fails as well:

mean!(y,x,w; dims=1)
ERROR: MethodError: no method matching +(::Float64, ::Array{Float64, 0})
For element-wise addition, use broadcasting with dot syntax: scalar .+ array

Closest candidates are:
  +(::Any, ::Any, ::Any, ::Any...)
   @ Base operators.jl:578
  +(::T, ::T) where T<:Union{Float16, Float32, Float64}
   @ Base float.jl:408
  +(::Union{Float16, Float32, Float64}, ::BigFloat)
   @ Base mpfr.jl:423
  ...

Stacktrace:
  [1] macro expansion
    @ ~/.julia/packages/StatsBase/WLz8A/src/weights.jl:528 [inlined]
[...]
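
For comparison, and to clarify what behaviour I would expect: with an ordinary matrix and a matching output array, the same call pattern works fine (if I am not mistaken), something like

A = [1.0 2.0 3.0 4.0; 5.0 6.0 7.0 8.0]  # 2×4 matrix (placeholder example)
wv = pweights(ones(4) / 4)              # uniform weights along the second dimension
r = zeros(2, 1)                         # output array with one slot per row of A
mean!(r, A, wv; dims=2)                 # fills r with the weighted means along dims=2

(A, wv and r are just placeholder names for this sketch.) In the 0-dimensional case, y .= mean(x, w) does work as a way to fill y in place, but it of course allocates the intermediate result.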

Ah, and I just noticed that the same happens in the unweighted case as well, except that it skips the dims error and directly reports the MethodError above.
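
To illustrate, the unweighted repro would presumably be just

y2 = mean(x)   # works and returns a 0-dimensional Array{Float64, 0} containing 2.5
mean!(y2, x)   # fails directly with the MethodError about +(::Float64, ::Array{Float64, 0})

(y2 is just a fresh name for this sketch; the unweighted mean! has no dims keyword, so there is no dims error to hit first.)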
