
[WIP] add spatial correlogram function #259

Open
wants to merge 19 commits into main

Conversation

knaaptime
Member

@knaaptime knaaptime commented Aug 4, 2023

add first draft of correlogram

@knaaptime
Member Author

supersedes #253

@knaaptime
Member Author

there's nothing harder than pulling off a successful rebase. I'll die on this hill.

@jGaboardi
Member

there's nothing harder than pulling off a successful rebase. I'll die on this hill.
5589648

@knaaptime knaaptime changed the title from corrfix to add spatial correlogram function Aug 4, 2023
which concept of distance to increment. Options are {`band`, `knn`}.
by default 'band' (for `libpysal.weights.DistanceBand` weights)
statistic : str, by default 'I'
which spatial autocorrelation statistic to compute. Options in {`I`, `G`, `C`}
Member

could you add a key explaining what I, G, and C mean, i.e. that they are Moran's I, Geary's C, and the Getis-Ord G?

Member

This is a nice suggestion.

raise e("distance_type must be either `band` or `knn` ")

# should be able to build the tree once and reuse it?
# but in practice, im not seeing any real difference from starting a new W from scratch each time
Member

Not even for large data? I suppose that repeated creation of the tree could make a dent on this.

Member

I'm a bit confused by @knaaptime's comment, and also this response. This should not be building a KDTree repeatedly, but it is instantiating a new W every time. The tree-rebuilding is avoided because the Distance classes (KNN, Kernel, and DistanceBand) all correctly take pre-built KDTrees as input. Build a tree once & it can get reused.

However, the corresponding query(), query_ball_point(), or sparse_distance_matrix() calls are repeated, which could be more efficient if we computed it once for the maximum distance/k value we need.
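The reuse pattern described above can be sketched with plain scipy (a minimal illustration with made-up points and thresholds, not the PR's code): build the KDTree once, then only the per-band queries repeat.

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
points = rng.random((200, 2))

# Build the KDTree once...
tree = KDTree(points)

# ...and reuse it for every band instead of rebuilding it per distance,
# which is what passing a pre-built tree to DistanceBand/KNN achieves.
bands = [0.05, 0.1, 0.2]
neighbors = {d: tree.query_ball_point(points, r=d) for d in bands}
```

Only the per-band `query_ball_point` calls repeat; the O(n log n) tree construction happens a single time.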

Member Author

I was "testing" on a dataset with ~65k points, but just not terribly thoroughly.

what I meant was, I'd also done KNN.from_dataframe(df, k=k) in each loop, instead of W(tree), where the former needs to build the tree each time in addition to querying it. Then I figured it might be faster to reuse the same tree repeatedly instead.

what I was hoping for was something like @ljwolf said, where I'd do the expensive call once for the largest distance, then reuse the same structure for repeated calls (sorta like how pandana works with preprocess). If we just loop over k sorted in descending order, does that help?

Member

(a) I think it's fine to keep this current implementation for now.
(b) yeah, basically: a more performant implementation would probably use max(distances) to compute .sparse_distance_matrix() (or build a sparse matrix from the output of query() if KNN) one time, and then progressively mask that based on each smaller distance value... basically, we want to query once, rather than repeat the query to yield results we can "precompute".
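A rough sketch of that query-once-then-mask idea using scipy directly (hypothetical data; `sparse_distance_matrix` stands in for whatever call libpysal makes internally):

```python
import numpy as np
from scipy.spatial import KDTree

rng = np.random.default_rng(0)
points = rng.random((100, 2))
tree = KDTree(points)
distances = [0.1, 0.2, 0.4]

# One expensive query at the maximum threshold...
dmat = tree.sparse_distance_matrix(tree, max(distances)).tocoo()

# ...then every smaller band is a cheap boolean mask over the stored entries.
bands = {}
for d in distances:
    keep = dmat.data <= d
    bands[d] = (dmat.row[keep], dmat.col[keep], dmat.data[keep])
```

Each band's (row, col, weight) triples could then seed a W/Graph without ever touching the tree again.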

Member Author

gotcha. Same way i did it for the isochrones with pandana. Get the adjacency once and trim it down for each distance. So here it would be something like

maxtree = KDTree(max(distances))
mintree = KDTree(min(distances))

distmat = maxtree.sparse_distance_matrix(mintree)
for dist in distances
	w = Graph.build_knn(distmat, k=k)

or something?

Member Author

I guess what I'm asking is: what's the most efficient way to do this with the new Graph?

If we have a Graph based on the max distance, then it's already got all the information we need for every lesser distance, and all we need to do is filter the adjlist where wij < distance. Presumably that's what something like Graph.build_knn(distmat, k=k) would do when passed an adjlist or a sparse matrix, because all it needs to do is subset the existing data, not rebuild the distance relationships.

For this problem it's easy enough to build the adjlist once, literally filter it down for each distance, and instantiate a new Graph/W from that adjlist, but is that the best way?

Member

For this problem it's easy enough to build the adjlist once, literally filter it down for each distance, and instantiate a new Graph/W from that adjlist, but is that the best way?

I suppose this would need to be tested. My sense is that filtering the adjacency by distance will be significantly slower than a tree query, but I might be wrong.

Member Author

so, I'm playing with this, and it would be straightforward to filter the adjlist, which is fast, since it's just adj[adj['weight'] <= distance].

...But once we subset the W, we then have to subset the original dataframe to align it, which is something we said we wouldn't do. Is there a good way to do this?

Member

To keep the structure, you can just assign weight=0 to everything above the set distance threshold.
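That threshold-zeroing trick can be sketched on a plain pandas adjacency list (toy data; the `focal`/`neighbor`/`weight` column names follow the usual adjacency-list convention and are not necessarily this PR's exact schema):

```python
import pandas as pd

adj = pd.DataFrame({
    "focal":    [0, 0, 1, 1, 2, 2],
    "neighbor": [1, 2, 0, 2, 0, 1],
    "weight":   [100.0, 900.0, 100.0, 400.0, 900.0, 400.0],  # pairwise distances
})

def band(adj, threshold):
    # Keep every (focal, neighbor) pair so the observation set never changes;
    # links beyond the threshold are zeroed rather than dropped.
    out = adj.copy()
    out.loc[out["weight"] > threshold, "weight"] = 0.0
    return out

w500 = band(adj, 500.0)
```

Because no rows are dropped, the weights stay aligned with the original dataframe and nothing downstream needs re-subsetting.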


distances = [i + 500 for i in range(0, 2000, 500)]

def test_correlogram():
Member

Do we have some ecosystem-wide standards on what is the minimal acceptable test suite?

Member

No... We've talked about separating "correctness" tests (does the statistic recover the same result every time?) and option tests/user tests (does the function handle all combination of arguments/input types correctly?), but we've never agreed on what is necessary for any contribution...

Member Author

we should put this on the agenda for the monthly call and/or steering committee.

tbh I don't think an ecosystem-wide policy is realistic. We don't have the resources. There's a lot of heterogeneity in the maturity of different subpackages, so a lot of variation in how the lead maintainers want to manage their stack. Most often, I'm happy to accept new functionality once the test shows that it gets the 'right answer', even if I don't have the capacity to write a fully-parameterized test suite that hits every line--that's how I manage tobler, segregation, geosnap, etc., otherwise they won't move forward. I know which lines are untested, so I'm ok with that until (a) an issue is raised, (b) I get some time to focus on testing, or (c) the total coverage drops too low (which sometimes triggers b).

Member

Yup, makes sense. I asked because I am not sure when I should be fine with the state of tests when doing a review, so an agreed guideline would be nice.

Member

Very much agree with Martin on this. Seems like our general rule of thumb has been that somewhere between 70% and 90% is acceptable coverage. I always try to get as close to 100% as possible to (attempt to) hit any weird edge cases, but I know that isn't required a lot of the time. If we want to decide on an actual number, this is surely something for at least the steering committee to vote on. Moreover, (1) as a step towards catching edge cases we may want to consider implementing hypothesis, and (2) we should improve testing across all submodules to properly test minimum requirements, as Martin points out here.

Member Author

If we want to decide on an actual number, this is surely something for at least the steering committee to vote on. Moreover, (1) as a step towards catching edge cases we may want to consider implementing hypothesis, and (2) we should improve testing across all submodules to properly test minimum requirements, #258.

again, I don't think an ecosystem-wide policy is realistic, but it's definitely something we should discuss. The question is about cost-benefit. If we're too strict about test coverage, it will discourage contributions, and I think for many packages that's a greater risk than having some untested code. I don't have time to get absorbed in edge cases, so I'd prefer to let those issues surface before letting them block new functionality getting merged.

case in point, please feel free to write out the rest of the test suite here :)

Member

If we want to decide on an actual number, this is surely something for at least the steering committee to vote on. Moreover, (1) as a step towards catching edge cases we may want to consider implementing hypothesis, and (2) we should improve testing across all submodules to properly test minimum requirements, #258.

again, I don't think an ecosystem-wide policy is realistic, but it's definitely something we should discuss. The question is about cost-benefit. If we're too strict about test coverage, it will discourage contributions, and I think for many packages that's a greater risk than having some untested code. I don't have time to get absorbed in edge cases, so I'd prefer to let those issues surface before letting them block new functionality getting merged.

case in point, please feel free to write out the rest of the test suite here :)

FWIW @knaaptime I am "on your side" here (though I think we're all on the same side). In my opinion, getting new functionality merged should take precedence over a strict coverage number for individual PRs, so long as the new functionality is "mostly" covered (where "mostly" can mean either most lines covered or the majority of cases covered). More testing can be added later as time & energy permit. I hope my comments did not come off as combative.

Member Author

😅 sorry, my style is always blunt! No offense taken or intended. We're all on the same page. We need some guidelines on merging--definitely. I just want to avoid a situation where we let stuff sit here, because it's usually easier for us core maintainers to take incremental passes at improving test coverage than to impose a big burden on the PR opener.

(for this one, I certainly don't need this PR merged, but while it's sitting here it makes for a good example. I already have the function myself, but if other folks want to use it, I'd prefer not to let corner cases prevent that, since I know I don't have a ton of time to write out the test suite. I can usually commit to responding to bug reports, though)

Member

My question was primarily coming from my GeoPandas experience and the understanding that what we aim for there is not applicable in PySAL. So I was just wondering what level people feel is enough. Totally agree with all you said above; there's no need to block stuff just because tests are not at 100%. Happy to have a further chat about it during the next call. And happy to merge this with the test suite as is :).

Member

Once again I agree with Martin, but maybe after addressing #259 (comment)?

STATISTIC = G
elif statistic == "C":
STATISTIC = Geary
else:
Member

The extraction functions look sufficiently general to allow us to admit any callable like stat(x, w, **kwargs) that returns an object with attributes? There's nothing I/G/C specific in how the attributes are extracted.

I'd love to allow users to send a callable, since this gives us pretty major extensibility for free...
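A minimal sketch of that callable-based design (the statistic and names here are toy stand-ins; the real esda classes take `(y, w)` and expose results as attributes):

```python
import numpy as np
from types import SimpleNamespace

def correlogram(stat, y, weights_list, **kwargs):
    # 'stat' is any callable (y, w, **kwargs) -> object with attributes;
    # one result object is collected per weights specification.
    return [stat(y, w, **kwargs) for w in weights_list]

def mean_neighbor_corr(y, w):
    # toy stand-in for Moran/Geary/Getis-Ord: correlation of y with its
    # spatial lag, where w is a row-standardized (n, n) array for simplicity
    lag = w @ y
    return SimpleNamespace(estimate=float(np.corrcoef(y, lag)[0, 1]))

rng = np.random.default_rng(0)
y = rng.random(10)
w = np.ones((10, 10)) / 9
np.fill_diagonal(w, 0)  # row-standardized equal weights, no self-neighbors
results = correlogram(mean_neighbor_corr, y, [w, w])
```

Because the loop only calls `stat(y, w, **kwargs)` and never inspects which statistic it is, any user-defined callable with that signature plugs in for free.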

Member Author

cool, that makes sense. I think the first version I wrote actually took the Moran class as the first argument, but I thought the dispatching helped clarify what the function was doing. The internals ended up a lot more generic than I figured they'd need to be.

so the most abstract version returns the output of a callable for a range of W/Graph specifications (in table form), and there are probably plenty of uses for that. But then do we want to call the function something more generic, since it no longer necessarily computes some measure of correlation?

Member

Totally, and then define a specific correlogram() function that only supports Moran, spatial Pearson, or spatial tau?

For the general function, @TaylorOshan and I have been calling these "profile" functions (bottom of page 302), so I'm partial to distance_profile() or spatial_profile().

Member

Well, I guess even like.... I have a nascent implementation of a nonparametric robust Moran/Local Moran statistic that would benefit from being able to plug in directly? I think it's OK to call the function correlogram(), and then allow for a user-defined callable. The fact that it's user-defined means we can't enforce that the output is a correlation, and this keeps things simpler for the user.

Member Author

profile is the same nomenclature over in segregation, but maybe we should centralize some of this logic then

@knaaptime knaaptime changed the title from add spatial correlogram function to [WIP] add spatial correlogram function Aug 5, 2023

return (
pd.DataFrame(outputs)
.select_dtypes(["number"])
Member Author

@knaaptime knaaptime Aug 5, 2023

so, in the abstract version, this function becomes spatial_profile(callable, y, gdf, **kwargs), where callable is any function that takes (y, w) as arguments.

in that case, should we get rid of this .select_dtypes here and return everything the callable has? Also, since this line requires a class, not a generic callable, does that need to be abstracted a bit?
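For reference, the namedtuple route already plays nicely with the return block under discussion: `pd.DataFrame` expands a list of namedtuples into named columns, and `select_dtypes` then drops the non-numeric ones (the `Result` fields below are made up for illustration):

```python
import pandas as pd
from collections import namedtuple

Result = namedtuple("Result", ["I", "p_sim", "label"])
outputs = [Result(0.42, 0.01, "d=500"), Result(0.31, 0.03, "d=1000")]

# a list of namedtuples becomes a frame with one column per field;
# select_dtypes(["number"]) keeps only the numeric attributes
table = pd.DataFrame(outputs).select_dtypes(["number"])
```

So any callable returning a namedtuple (or similar attribute-bearing object) slots into the existing return path without special-casing.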

Member

I think just keep it as correlogram. Allowing a user-defined function already means we can't assume/ensure the output is "correlation", but it's a relatively advanced way to work with the function.

And yes, the output type of the callable has to be a namespace/namedtuple/object kind of thing, but I think that's an OK requirement.

@knaaptime
Member Author

this should be just about good to go, but it's still failing for stuff like join counts, because the class stashes all sorts of things as attributes (like the W and its adjacency list) that can't get serialized by joblib, and for LOSH/Spatial_Pearson, which have a different signature that needs w.sparse

@knaaptime
Member Author

knaaptime commented Aug 6, 2023 via email

@codecov

codecov bot commented Oct 26, 2023

Codecov Report

Merging #259 (c643524) into main (f6a8732) will decrease coverage by 0.2%.
The diff coverage is 55.0%.

Impacted file tree graph

@@           Coverage Diff           @@
##            main    #259     +/-   ##
=======================================
- Coverage   73.4%   73.2%   -0.2%     
=======================================
  Files         24      25      +1     
  Lines       3285    3325     +40     
  Branches     518     527      +9     
=======================================
+ Hits        2411    2433     +22     
- Misses       708     721     +13     
- Partials     166     171      +5     
Files Coverage Δ
esda/__init__.py 100.0% <100.0%> (ø)
esda/correlogram.py 53.8% <53.8%> (ø)

n_jobs : int
number of jobs to pass to joblib. If -1 (default), all cores will be used


Member

Suggested change

W = KNN
else:
with NotImplementedError as e:
raise e("distance_type must be either `band` or `knn` ")
Member

Suggested change
raise e("distance_type must be either `band` or `knn` ")
raise e("distance_type must be either `band` or `knn`")

@knaaptime
Member Author

what other tests do folks think are necessary for this?

@ljwolf
Member

ljwolf commented Jan 26, 2024

I think at least:

  1. verify that the documented association measures work as expected
  2. verify that a simple callable completes its computation

Then I think it's done!

@knaaptime
Member Author

cool. I think 1 is covered by the existing test using I? Just need to add a generic callable test for 2?

@ljwolf
Member

ljwolf commented Jan 26, 2024

covered by the test using I

Ah ok, in light of the previous discussion, fine! I meant that we should test all of the estimators we specify to ensure correctness.

Yes on the generic callable test!

Then, I think the last bit is just the test failures?
