
[WIP] Add Top-K Nearest Neighbors for Normalized Matrix Profile #592 #595

Merged: 453 commits into TDAmeritrade:main on Nov 9, 2022

Conversation

@NimaSarajpoor (Collaborator) commented Apr 28, 2022

This PR addresses issue #592. In this PR, we want to extend the function stump and the related ones so that they return the top-k nearest neighbors matrix profile (i.e., the k smallest values of each distance profile and their corresponding indices).

What I have done so far:

  • Wrote a naïve implementation of stump_topk
  • Provided a new unit test for the function stump

Notes:
(1) np.searchsorted() is used in naive.stump_topk. Another naive approach is to traverse the distance matrix row by row and use np.argpartition() followed by np.argsort() (see the sketch below).
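For illustration, here is a minimal sketch of that row-wise alternative applied to a single distance profile (the function and variable names are just for this example, not part of the PR):

```python
import numpy as np

def topk_of_distance_profile(D, k):
    # np.argpartition places the k smallest distances in the first k slots (unordered) ...
    part = np.argpartition(D, k)[:k]
    # ... and np.argsort then orders those k candidates in ascending order
    order = np.argsort(D[part])
    idx = part[order]
    return D[idx], idx

D = np.array([4.0, 1.0, 3.0, 0.5, 2.0])
P, I = topk_of_distance_profile(D, k=3)
# P -> array([0.5, 1. , 2. ]),  I -> array([3, 1, 4])
```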

(2) I think keeping the top-k results in the first k columns of the output of stump_topk is cleaner, and it also makes it easier for the user to access those top-k values.

I can/will think about a clean way to change the structure of the output just before returning it so that the first four columns become the same for all k. However, I think that would make it harder to use the output later in other modules.


Add To-Do List

  • Add parameter k to stump
  • Add parameter k to stumped
  • Add parameter k to gpu_stump
  • Add parameter k to scrump
  • Add parameter k to stumpi
  • Run published tutorials to see if they still work with no problem

Side notes:

  • Add parameter k to non-normalized versions.
  • Check docstrings for functions that require the matrix profile P as input. For instance, in stumpy.motifs, we may say something like "P is the (top-1) 1D matrix profile"

@NimaSarajpoor (Collaborator Author)

@seanlaw
I addressed the comments and changed the files accordingly.

@NimaSarajpoor (Collaborator Author)

@seanlaw
Also, I did an investigation and, as you said, it is possible that we have different indices when there is a tie. Now, the question is: "Should we really care about it?" Because, at the end of the day, what matters is the values of matrix profile P. Right?

For unit test, we can make sure the P values corresponding to those indices are close to each other (e.g. less than 1e-6). What do you think?

@seanlaw (Contributor) commented Apr 30, 2022

Also, I did an investigation and, as you said, it is possible that we have different indices when there is a tie. Now, the question is: "Should we really care about it?" Because, at the end of the day, what matters is the values of matrix profile P. Right?

If possible, we should try to account for the indices in the naive implementation. For naive.stump, I wonder if we can use np.lexsort to sort on multiple columns after we compute the distance profile? What order do we expect the indices to be returned in?

For unit test, we can make sure the P values corresponding to those indices are close to each other (e.g. less than 1e-6). What do you think?

No, we should avoid this for this PR. This should only be a last resort as it will hide other issues. We will keep it in mind but 99% of the time, this is not needed. Btw, since the input arrays in the unit tests are random, I will usually run them 1,000+ times on my machine for new features/tests (especially if there is a big change in the test) before I push the commit. So I don't just run it once and assume that it's fine.

@NimaSarajpoor (Collaborator Author) commented Apr 30, 2022

I wonder if we can use np.lexsort to sort on multiple columns after we compute the distance profile? What order do we expect the indices to be returned in?

For naive.stump, with row-wise traversal of the distance matrix (i.e. the implementation that I pushed), we can do np.argsort(..., kind="stable"). I also checked the documentation of np.lexsort and it should work as well.

What order do we expect the indices to be returned in?

Now that we use np.argsort(..., kind="stable"), or alternatively np.lexsort, we can expect the original order of indices to be preserved throughout sorting (see the sketch below).
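A tiny illustration of both options on a made-up distance profile with ties (the array values are purely for demonstration):

```python
import numpy as np

D = np.array([0.7, 0.2, 0.2, 0.9, 0.2])  # distance profile with a three-way tie

idx_stable = np.argsort(D, kind="stable")      # equal values keep their original order
idx_lex = np.lexsort((np.arange(len(D)), D))   # primary key: D, secondary key: index

# both give array([1, 2, 4, 0, 3]), i.e., the lowest index wins on ties
```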

No, we should avoid this for this PR. This should only be a last resort as it will hide other issues. We will keep it in mind but 99% of the time, this is not needed.

Correct! If there is no tie, the option kind="stable" may not be necessary. That is why I avoided it. And, if there is a tie, we will # ignore indices as you did in test_stump.py for cases with some identical subsequences.


Please correct me if I am wrong...
I think I am getting your point. You are trying to make sure that naive.stump works correctly with no hidden flaws. And then, later, we can trust it and do # ignore indices when testing stump with identical subsequences.

@seanlaw (Contributor) commented Apr 30, 2022

Now that we use np.argsort(..., kind= "stable"), or alternatively np.lexsort, we can expect that the original order of indices are preserved throughout sorting.

Note that kind="stable" only implies that, given the same input, the output is deterministic. However, it does not imply that it will necessarily match STUMPY's deterministic order. For example, you can have a sorting algorithm that is still considered "stable" if it always returns the second (or even last) index when there is a tie. To be consistent with other parts of STUMPY, please use kind="mergesort" instead.

I think I am getting your point. You are trying to make sure that naive.stump works correctly with no hidden flaws. And, then, later, we can trust it and then do # ignore indices in case of testing stump with identical subsequences.

No, not quite. Currently, I believe that naive.stump (not your version) is guaranteed to return the same indices as stumpy.stump even if there are multiple identical nearest neighbor matches. We want to maintain this and continue asserting that the indices are the same between the naive version and the fast version. When possible, we should never ignore the indices!

@NimaSarajpoor (Collaborator Author) commented Apr 30, 2022

Note that kind="stable" only implies that, given the same input, the output is deterministic. However, it does not imply that it will necessarily match STUMPY's deterministic order. For example, you can have a sorting algorithm that is still considered "stable" if it always returns the second (or even last) index when there is a tie.

I did not know that! Interesting.... Thanks for the info.

Currently, I believe that naive.stump (not your version) is guaranteed to return the same indices as stumpy.stump even if there are multiple identical nearest neighbor matches

It is supposed to be guaranteed (I think because they have a similar traversal method), but it is not:

```python
import numpy as np
import stumpy

seed = 0
np.random.seed(seed)
T = np.random.uniform(-100.0, 100.0, 20)
m = 3

seed = 1
np.random.seed(seed)
identical = np.random.uniform(-100.0, 100.0, m)

T[0:0+m] = identical
T[10:10+m] = identical
T[17:17+m] = identical

mp = stumpy.stump(T, m)
I = mp[:, 1]
# I[10] is 0

naive_mp = original_naive_stump(T, m)  # using naive.stump()
naive_I = naive_mp[:, 1]
# naive_I[10] is 17
```

But, the indices of stumpy.stump and naive.stump should have been exactly the same. Right? Maybe I made a mistake somewhere. It would be great if you could try this example out on your end.

@NimaSarajpoor (Collaborator Author) commented Apr 30, 2022

Our naive approach does NOT need to follow STUMPY's diagonal traversal.

Also, note that the way we traverse matters...

[image: distance matrix illustrating row-wise vs. diagonal traversal order]

If we go row by row, the 5-th distance profile is visited in the order A-B-C-D-E. However, if we go diagonally, we have C-D-B-E-A. So, I think we had better stay with diagonal traversal in the naive version. (Or, we may use an if-condition to check for ties and, if there is a tie, choose the one with the lower index while traversing diagonally.)


As you said, naive.stump (not my version) should provide the same indices because it uses diagonal traversal, similar to stumpy.stump. However, we just saw that sometimes the indices might differ (see my previous comment). Maybe something goes wrong when stumpy.stump distributes the diagonals among threads, or maybe it is due to small numerical (precision) errors? I have not investigated it yet.

@seanlaw (Contributor) commented Apr 30, 2022

Or, we may use if-condition to check ties and if there is a tie, choose the one with lower index while traversing diagonally

No, traversing diagonally should solve all of your problems. If you store a distance and later find another distance that is the same, you do not replace the first index no matter what that index is. You only replace the index if there is a smaller distance. The order in which the distances are encountered is key. This should still hold true for any k if done correctly.
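To make the tie rule concrete, here is a tiny illustrative sketch (not STUMPY's actual code) of the strict less-than update that keeps the first index encountered:

```python
import numpy as np

def update_nearest(P, I, i, d, j):
    # Replace the stored nearest-neighbor distance/index only when the new
    # distance is strictly smaller; equal distances never overwrite, so the
    # first match encountered during the traversal wins on ties.
    if d < P[i]:
        P[i] = d
        I[i] = j

P = np.array([np.inf, np.inf])
I = np.array([-1, -1])
update_nearest(P, I, 0, 2.0, 5)   # stored: (2.0, 5)
update_nearest(P, I, 0, 2.0, 9)   # tie: index 5 is kept
update_nearest(P, I, 0, 1.5, 9)   # strictly smaller: replaced by (1.5, 9)
```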

@NimaSarajpoor (Collaborator Author) commented May 1, 2022

I am trying to clarify things and catch up with you. So, if you do not mind, please allow me to do a quick recap to make sure I am on the same page as you are.

(1) We would like to believe that existing stumpy.stump and naive.stump give the same P and I EVEN IF there are identical subsequences. NOTE: I found some counter examples

(2) For now, let us assume (1) is correct and everything is 100% OK. If we change the naive.stump to traverse matrix row by row. (which is different than stumpy.stump that traverses diagonally), we would like to believe that they give the same P and I regardless of their traversal method. NOTE: after the investigation, I realized that I was wrong and different traversal methods give different matrix profile indices.


Therefore:

You only replace the index if there is a smaller distance. The order in which the distances are encountered is key

Exactly! I have the same thought. If stumpy.stump traverses the distance matrix diagonally, we need to traverse diagonally in the test function as well (and not row by row) so that we can expect them to give us the same matrix profile indices.

EVEN IF we traverse diagonally in both stumpy.stump and naive.stump, we may still end up with two different matrix profile indices when there are ties.

(Btw, sorry for making you tired on this.)

@seanlaw (Contributor) commented May 1, 2022

(1) We would like to believe that existing stumpy.stump and naive.stump give the same P and I EVEN IF there are identical subsequences. NOTE: I found some counter examples

Yes, it's less a belief and more an expectation. What are the counter examples?

(2) For now, let us assume (1) is correct and everything is 100% OK. If we change the naive.stump to traverse matrix row by row. (which is different than stumpy.stump that traverses diagonally), we would like to believe that they give the same P and I regardless of their traversal method. NOTE: after the investigation, I realized that I was wrong and different traversal methods give different matrix profile indices.

So, I don't expect row-wise traversal to give the same result as diagonal-wise traversal. In fact, IIRC, I updated naive.stump from row-wise traversal to diagonal-wise traversal a while back for this reason.

EVEN IF we traverse diagonally in both stumpy.stump and naive.stump, we may still end up with two different matrix profile indices when there are ties.

Why? How can this be possible?

@seanlaw (Contributor) commented May 1, 2022

Btw, you may want to add a naive.searchsorted something like:

```python
# tests/naive.py

def searchsorted(a, v):
    # Return the index of the first element of `a` that is strictly greater
    # than `v`; if there is no such element, return len(a) (append at the end).
    indices = np.flatnonzero(v < a)
    if len(indices):
        return indices.min()
    else:
        return len(a)
```
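For reference, a quick sanity check of this helper on a small sorted array (the values below are chosen purely for illustration):

```python
import numpy as np

# uses the naive `searchsorted` defined above
a = np.array([0.1, 0.5, 0.9])
searchsorted(a, 0.6)  # -> 2 (first element greater than 0.6)
searchsorted(a, 1.0)  # -> 3 (no greater element, so append at the end)
```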

@seanlaw (Contributor) commented May 1, 2022

Btw, note that stumpy.gpu_stump is row-wise traversal. So, we may consider adding a diagonal_traversal=False flag to naive.stump in the future, which does row-wise traversal rather than diagonal-wise traversal.

@NimaSarajpoor (Collaborator Author) commented May 1, 2022

What are the counter examples?

EVEN IF we traverse diagonally in both stumpy.stump and naive.stump, we may still end up with two different matrix profile indices when there are ties.

Why? How can this be possible?

I am providing one example below.

After further investigation, I think I found the cause of the difference. It seems the discrepancy between stumpy.stump and naive.stump has its root in the numerical error of the Pearson correlation in stumpy.stump (e.g. 1.0000000004 vs. 1.0000000007).

```python
import numpy as np
import stumpy

seed = 0
np.random.seed(seed)
T = np.random.uniform(-100.0, 100.0, 1000)
m = 50

seed = 1
np.random.seed(seed)
identical = np.random.uniform(-100.0, 100.0, m)

identical_start_idx = [0, 150, 300, 450, 600, 750, 900]
for idx in identical_start_idx:
    T[idx:idx + m] = identical

mp = stumpy.stump(T, m)
I = mp[:, 1]

naive_mp = original_naive_stump(T, m)  # using naive.stump()
naive_I = naive_mp[:, 1]

# discrepancy: (idx, naive_I[idx], I[idx])
[(idx, naive_I[idx], I[idx]) for idx in np.flatnonzero(naive_I - I != 0)]
# >>>
# [(0, 150, 300),
#  (300, 150, 900),
#  (450, 300, 900),
#  (600, 450, 0),
#  (750, 600, 300),
#  (900, 750, 300)]
```

What do you think? Should this (small / rare?) issue be taken care of? As you correctly said, "it's less a belief and more an expectation." Now, I understand that we should expect the outputs to be exactly the same since they follow a similar process (the only difference is that stumpy.stump uses Pearson correlation and converts it to a distance at the end).


So, I don't expect row-wise traversal would give the same result as diagonal-wise traversal. In fact, IIRC, I updated naive.stump from row-wise traversal to diagonal-wise traversal a while back because of this reason.

Got it. Thanks for the clarification.


you may want to add a naive.searchsorted something like:

Cool! Thanks for the suggestion! So, if I understand correctly, for the naive row-wise traversal I use np.argsort, and for the naive diagonal traversal I can use naive.searchsorted (see the sketch below).
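For the diagonal-traversal case, here is a hedged sketch of how np.searchsorted could maintain a sorted top-k buffer per subsequence (the names and array layout are illustrative, not the actual naive.py code):

```python
import numpy as np

def update_topk(P, I, i, j, d, k):
    # P[i, :k] holds the k smallest distances found so far for subsequence i,
    # in ascending order; I[i, :k] holds the corresponding indices.
    if d < P[i, k - 1]:
        # side="right": an equal distance is inserted after the existing one,
        # so the match encountered first keeps its position (the tie rule above)
        pos = np.searchsorted(P[i, :k], d, side="right")
        P[i, pos + 1:k] = P[i, pos:k - 1].copy()   # shift the tail right by one
        I[i, pos + 1:k] = I[i, pos:k - 1].copy()
        P[i, pos] = d
        I[i, pos] = j

k = 3
P = np.full((10, k), np.inf)
I = np.full((10, k), -1, dtype=np.int64)
update_topk(P, I, 0, 7, 1.2, k)   # P[0] -> [1.2, inf, inf], I[0] -> [7, -1, -1]
```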

Btw, note that stumpy.gpu_stump is row-wise traversal

I did not take a look at that. Thanks for letting me know about this.

@seanlaw (Contributor) commented May 2, 2022

What do you think? Should this (small / rare ?) issue be taken care of?

No, I am less concerned about this example

So, if I understand correctly, for naive row-wise traversal, I use np.argsort, and for naive diagonal traversal, I can use naive.searchsorted.

That sounds reasonable.

@NimaSarajpoor (Collaborator Author) commented May 2, 2022

@seanlaw
If you think I am missing something, please let me know.

  • I used diagonal traversal to get top-k matrix profile in naive.stump
  • I added a note for row-wise traversal so that I do not forget about it later

If I understand correctly, first I should take care of stumpy.stump and make sure that it passes the new unit tests for top-k matrix profile.

Then, I need to work on the naive version of stumpy.stamp to return the top-k matrix profile via row-wise traversal, and then I need to work on stumpy.stamp (and gpu_stump) to make sure they pass their new test functions.

@seanlaw (Contributor) commented May 3, 2022

If I understand correctly, first I should take care of stumpy.stump and make sure that it passes the new unit tests for top-k matrix profile.

Yes, this is correct. Not only should it pass the unit test for top-k but it should also pass all other unit tests without modifying those unit tests.

Then, I need to work on the naive version of stumpy.stamp to return the top-k matrix profile via row-wise traversal, and then I need to work on stumpy.stamp (and gpu_stump) to make sure they pass their new test functions.

Hmmm, so stamp should never come into consideration here. I think we need to update all of our unit tests to use naive.stump instead. Top-K should only be relevant to stump functions. All stamp functions are for legacy/reference and should never be used. In fact, we need to update all of our unit tests to call naive.stump instead of naive.stamp (this was likely missed over time as the code base grew). Does that make sense?

@NimaSarajpoor (Collaborator Author)

Yes, this is correct. Not only should it pass the unit test for top-k but it should also pass all other unit tests without modifying those unit tests.

Correct! All existing unit tests and the new one(s) for top-k.

Hmmm, so stamp should never come into consideration here. I think we need to update all of our unit tests to use naive.stump instead. Top-K should only be relevant to stump functions.

So, our goal is adding parameter k to only stumpy.stump.

All stamp functions are for legacy/reference and should never be used. In fact, we need to update all of our unit tests to call naive.stump instead of naive.stamp (this was likely missed over time as the code base grew). Does that make sense?

Did you mean ALL test functions in tests/test_stump.py? Because, in that case, it makes sense. But not ALL tests, correct? My concern is stumpy.gpu_stump, which uses row-wise traversal (?). I can see that tests/test_gpu_stump.py uses naive.stamp, which does row-wise traversal as well. So, we should keep using naive.stamp in those cases. Did I get your point?

@seanlaw (Contributor) commented May 4, 2022

So, our goal is adding parameter k to only stumpy.stump.

Yes! And, eventually, to stumpy.stumped and stumpy.gpu_stump. Of course, we'll need the same thing for aamp, aamped, and gpu_aamp as well but I'm assuming that it would be trivial once we figure out the normalized versions.

Did you mean ALL test functions in tests/test_stump.py? Because, in that case, it makes sense. But not ALL tests, correct? My concern is stumpy.gpu_stump, which uses row-wise traversal (?). I can see that tests/test_gpu_stump.py uses naive.stamp, which does row-wise traversal as well. So, we should keep using naive.stamp in those cases. Did I get your point?

So, I was thinking that we would update naive.stump to have a row_wise=False input parameter (in addition to k=1) which controls whether it is traversed row-wise or diagonal-wise. Of course, we would update the contents of the naive.stump function itself to also process row-wise. Then, we can update all of the tests accordingly by replacing naive.stamp with naive.stump(..., row_wise=True). I don't mind if naive.stump is a little bit harder to read as long as we add proper comments to it.

Does that make sense? If not, please feel free to ask more clarifying questions! Your questions are helping me think through things as well as it has been a while since I've had to ponder about this.
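To make the row_wise idea concrete, here is a self-contained sketch of what a row-wise naive top-k computation could look like (this is not the actual tests/naive.py code; the z-normalization details, the exclusion zone of ceil(m/4), and all names here are assumptions for illustration):

```python
import numpy as np

def naive_topk_row_wise(T, m, k=1):
    # Build the full pairwise z-normalized distance matrix, then keep the
    # k smallest distances (and their indices) per row.
    # Assumes no constant subsequences (i.e., every window has std > 0).
    n = len(T) - m + 1
    excl = int(np.ceil(m / 4))  # assumed default exclusion zone
    S = np.array([T[i:i + m] for i in range(n)], dtype=np.float64)
    S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)
    rho = (S @ S.T) / m                              # Pearson correlations
    D = np.sqrt(np.maximum(2 * m * (1 - rho), 0.0))  # z-normalized distances
    P = np.full((n, k), np.inf)
    I = np.full((n, k), -1, dtype=np.int64)
    for i in range(n):                               # row-wise traversal
        row = D[i].copy()
        row[max(0, i - excl):i + excl + 1] = np.inf  # trivial-match zone
        idx = np.argsort(row, kind="mergesort")[:k]  # stable: lowest index wins ties
        P[i], I[i] = row[idx], idx
    return P, I
```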

@NimaSarajpoor (Collaborator Author) commented May 4, 2022

@seanlaw

Of course, we would update the contents of the naive.stump function itself to also process row-wise. Then, we can update all of the tests accordingly by replacing naive.stamp with naive.stump(..., row_wise=True)

Got it. We do not touch stumpy.stump regarding this matter. We just add the parameter row_wise=False to the naive version, so that we can perform unit tests for stumpy.stump and related functions with just one naive function, naive.stump.

Then, we can update all of the tests accordingly by replacing naive.stamp with naive.stump(..., row_wise=True). I don't mind if naive.stump is a little bit harder to read as long as we add proper comments to it. Does that make sense?

It totally makes sense. Should we take care of this in parallel with this PR? Like... at the end, before finalizing everything?

@NimaSarajpoor (Collaborator Author)

@seanlaw
I think we have discussed the comments. Should I go ahead and apply comments? I will revise naive.stump so that it can support row-wise traversal as well.

@seanlaw (Contributor) commented May 5, 2022

Yes, please proceed!

@NimaSarajpoor (Collaborator Author)

@seanlaw
I am about to revise stumpy/stump.py as well to return top-k matrix profile. I will let you know when I am done so you can do the review. Thanks.

@seanlaw (Contributor) commented May 9, 2022

Should I review the recent commits or shall I wait for the stump.py changes?

@NimaSarajpoor (Collaborator Author)

Should I review the recent commits or shall I wait for the stump.py changes?

Please wait for the stump.py changes.

@NimaSarajpoor (Collaborator Author) commented May 10, 2022

@seanlaw
Just wanted to inform you of this behavior in np.searchsorted:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 2, 0], dtype=np.float64)
v = 3.9
idx = np.searchsorted(a, v, side='right')  # or side='left', does not matter here!
# idx = 7, even though `a` is not sorted beyond its first few entries
```

So, if P stores the top-k values AND the left & right matrix profile values (i.e., the last two columns), we had better do:

```python
idx = np.searchsorted(a[:k], v, side='right')
```

I also did a quick search and it seems basic indexing creates a view rather than a copy, and it can be done in constant time. So, it should be fine. What do you think?

@NimaSarajpoor (Collaborator Author) commented May 10, 2022

Just wanted to inform you of this behavior...

Because of such behavior, and also because top-k rho values for each subsequence are in fact the top-k LARGEST rho values, I did as follows:

```python
# rho: the new Pearson correlation value for this subsequence
# a: 1D array of rho values for one subsequence, with `k + 2` entries;
#    a[:k] holds the top-k (largest) rho values sorted in descending order,
#    and the last two entries are for the left/right matrix profile

# update the top-k only if rho beats the current k-th largest value
if rho > a[k - 1]:
    idx = k - np.searchsorted(a[:k][::-1], rho)

# insert the new value `rho` into `a[:k]` at the index `idx`
# (shift a[idx:k-1] one position to the right, dropping the old a[k-1])
```
Alternatively, we can store top-k smallest -rho (please note the negative sign). But I think keeping track of +rho is cleaner.
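A minimal runnable version of the descending-rho insertion above, assuming a 1D per-subsequence buffer (the helper name and layout are illustrative, not STUMPY's internals):

```python
import numpy as np

def insert_topk_rho(rho_arr, idx_arr, k, rho, j):
    # rho_arr[:k]: the k largest Pearson correlations so far, in descending order
    # idx_arr[:k]: their corresponding subsequence indices
    if rho > rho_arr[k - 1]:
        # reverse to ascending for searchsorted, then map back to the descending
        # position; with the default side="left", a tie lands after the existing value
        pos = k - np.searchsorted(rho_arr[:k][::-1], rho)
        rho_arr[pos + 1:k] = rho_arr[pos:k - 1].copy()
        idx_arr[pos + 1:k] = idx_arr[pos:k - 1].copy()
        rho_arr[pos] = rho
        idx_arr[pos] = j

k = 3
rho_arr = np.array([0.9, 0.8, 0.5, -np.inf, -np.inf])  # k + 2 entries, as described above
idx_arr = np.array([10, 42, 3, -1, -1], dtype=np.int64)
insert_topk_rho(rho_arr, idx_arr, k, 0.85, 7)
# rho_arr[:k] -> [0.9, 0.85, 0.8], idx_arr[:k] -> [10, 7, 42]
```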


The modified stumpy/stump.py, which accepts the parameter k, passes the new (top-k matrix profile) unit test as well as the other tests in test_stump.py. Please let me know what you think.
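For anyone reading this thread later, here is a hedged usage sketch of the k parameter as it ended up being exposed (the column layout below is recalled from the STUMPY docstring; please double-check against the released documentation):

```python
import numpy as np
import stumpy

T = np.random.uniform(-100.0, 100.0, 1000)
m = 50
k = 3

mp = stumpy.stump(T, m, k=k)
# For k > 1 the output has 2 * k + 2 columns: the first k columns are the
# top-k matrix profile values, the next k columns are the corresponding
# nearest-neighbor indices, and the last two columns are the (top-1) left
# and right matrix profile indices.
P = mp[:, :k].astype(np.float64)
I = mp[:, k:2 * k].astype(np.int64)
```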

@seanlaw (Contributor) commented Nov 7, 2022

We can create a new issue to just investigate this single matter. I mean, we just compare stump with stump_with_splitted_pearson (without the top-k feature). So, we just investigate the impact of splitting rho when the size of the time series is big.

Yes, let's do that. Considering that top-k new is similar to main, let's incorporate the unsigned int work into this and we should, in all, be faster than main when all is said and done. I think top-k new is fine.

At the end of the day:

  1. This PR is getting too long and should focus on adding a "good enough" top-k that has little effect on k=1
  2. All other top-k improvements can come in a new PR

How does that sound? Any questions, comments, or concerns?

@NimaSarajpoor (Collaborator Author)

Let's incorporate the unsigned int work into this and we should, in all, be faster than main

I will pull the latest changes to this branch.

At the end of the day:

  1. This PR is getting too long and should focus on adding a "good enough" top-k that has little effect on k=1
  2. All other top-k improvements can come in a new PR

How does that sound? Any questions, comments, or concerns?

Right! I was mixing things up. As you said, the main goals are to (i) have very little overhead for k=1 and (ii) have reasonable performance for top-k.


So, I need to do three things here:

  • Update branch by pulling latest changes
  • Add test function for the new function added to core.py
  • Check out the other changed files and their performance for time series of length 10k, 20k, 100k, and take the necessary actions to make sure the performance for k=1 is close to main.

@seanlaw (Contributor) commented Nov 7, 2022

So, I need to do three things here:

Actually, I might even prefer simply:

  1. Pull and incorporate latest changes
  2. Only have this PR include merge_topk without parallel=True and with _stump using parallel=True (i.e., none of the new functions for core.py)

So, you'd have to roll it back quite a bit to when I had already given the "okay" for a previous version. This way, I don't have to revisit all of the files when you add all of the subsequent changes in a later PR. Does that make sense?

Probably somewhere before this point. Though, you still had parallel=True turned on for merge_topk
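For context on the parallel=True flag being discussed, here is a generic Numba illustration (this is not the STUMPY merge_topk or _stump code, just what toggling the flag looks like in general):

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def row_mins_parallel(D):
    out = np.empty(D.shape[0])
    for i in prange(D.shape[0]):   # prange: rows are distributed across threads
        out[i] = D[i].min()
    return out

@njit  # parallel=False by default; a plain, single-threaded loop
def row_mins_serial(D):
    out = np.empty(D.shape[0])
    for i in range(D.shape[0]):
        out[i] = D[i].min()
    return out

# e.g., row_mins_parallel(np.random.rand(1000, 500))
```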

@NimaSarajpoor (Collaborator Author) commented Nov 7, 2022

@seanlaw

Does that make sense?

Probably somewhere before this point. Though, you still had parallel=True turned on for merge_topk

I can do that.

Just to make sure we are on the same page: I will undo the changes to get to this point. I assume you, for now, are "okay" with the overhead it added to the computation. And, we want to tackle the performance later in another PR.

(Btw, the least we can do is to just avoid using the split ρ to reduce overhead; see one of your previous comments. So, I can undo all of these recent changes and just use one ρ. This would be the only change in stump, and if it improves top-k for k==1 I will keep it; otherwise I will not change it, and then we will merge this PR.)

@NimaSarajpoor (Collaborator Author) commented Nov 7, 2022

@seanlaw
I undid the changes and now we are at this point (so everything is good). I also pulled the latest changes and resolved the conflicts.

(Btw, the least we can do is to just avoid using the split ρ to reduce overhead; see #595 (comment). So, I can undo all of these recent changes and just use one ρ. This would be the only change in stump, and if it improves top-k for k==1 I will keep it; otherwise I will not change it, and then we will merge this PR.)

Please let me know what you think :)

@seanlaw (Contributor) commented Nov 8, 2022

Thank you @NimaSarajpoor! I will find some time to review it

@seanlaw seanlaw changed the title [WIP] Add Top-K Nearest Neighbors Matrix Profile #592 [WIP] Add Top-K Nearest Neighbors Matrix Profile to STUMP* #592 Nov 8, 2022
@seanlaw (Contributor) commented Nov 8, 2022

@NimaSarajpoor I think everything looks good. Would you mind doing a performance comparison again just to confirm that we have reached the same state as before?

Once we confirm this then I think we can merge.

Btw, before we re-focus on the performance of stump* top-k, I think we should focus on aamp* top-k first and complete all of that. This way, we can at least be at a functional state that is consistent everywhere, and performance can come later. What do you think?

@NimaSarajpoor (Collaborator Author)

Btw, before we re-focus on the performance of stump* top-k, I think we should focus on aamp* top-k first and complete all of that. This way, we can at least be at a functional state that is consistent everywhere, and performance can come later. What do you think?

I think that is a reasonable approach. Let's add support for top-k in both normalized and non-normalized first.

Would you mind doing a performance comparison again just to confirm that we have reached the same state as before?

Will do so :)

@seanlaw seanlaw changed the title [WIP] Add Top-K Nearest Neighbors Matrix Profile to STUMP* #592 [WIP] Add Top-K Nearest Neighbors for Normalized Matrix Profile #592 Nov 8, 2022
@NimaSarajpoor (Collaborator Author) commented Nov 8, 2022

Below, I am providing the computing time of main and top-k (k==1) for the different branches. I also included top-k (k==10) just to show how top-k performs for k > 1. One can ignore this and just focus on comparing main with top-k (k==1).

n is the length of time series T (input), and m is the window size.

stump

| stump (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.0007 | 0.076 | 0.293 | 2.8 | 17.82 | 102.28 |
| top-k (k==1) | 0.0092 | 0.185 | 0.517 | 2.59 | 13.93 | 83.87 |
| top-k (k==10) | 0.0451 | 0.66 | 2.28 | 14.67 | - | - |

NOTE: top-k performs better for n=50_000, n=100_000 , and n=200_000
NOTE: Compared to this table, the computing time of stump in both main and top-k is now lower thanks to the benefit we got from using np.uint64.

stumped

| stumped (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.06 | 0.138 | 0.406 | 2.94 | 18.51 | 104.79 |
| top-k (k==1) | 0.068 | 0.21 | 0.5 | 2.54 | 14.61 | 88.5 |
| top-k (k==10) | 0.094 | 0.673 | 2.36 | 15.16 | - | - |

NOTE: top-k performs better for n=50_000, n=100_000 , and n=200_000

gpu_stump

LATER! SEE the BOTTOM of THIS COMMENT

prescrump

| prescrump (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.0028 | 0.076 | 0.297 | 2.233 | 10.58 | 57.82 |
| top-k (k==1) | 0.0072 | 0.1205 | 0.3808 | 2.385 | 10.29 | 58.87 |
| top-k (k==10) | 0.017 | 0.4089 | 1.058 | 4.74 | - | - |

I think the computing time of top-k (k==1) should be generally higher here since we check for duplicates in top-k prescrump. So, the results are reasonable.

scrump (with prescrump=False)

note: applied .update twice

| scrump_prescrump==False (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.0076 | 0.056 | 0.114 | 0.354 | 0.97 | 3.288 |
| top-k (k==1) | 0.016 | 0.1129 | 0.23 | 0.61 | 1.42 | 3.835 |
| top-k (k==10) | 0.021 | 0.382 | 0.986 | 3.19 | - | - |

stumpi (with egress=True)

note: applied .update once

| stumpi_egress==True (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.009 | 0.134 | 0.405 | 3.072 | 18.29 | 96.19 |
| top-k (k==1) | 0.016 | 0.19 | 0.631 | 2.64 | 14.28 | 77.42 |
| top-k (k==10) | 0.044 | 0.667 | 2.29 | 15.54 | - | - |

Again, note that top-k performs better for n=50_000, n=100_000, and n=200_000. (You may want to check it on your PC to see whether my conclusion is correct so far.)


gpu_stump

| gpu_stump (m=50) | n=1000 | n=10_000 | n=20_000 | n=50_000 | n=100_000 | n=200_000 |
| --- | --- | --- | --- | --- | --- | --- |
| main | 0.3 | 2.12 | 4.2 | 10.47 | 21.92 | 50.56 |
| top-k (k==1) | 0.37 | 2.92 | 5.59 | 13.92 | 26.95 | 60.02 |
| top-k (k==10) | 0.38 | 2.83 | 5.57 | 14.57 | - | - |

Personally, I do not like the performance of top-k gpu_stump (notice that its computing time is about 20% greater than main for n=100_000 and n=200_000). So, I locally (on a local branch on my PC) revised this function and used one array $\rho$ instead of separate arrays for the top-k profile, left profile, and right profile; in other words, I avoided splitting arrays. The computing times for n=100_000 and n=200_000 then became around 22 and 47, respectively! (very close to main).

If you think we should revise gpu_stump, I can push its commit. If you think it is okay, please feel free to merge it :)

@seanlaw (Contributor) commented Nov 9, 2022

@NimaSarajpoor Almost there. It looks like we aren't hitting some lines of code according to coverage tests:

Name             Stmts   Miss  Cover   Missing
----------------------------------------------
stumpy/core.py     589      3    99%   2694-2696
tests/naive.py    1122      5    99%   1890-1894
----------------------------------------------
TOTAL            12209      8    99%

It looks like we don't actually have any code that exercises the if ρA.ndim == 1 branch. Can you please add a test for this? Should there have been a test that would've triggered this condition? It looks like the same issue exists in naive.py as well. I'm guessing the same input would trigger both.

Thus, it's important to not only look for tests that pass/fail but also that we are achieving 100% coverage (without using # pragma: no cover).

@NimaSarajpoor (Collaborator Author) commented Nov 9, 2022

Thus, it's important to not only look for tests that pass/fail but also that we are achieving 100% coverage (without using # pragma: no cover).

I am trying to understand why I missed this! I thought the coverage results should show up here, right? But I cannot find them now. After your comment, I checked the GitHub Actions log and found them there. But I should have seen them here on this PR page as well, right?

It looks like we don't actually have any code that tests for when if ρA.ndim == 1. Can you please add a test for this?

I just took a look and noticed that there was a typo in test function tests/test_core.py::test_merge_topk_ρI_with_1D_input. I mistakenly called the function _merge_topk_PI instead of _merge_topk_ρI. I fixed it. I will push the commit(s).

@NimaSarajpoor (Collaborator Author)

@seanlaw
I think I took care of the coverage. It is now standing at 100%.

@seanlaw (Contributor) commented Nov 9, 2022

I am trying to understand why I missed this! I thought the coverage results should show up here, right? But I cannot find them now. After your comment, I checked the GitHub Actions log and found them there. But I should have seen them here on this PR page as well, right?

I'm not sure. Sometimes I see it but I don't see it here either. 🤷‍♂️ Anyhow, no worries and thank you for addressing it.

I think I took care of the coverage. It is now standing at 100%.

@NimaSarajpoor Thank you for this wonderful contribution! Your dedication will be greatly appreciated by the matrix profile community! There is still a bit of follow up work but I appreciate all of your time and efforts.

@seanlaw seanlaw merged commit e8a7dd4 into TDAmeritrade:main Nov 9, 2022
@NimaSarajpoor (Collaborator Author)

@NimaSarajpoor Thank you for this wonderful contribution! Your dedication will be greatly appreciated by the matrix profile community! There is still a bit of follow up work but I appreciate all of your time and efforts.

Thank you for allowing me to take this opportunity. It has been a wonderful journey. I learned a lot :)
