About how you efficiently calculate the winner and the neighbourhood function. #163

Open · youyinnn opened this issue Mar 6, 2023 · 1 comment
youyinnn commented Mar 6, 2023

Hi, thanks for the excellent work.

I have a few doubts because I am bad at math.

I notice that you use this:

np.linalg.norm(input - weight, axis=-1)

to calculate the Euclidean distance matrix-wise. Could you explain this in a bit more detail? How did you come up with this approach?
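To check my own understanding, here is a tiny self-contained sketch of what I think is going on (the names weights and x are just my placeholders, not MiniSom's variables): the subtraction broadcasts one input vector against the whole weight grid, and the norm over the last axis collapses it to a 2-D distance map.

import numpy as np

rows, cols, dim = 3, 4, 2
weights = np.random.rand(rows, cols, dim)    # toy (rows, cols, dim) weight grid
x = np.random.rand(dim)                      # one input sample

fast = np.linalg.norm(x - weights, axis=-1)  # broadcasting: (rows, cols) map in one shot

slow = np.empty((rows, cols))                # same thing with explicit loops
for i in range(rows):
    for j in range(cols):
        slow[i, j] = np.sqrt(((x - weights[i, j]) ** 2).sum())

assert np.allclose(fast, slow)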

Another part is the neighbourhood function:
[image: screenshot of the neighbourhood function implementation]
Here you handle it using linear algebra.

Could you point me to some material on how you derived these calculations?
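For context, the kind of neighbourhood I have in mind is a plain Gaussian centred on the winner, evaluated for every node at once over the meshgrid. This is only my own sketch of the idea (assuming a square grid and the meshgrid convention from my process() below), not MiniSom's code:

import numpy as np

def gaussian_neighborhood(X, Y, winner_idx, sigma):
    # X, Y come from np.meshgrid over the grid coordinates;
    # the Gaussian is centred on the winner and computed for the whole grid at once.
    d = 2 * sigma * sigma
    ax = np.exp(-((X - winner_idx[1]) ** 2) / d)   # column (horizontal) term
    ay = np.exp(-((Y - winner_idx[0]) ** 2) / d)   # row (vertical) term
    return ax * ay                                 # (rows, cols) neighbourhood weights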

The way you implement them is highly efficient compared with a regular implementation (by someone like me who is terrible at math 😢).

Following are my implementations:

import numpy as np
from scipy.spatial import distance

def winner(input, net):
    # normal way, extremely low performance:
    # dis_map = np.apply_along_axis(distance.euclidean, 2, net, input)
    dis_map = np.linalg.norm(input - net, axis=-1)   # (rows, cols) distance to every node
    return np.unravel_index(np.argmin(dis_map, axis=None), dis_map.shape)

Also, for the update you use the Einstein summation convention (np.einsum); I find it hard to connect that to the way you calculate the update formula (I put a small sanity check after my code below):
[image: the SOM weight-update formula, w(t+1) = w(t) + η(t) · h(t) · (x(t) − w(t))]

from tqdm import tqdm

def process(x, net, sigma, lr, iterations):
    x = x.copy()                 # shuffle a copy so the caller's data is left untouched
    np.random.shuffle(x)
    X, Y = np.meshgrid(np.arange(0, net.shape[0], 1), np.arange(0, net.shape[1], 1))
    for i in tqdm(range(iterations)):
        # competition
        t = (i % len(x)) - 1
        winner_idx = winner(x[t], net)

        # update
        eta_    = asymptotic_decay(lr, t, iterations)       # η(t)  learning rate
        sigma_  = asymptotic_decay(sigma, t, iterations)    # σ(t)  neighborhood size

        # neighborhood function (27, 27)
        gau     = neighborhood_function(X, Y, winner_idx, sigma_)

        # normal way, very low performance:
        # broadcast subtraction delta (27, 27, 2)
        # delta = x[t] - net
        # for i in range(net.shape[0]):
        #     for j in range(net.shape[1]):
        #         net[i,j] += (gau * eta_)[i,j] * delta[i,j]

        # vectorised update: scale each node's delta vector by its scalar gau * eta_
        net     += np.einsum('ij, ijk->ijk', gau * eta_, x[t] - net)
                
    return net
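The small sanity check I mentioned above, just to convince myself that the einsum is nothing more than scaling each node's 2-vector delta by the scalar gau * eta_ at that node (random arrays standing in for the real values):

import numpy as np

rng = np.random.default_rng(0)
h = rng.random((27, 27))         # stand-in for gau * eta_
d = rng.random((27, 27, 2))      # stand-in for x[t] - net

out_einsum = np.einsum('ij,ijk->ijk', h, d)
out_bcast  = h[:, :, None] * d   # same update written with plain broadcasting
assert np.allclose(out_einsum, out_bcast)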
@JustGlowing (Owner) commented

hi @youyinnn
