Slow DumpLayerConnections() #3142

Open
xavierotazuGDS opened this issue Mar 8, 2024 · 4 comments · May be fixed by #3160
@xavierotazuGDS

Is your feature request related to a problem? Please describe.
Execution of DumpLayerConnections() seems much too slow (see the code below).

Describe alternatives you've considered
Given the present implementation of DumpLayerConnections(), one solution would be to profile the implementation and improve its speed.
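
As a first pass, the standard cProfile module can be wrapped around the call from the Python side (a minimal sketch, using the network from the example further below; since essentially all of the time is spent inside the single C++ kernel call, a native profiler such as perf or gprof would be needed to break it down further):

import cProfile

# Nearly all of the cumulative time will be attributed to the single kernel call,
# so this mainly confirms that the cost is on the C++ side of DumpLayerConnections().
cProfile.run(
    "nest.DumpLayerConnections(input_l, layer_0, 'static_synapse', 'conn.txt')",
    sort='cumtime',
)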

Another possible solution (in addition to the previous one) could be to add another version of DumpLayerConnections() that accepts a collection of connections as an input parameter, something similar to:

my_conn = nest.GetConnections(layer1, layer2, ...)
nest.DumpConnections(my_conn)
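
For reference, something in this spirit can already be approximated in pure Python with the existing GetConnections()/get() API. The helper below is a rough sketch (hypothetical name, not part of NEST); it writes source, target, weight and delay, but not the displacement column that DumpLayerConnections() produces, and it assumes more than one matching connection so that get() returns lists:

import nest

def dump_connections_py(pre, post, syn_model, fname):
    """Write one 'source target weight delay' line per matching connection."""
    conns = nest.GetConnections(source=pre, target=post, synapse_model=syn_model)
    data = conns.get(['source', 'target', 'weight', 'delay'])
    with open(fname, 'w') as f:
        for s, t, w, d in zip(data['source'], data['target'], data['weight'], data['delay']):
            f.write(f"{s} {t} {w} {d}\n")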

Additional context
The following code takes 1m20s to execute with NEST 3.6 (git master head from February 12th), Python 3.8.12, RockyLinux 8.5:

import nest

pos = nest.spatial.grid(shape=[100, 100])

input_l = nest.Create('iaf_psc_alpha', positions=pos)
layer_0 = nest.Create('iaf_psc_alpha', positions=pos)

conn_neur = {'rule': 'pairwise_bernoulli', 'mask': {'grid': {'shape': [10, 10]}}}
syn_0 = {'synapse_model': 'static_synapse'}

nest.Connect(input_l, layer_0, conn_neur, syn_0)

nest.DumpLayerConnections(input_l, layer_0, 'static_synapse', 'conn.txt')
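
The report does not state how the 1m20s was measured; a minimal wall-clock measurement around the dump call could look like this:

import time

t0 = time.perf_counter()
nest.DumpLayerConnections(input_l, layer_0, 'static_synapse', 'conn.txt')
print(f"DumpLayerConnections() took {time.perf_counter() - t0:.1f} s")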

@xavierotazuGDS (Author)

The following version of dump_connections() reduces the execution time to 9 seconds. I checked that the results are correct and exactly the same as those of the original method.

template < int D >
void
Layer< D >::dump_connections( std::ostream& out,
  NodeCollectionPTR node_collection,
  AbstractLayerPTR target_layer,
  const Token& syn_model )
{
  std::vector< std::pair< Position< D >, size_t > >* src_vec = get_global_positions_vector( node_collection );

  // Dictionary with parameters for get_connections()
  DictionaryDatum conn_filter( new Dictionary );
  def( conn_filter, names::synapse_model, syn_model );
  def( conn_filter, names::target, NodeCollectionDatum( target_layer->get_node_collection() ) );

  // Fetch the whole connectome with a single get_connections() call instead of
  // one call per source node. The original per-source loop is kept below for reference:
  //
  // std::vector< size_t > source_array( 1 );
  //
  // for ( typename std::vector< std::pair< Position< D >, size_t > >::iterator src_iter = src_vec->begin();
  //   src_iter != src_vec->end();
  //   ++src_iter )
  // {
  //   const size_t source_node_id = src_iter->second;
  //   const Position< D > source_pos = src_iter->first;
  //
  //   source_array[ 0 ] = source_node_id;
  //   def( conn_filter, names::source, NodeCollectionDatum( NodeCollection::create( source_array ) ) );

  def( conn_filter, names::source, NodeCollectionDatum( node_collection ) );
  ArrayDatum connectome = kernel().connection_manager.get_connections( conn_filter );

  size_t previous_source_node_id = getValue< ConnectionDatum >( connectome.get( 0 ) ).get_source_node_id();
  Position< D > source_pos = src_vec->begin()->first;

  // Print information about all local connections for the current source
  for ( size_t i = 0; i < connectome.size(); ++i )
  {
    ConnectionDatum con_id = getValue< ConnectionDatum >( connectome.get( i ) );
    const size_t source_node_id = con_id.get_source_node_id();

    // Search src_vec for source_pos only when the source node changes
    if ( source_node_id != previous_source_node_id )
    {
      source_pos = src_vec->begin()->first;

      for ( typename std::vector< std::pair< Position< D >, size_t > >::iterator src_iter = src_vec->begin();
            src_iter != src_vec->end() && source_node_id != src_iter->second;
            ++src_iter, source_pos = src_iter->first )
      {
      }

      previous_source_node_id = source_node_id;
    }

    // original: get_synapse_status( con_id.get_source_node_id(), ...
    DictionaryDatum result_dict = kernel().connection_manager.get_synapse_status( source_node_id,
      con_id.get_target_node_id(),
      con_id.get_target_thread(),
      con_id.get_synapse_model_id(),
      con_id.get_port() );

    long target_node_id = getValue< long >( result_dict, names::target );
    double weight = getValue< double >( result_dict, names::weight );
    double delay = getValue< double >( result_dict, names::delay );

    // Print source, target, weight, delay, rports
    out << source_node_id << ' ' << target_node_id << ' ' << weight << ' ' << delay;

    Layer< D >* tgt_layer = dynamic_cast< Layer< D >* >( target_layer.get() );

    out << ' ';
    const long tnode_lid = tgt_layer->node_collection_->get_lid( target_node_id );
    assert( tnode_lid >= 0 );
    tgt_layer->compute_displacement( source_pos, tnode_lid ).print( out );
    out << '\n';
  }

  // }  // end of original per-source loop
}

@xavierotazuGDS (Author) commented Mar 18, 2024

Here is a further improved version, which assumes (I do not know whether this is guaranteed) that both the connectome and the source node positions (src_vec) are ordered by source node. I checked that the results are correct and exactly the same as those of the original version.

template < int D >
void
Layer< D >::dump_connections( std::ostream& out,
  NodeCollectionPTR node_collection,
  AbstractLayerPTR target_layer,
  const Token& syn_model )
{
  std::vector< std::pair< Position< D >, size_t > >* src_vec = get_global_positions_vector( node_collection );

  // Dictionary with parameters for get_connections()
  DictionaryDatum conn_filter( new Dictionary );
  def( conn_filter, names::synapse_model, syn_model );
  def( conn_filter, names::target, NodeCollectionDatum( target_layer->get_node_collection() ) );

  def( conn_filter, names::source, NodeCollectionDatum( node_collection ) );
  ArrayDatum connectome = kernel().connection_manager.get_connections( conn_filter );

  size_t previous_source_node_id = getValue< ConnectionDatum >( connectome.get( 0 ) ).get_source_node_id();
  typename std::vector< std::pair< Position< D >, size_t > >::iterator src_iter = src_vec->begin();
  Position< D > source_pos = src_iter->first;

  // Print information about all local connections for the current source
  for ( size_t i = 0; i < connectome.size(); ++i )
  {
    ConnectionDatum con_id = getValue< ConnectionDatum >( connectome.get( i ) );
    const size_t source_node_id = con_id.get_source_node_id();

    // Advance to the next source position only when the source node changes;
    // this relies on connectome and src_vec being ordered identically by source node.
    if ( source_node_id != previous_source_node_id )
    {
      ++src_iter; // do not increment inside assert(): the increment would be dropped with NDEBUG
      assert( src_iter != src_vec->end() );
      source_pos = src_iter->first;

      previous_source_node_id = source_node_id;
    }

    DictionaryDatum result_dict = kernel().connection_manager.get_synapse_status( source_node_id,
      con_id.get_target_node_id(),
      con_id.get_target_thread(),
      con_id.get_synapse_model_id(),
      con_id.get_port() );

    long target_node_id = getValue< long >( result_dict, names::target );
    double weight = getValue< double >( result_dict, names::weight );
    double delay = getValue< double >( result_dict, names::delay );

    // Print source, target, weight, delay, rports
    out << source_node_id << ' ' << target_node_id << ' ' << weight << ' ' << delay;

    Layer< D >* tgt_layer = dynamic_cast< Layer< D >* >( target_layer.get() );

    out << ' ';
    const long tnode_lid = tgt_layer->node_collection_->get_lid( target_node_id );
    assert( tnode_lid >= 0 );
    tgt_layer->compute_displacement( source_pos, tnode_lid ).print( out );
    out << '\n';
  }
}
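
The ordering assumption can at least be sanity-checked from Python for the connections visible to the calling process (a rough sketch, not part of the proposed patch):

# Check that the connectome returned by the kernel is ordered by source node id,
# which the version above relies on when it advances src_iter only on a change of source.
conns = nest.GetConnections(input_l, layer_0, 'static_synapse')
sources = conns.get('source')
assert sources == sorted(sources)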

@xavierotazuGDS (Author)

I have proposed this solution in pull request #3160.

@terhorstd added the T: Enhancement, S: Normal, and I: No breaking change labels on Mar 20, 2024
@terhorstd added this to To do (open issues) in Kernel via automation on Mar 20, 2024

Issue automatically marked stale!

@github-actions bot added the stale label on May 20, 2024