snudda.input.input

class snudda.input.input.SnuddaInput(network_path=None, snudda_data=None, input_config_file=None, spike_data_filename=None, hdf5_network_file=None, time=10.0, is_master=True, h5libver='latest', rc=None, random_seed=None, time_interval_overlap_warning=True, logfile=None, verbose=False, use_meta_input=True)[source]

Generates input for the simulation.

Constructor.

Parameters
  • network_path (str) – Path to network directory

  • snudda_data (str) – Path to Snudda Data

  • input_config_file (str) – Path to input config file, default input.json in network_path

  • spike_data_filename (str) – Path to output file, default input-spikes.hdf5

  • hdf5_network_file (str) – Path to network file, default network-synapses.hdf5

  • time (float) – Duration of simulation to generate input for, default 10 seconds

  • is_master (bool) – True if this node is the master, False if it is a worker

  • h5libver (str) – Version of HDF5 library to use, default “latest”

  • rc – ipyparallel remote client

  • random_seed (int) – Random seed for input generation

  • time_interval_overlap_warning (bool) – Warn if input intervals specified overlap

  • logfile (str) – Log file

  • verbose (bool) – Print logging
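
A minimal usage sketch (the paths below are placeholders, and the network files, e.g. network-synapses.hdf5 and input.json, are assumed to already exist in the network directory):

  from snudda.input.input import SnuddaInput

  si = SnuddaInput(network_path="networks/example",                  # placeholder path
                   input_config_file="networks/example/input.json",  # placeholder path
                   time=5.0,                                         # generate 5 seconds of input
                   verbose=True)
  si.generate()   # writes the spikes to input-spikes.hdf5 in the network directory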

check_sorted()[source]

Checks that spikes are in chronological order.

static create_spike_matrix(spikes)[source]

Creates a spike matrix from a list of spikes.

static cull_spikes(spikes, p_keep, rng, time_range=None)[source]

Keeps a fraction of all spikes.

Parameters
  • spikes – Spike train

  • p_keep – Probability to keep each spike

  • rng – Numpy random number stream

  • time_range – If p_keep is a vector, this specifies the time range that each p_keep value applies to
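
A minimal sketch of calling the static method, assuming it returns the retained spike times:

  import numpy as np
  from snudda.input.input import SnuddaInput

  rng = np.random.default_rng(1234)
  spikes = np.sort(rng.uniform(0, 10, size=100))                # 100 spike times in [0, 10] s
  kept = SnuddaInput.cull_spikes(spikes, p_keep=0.5, rng=rng)   # keep roughly half of the spikes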

dendrite_input_locations(neuron_id, rng, synapse_density=None, num_spike_trains=None, cluster_size=None, cluster_spread=3e-05)[source]

Returns dendrite input locations.

Parameters
  • neuron_id – Neuron ID

  • rng – Numpy random number stream

  • synapse_density (str) – Synapse density function f(d), where d is the distance to the soma along the dendrite

  • num_spike_trains (int) – Number of spike trains

  • cluster_size (int) – Size of each synaptic cluster (None = No clustering)

  • cluster_spread (float) – Spread of cluster along dendrite (in meters)
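
A hedged sketch of how the call might look, assuming si is an already constructed SnuddaInput instance and that synapse_density is given as a string expression in d (distance to the soma in meters):

  import numpy as np

  rng = np.random.default_rng(9)
  # "si" is assumed to be an already constructed SnuddaInput instance.
  locations = si.dendrite_input_locations(neuron_id=0, rng=rng,
                                          synapse_density="1",   # uniform density (example expression)
                                          num_spike_trains=50)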

static estimate_correlation(spikes_a, spikes_b, dt=0)[source]

Estimate correlation between spikes_a and spikes_b, assuming correlation window of dt.

Parameters
  • spikes_a

  • spikes_b

  • dt
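
A small sketch, assuming the method returns a scalar correlation estimate:

  import numpy as np
  from snudda.input.input import SnuddaInput

  rng = np.random.default_rng(0)
  spikes_a = np.sort(rng.uniform(0, 10, size=50))
  spikes_b = np.sort(rng.uniform(0, 10, size=50))
  corr = SnuddaInput.estimate_correlation(spikes_a, spikes_b, dt=0.005)   # 5 ms correlation window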

generate()[source]

Generates input for network.

generate_poisson_spikes_helper(frequencies, time_ranges, rng)[source]

Generates spike trains with given frequencies within time_ranges, using rng stream.

Parameters
  • frequencies (list) – List of frequencies

  • time_ranges (list) – List of tuples with start and end time for each frequency range

  • rng – Numpy random stream

generate_seeds(num_states)[source]

From the master seed, generate a seed sequence for inputs.

generate_spikes_function(frequency_function, time_range, rng, dt=0.0001, p_keep=1)[source]

Generates spikes with a momentary frequency given by frequency_function.

Parameters
  • frequency_function – Vectorised Python function taking t as its argument and returning the momentary frequency. If it is not a Python function, numexpr.evaluate is run on it (with t as argument). Note that the t passed to the function is 0 at the stimulation start time, e.g. for a stimulus starting at 4 seconds, f(t=0) is calculated at the start, and if it ends at 5 seconds, f(t=1) is calculated at the end.

  • time_range – Interval of time to generate spikes for

  • rng – Numpy rng object

  • dt – Timestep

  • p_keep – Probability to keep each spike
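
A hedged sketch using a string expression (evaluated with numexpr), assuming si is an already constructed SnuddaInput instance:

  import numpy as np

  rng = np.random.default_rng(42)
  # "si" is assumed to be an already constructed SnuddaInput instance.
  # The string is evaluated with numexpr; t starts at 0 at the stimulus onset.
  # Here: a 1 Hz sinusoidal modulation around 10 Hz for a 5 second stimulus.
  spikes = si.generate_spikes_function("10 * (1 + sin(2 * 3.1416 * t))",
                                       time_range=(0, 5), rng=rng, dt=1e-4)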

generate_spikes_function_helper(frequencies, time_ranges, rng, dt, p_keep=1)[source]

Generates spike trains with given frequencies within time_ranges, using rng stream.

Parameters
  • frequencies (list) – List of frequencies

  • time_ranges (list) – List of tuples with start and end time for each frequency range

  • rng – Numpy random stream

  • dt – Timestep

  • p_keep – Probability to keep each spike

get_master_node_rng()[source]

Gets the random number generator for the master node, derived from the master seed.

static jitter_spikes(spike_trains, dt, rng, time_range=None)[source]

Jitter spikes in a spike train.

If a time_range (start_time, end_time) is given, all jittered spike times are taken modulo the duration, so spikes that are jittered to before the start time wrap around and reappear at the end of the interval.

Parameters
  • spike_trains – spike times

  • dt – amount of jitter

  • rng – Numpy random stream

  • time_range (tuple) – (start, end); see the note above about wrapping around the edges
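
A short sketch of the static call; the exact container expected for spike_trains (a list of arrays or a spike matrix) is an assumption here:

  import numpy as np
  from snudda.input.input import SnuddaInput

  rng = np.random.default_rng(7)
  trains = [np.sort(rng.uniform(0, 10, size=20)) for _ in range(3)]
  jittered = SnuddaInput.jitter_spikes(trains, dt=0.01, rng=rng,
                                       time_range=(0, 10))   # spikes wrap around at the edges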

make_correlated_spikes(freq, time_range, num_spike_trains, p_keep, rng, population_unit_spikes=None, ret_pop_unit_spikes=False, jitter_dt=None, input_generator=None)[source]

Make correlated spikes.

Parameters
  • freq (float or str) – frequency of spike train

  • time_range (tuple) – start time, end time of spike train

  • num_spike_trains (int) – number of spike trains to generate

  • p_keep (float or list of floats) – Fraction of shared population unit spikes to include in each spike train; p_keep=1 gives 100% correlated trains

  • rng – Numpy random number stream

  • population_unit_spikes

  • ret_pop_unit_spikes (bool) – If False, returns only spikes; if True, returns (spikes, population unit spikes)

  • jitter_dt (float) – amount to jitter all spikes

  • input_generator (str) – “poisson” (default) or “frequency_function”
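
A hedged sketch, assuming si is an already constructed SnuddaInput instance:

  import numpy as np

  rng = np.random.default_rng(123)
  # "si" is assumed to be an already constructed SnuddaInput instance.
  spikes = si.make_correlated_spikes(freq=10.0, time_range=(0, 5),
                                     num_spike_trains=20, p_keep=0.5,
                                     rng=rng, jitter_dt=0.002)   # 2 ms jitter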

make_input_helper_parallel(args)[source]

Helper function for parallel input generation.

make_input_helper_serial(neuron_id, input_type, freq, t_start, t_end, synapse_density, num_spike_trains, population_unit_spikes, jitter_dt, population_unit_id, conductance, correlation, mod_file, parameter_file, parameter_list, random_seed, cluster_size=None, cluster_spread=None, dendrite_location=None, input_generator=None, population_unit_fraction=1)[source]

Generates Poisson input.

Parameters
  • neuron_id (int) – Neuron ID to generate input for

  • input_type – Input type

  • freq – Frequency of input

  • t_start – Start time of input

  • t_end – End time of input

  • synapse_density – Density function f(d), d=distance to soma along dendrite

  • num_spike_trains – Number of spike trains

  • jitter_dt – Amount of time to jitter all spikes

  • population_unit_spikes – Population unit spikes

  • population_unit_id – Population unit ID

  • conductance – Conductance

  • correlation – correlation

  • mod_file – Mod file

  • parameter_file – Parameter file for input synapses

  • parameter_list – Parameter list (to inline parameters, instead of reading from file)

  • random_seed – Random seed.

  • cluster_size – Input synapse cluster size

  • cluster_spread – Spread of cluster along dendrite (in meters)

  • dendrite_location – Override location of dendrites, list of (sec_id, sec_x) tuples.

  • input_generator – “poisson” or “frequency_function”

  • population_unit_fraction – Fraction of population unit spikes used; 1.0 = all correlation within the population unit, 0.0 = correlation only within the particular neuron

make_neuron_input_parallel()[source]

Generates input; runs in parallel if rc (remote client) was provided at initialisation.

make_population_unit_spike_trains(rng)[source]

Generates population unit spike trains. Each synaptic input will contain a fraction of population unit spikes, which are taken from a stream of spikes unique to that particular population unit. This function generates these correlated spike streams.

make_uncorrelated_spikes(freq, t_start, t_end, n_spike_trains, rng)[source]

Generate uncorrelated spikes.

Parameters
  • freq – frequency

  • t_start – start time

  • t_end – end time

  • n_spike_trains – number of spike trains to generate

  • rng – numpy random number stream
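
A hedged sketch, assuming si is an already constructed SnuddaInput instance:

  import numpy as np

  rng = np.random.default_rng(5)
  # "si" is assumed to be an already constructed SnuddaInput instance.
  spikes = si.make_uncorrelated_spikes(freq=5.0, t_start=0.0, t_end=10.0,
                                       n_spike_trains=10, rng=rng)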

static mix_fraction_of_spikes(spikes_a, spikes_b, fraction_a, fraction_b, rng, time_range=None)[source]

Picks fraction_a of spikes_a and fraction_b of spikes_b, and returns a sorted spike train.

Parameters
  • spikes_a (np.array) – Spike train A

  • spikes_b (np.array) – Spike train B

  • fraction_a (float) – Fraction of spikes in train A picked, e.g. 0.4 means 40% of spikes are picked

  • fraction_b (float) – Fraction of spikes in train B picked

  • rng – Numpy rng object

  • time_range – (start_times, end_times) for the different fractions
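
A small sketch of the static call:

  import numpy as np
  from snudda.input.input import SnuddaInput

  rng = np.random.default_rng(11)
  spikes_a = np.sort(rng.uniform(0, 10, size=100))
  spikes_b = np.sort(rng.uniform(0, 10, size=100))
  mixed = SnuddaInput.mix_fraction_of_spikes(spikes_a, spikes_b,
                                             fraction_a=0.4, fraction_b=0.6,
                                             rng=rng)   # ~40 spikes from A, ~60 from B, sorted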

static mix_fraction_of_spikes_OLD(spikes_a, spikes_b, fraction_a, fraction_b, rng)[source]

Picks fraction_a of spikes_a and fraction_b of spikes_b, and returns a sorted spike train.

Parameters
  • spikes_a (np.array) – Spike train A

  • spikes_b (np.array) – Spike train B

  • fraction_a (float) – Fraction of spikes in train A picked, e.g. 0.4 means 40% of spikes are picked

  • fraction_b (float) – Fraction of spikes in train B picked

  • rng – Numpy rng object

static mix_spikes(spikes)[source]

Mixes spikes in list of spike trains into one sorted spike train.
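
A minimal sketch:

  import numpy as np
  from snudda.input.input import SnuddaInput

  train_a = np.array([0.1, 0.5, 0.9])
  train_b = np.array([0.2, 0.4, 0.8])
  merged = SnuddaInput.mix_spikes([train_a, train_b])   # one sorted spike train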

plot_spikes(neuron_id=None)[source]

Plots spikes for neuron_id.

static raster_plot(spike_times, mark_spikes=None, mark_idx=None, title=None, fig_file=None, fig=None)[source]

Raster plot of spike trains.

Parameters
  • spike_times

  • mark_spikes – list of spikes to mark

  • mark_idx – index of neuron with spikes to mark

  • title – title of plot

  • fig_file – path to figure

  • fig – matplotlib figure object
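
A hedged sketch, assuming spike_times can be passed as a list of spike-time arrays (one per train); the figure path is a placeholder:

  import numpy as np
  from snudda.input.input import SnuddaInput

  rng = np.random.default_rng(3)
  spike_times = [np.sort(rng.uniform(0, 2, size=30)) for _ in range(10)]
  SnuddaInput.raster_plot(spike_times, title="Example input spikes",
                          fig_file="raster-example.png")   # placeholder output path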

read_input_config_file()[source]

Read input configuration from JSON file.

read_network_config_file()[source]

Read network configuration JSON file.

setup_parallel()[source]

Sets up worker nodes for parallel execution.

verify_correlation(spike_trains, dt=0)[source]

Verifies correlation between spike trains. Note that this function is slow.

Parameters
  • spike_trains

  • dt

write_hdf5()[source]

Writes input spikes to HDF5 file.

write_log(text, flush=True, is_error=False, force_print=False)[source]

Writes to the log file. Use setup_log first. Text is only written to screen if self.verbose=True, is_error=True, or force_print=True.

Parameters
  • text (str) – Text to write

  • flush (bool) – Should all writes be flushed to disk directly?

  • is_error (bool) – Is this an error? Errors are always written.

  • force_print (bool) – Force printing, even if self.verbose=False.