
METHODS ARTICLE
NEUROINFORMATICS
published: 10 July 2014
doi: 10.3389/fninf.2014.00063

A flexible, interactive software tool for fitting the parameters of neuronal models

Péter Friedrich¹,², Michael Vella³, Attila I. Gulyás¹, Tamás F. Freund¹,² and Szabolcs Káli¹,²*

¹ Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, Hungary
² Faculty of Information Technology, Péter Pázmány Catholic University, Budapest, Hungary
³ Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK

Edited by: Eilif Benjamin Muller, Blue Brain Project, EPFL, Switzerland
Reviewed by: Moritz Helias, Jülich Research Centre and JARA, Germany; Werner Van Geit, EPFL, Switzerland
*Correspondence: Szabolcs Káli, Laboratory of Cerebral Cortex Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Szigony u. 43., Budapest H-1083, Hungary. e-mail: [email protected]

The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.

Keywords: neuronal modeling, python, software, simulation, model fitting, parameter optimization, graphical user interface

INTRODUCTION
Currently available experimental data make it possible to create increasingly complex multi-compartmental conductance-based neuron models, which have the potential to imitate the behavior of real neurons with great accuracy (De Schutter and Bower, 1994a,b; Poirazi et al., 2003; Hay et al., 2011). However, these models have many parameters, which are often poorly (or, at best, indirectly) constrained by the available data. One alternative to using detailed biophysical models, which is often used in network simulations, is to utilize much simpler (e.g., reduced compartmental or integrate-and-fire type) model neurons. These have fewer parameters; however, the remaining parameters are often not directly related to the underlying biophysics, and need to be set such that the behavior of the model cell best approximates that of the real neuron (Naud et al., 2008; Gerstner and Naud, 2009; Rossant et al., 2011). In most cases, the relationship between the values of the parameters and the output of the model is nonlinear (for an interesting exception, see Huys et al., 2006) and often rather complex. Accordingly, the task of finding the optimal parameter values is highly non-trivial, and has been the subject of extensive research (Vanier and Bower, 1999; Keren et al., 2005; Huys et al., 2006; Druckmann et al., 2007, 2008; Gurkiewicz and Korngreen, 2007; Van Geit et al., 2007, 2008; Huys and Paninski, 2009; Rossant et al., 2010, 2011; Eichner and Borst, 2011; Hendrickson et al., 2011; Bahl et al., 2012; Svensson et al., 2012; Vavoulis et al., 2012).

These studies have proposed a variety of methods to find the best-fitting model; the main differences concern the way in which the output of the model is compared to the target data (the cost function), and the procedure used to come up with new candidate solutions (the optimization algorithm). There are also several existing software solutions to this problem; notably, the general-purpose neural simulators NEURON (Carnevale and Hines, 2006) and GENESIS (Bower and Beeman, 1998) both offer some built-in tools for parameter search (Vanier and Bower, 1999), and some programs [such as Neurofitter (Van Geit et al.,

Frontiers in Neuroinformatics www.frontiersin.org July 2014 | Volume 8 | Article 63 | 1


Friedrich et al. Software for fitting neuronal models

2007) and Neurotune¹] have been specifically developed for this purpose. However, most of these tools offer a very limited choice of cost functions and/or optimization algorithms (and adding new ones is typically not straightforward), and thus it becomes difficult to apply them to new scenarios and to take advantage of new developments. In addition, few of these existing tools offer an intuitive user interface which would guide the casual user through the steps of model optimization, although an increasing number of laboratories now use computer simulations to complement experimental approaches, and employ model-based techniques to extract relevant variables from their data, which typically require the fitting of multiple model parameters.

In this article, we describe a new software tool called Optimizer², which attempts to address all of these issues. It offers an intuitive graphical user interface (GUI), which handles all of the main tasks involved in model optimization, and gives the user access to a variety of commonly used cost functions and optimization algorithms. At the same time, it is straightforward to extend the capabilities of the program in many different ways due to its modular design, which allows more advanced users to adapt the software to their particular needs.

DESIGN GOALS AND PRINCIPLES
The full specification of a model optimization problem requires one to provide the following pieces of information: (1) the form of the model, both at an abstract level (e.g., multi-compartmental model with a given morphology and channel specifications, or integrate-and-fire model of a given type) and as a specific implementation (e.g., a set of .hoc and .mod files in NEURON); (2) the set of tunable parameters in the model (along with their possible ranges); (3) the simulation protocol, i.e., the way the model should be stimulated and the variables to be recorded; (4) the target data (from experiments, or sometimes from a different model); (5) the cost function, i.e., a measure of how different the output of a particular simulated model is from the target data (this may be as simple as the sum of squared error over corresponding data points, or may involve the extraction and comparison of various features from the simulations and the target data). If there are multiple error measures (objectives), one possible approach, called single-objective optimization, is to define a single combined cost function by appropriately weighting the different objectives. Another approach, known as multi-objective optimization, is to treat each error measure separately, and look for a whole set of optimal solutions which represent different trade-offs between the objectives. Once the problem has been fully specified, the last critical ingredient is the algorithm which attempts to solve the model optimization problem by finding the set of tunable parameters which minimizes the cost function, i.e., the parameters for which the output of the model is as similar as possible to the target data. For relatively simple problems, several common algorithms will be able to find the single best set of parameters to a high degree of precision in a relatively short time; for very complex problems, no algorithm can be guaranteed to find this global optimum in a reasonable amount of time. In these latter cases, different optimization algorithms use qualitatively different strategies to come up with good solutions (which may or may not be the globally optimal one). Local algorithms (such as gradient descent) find the best solution in a limited part of the parameter space (typically defined by the initial set of parameters from which the search begins); global algorithms (such as evolutionary algorithms and simulated annealing) employ various heuristic strategies to explore the parameter space more extensively, while taking advantage of intermediate results to come up with new promising candidate solutions.

All of the components above may have almost infinitely many variants, so it may seem hopeless to create a simple user interface which allows one to specify such a large variety of problems effectively. However, several facts help alleviate this problem to some extent. First, a large percentage of the use cases that occur in practice are covered by a limited set of components; for instance, many electrophysiological experiments apply either current clamp or voltage clamp configurations with step stimuli while recording the membrane potential or holding current, respectively. These common situations can be managed effectively from a GUI. Second, for the models themselves, which show the largest possible variability, there are widely used structured descriptions, partly in generic formats [such as NeuroML (Gleeson et al., 2010) and NineML³], and partly in the form of model definitions in the languages of neural simulators (such as NEURON's .hoc and .mod files). These descriptions may be read and manipulated by the model optimization software, and also directly lead to code which may be executed by the simulators. Finally, the nature of the task is modular; the same ingredients may be combined in many different ways so that it becomes possible, for example, to implement a new cost function and then use it in combination with existing models, simulation protocols, and optimization algorithms.

Following these considerations, we set two (apparently conflicting) goals for our implementation of model optimization. On one hand, we wanted our solution to be as flexible as possible, supporting a wide and easily extensible range of model types, modeling software, simulation protocols, cost functions, and optimization algorithms. On the other hand, we wanted to provide a simple and intuitive interface which would guide the user through a variety of commonly occurring model optimization scenarios, without requiring programming expertise or deep knowledge of the optimization process.

In order to attain both of these goals, we decided to implement an essentially two-level design. The lower level (back end) would define the components of the model optimization workflow as described above, as well as the ways these must interact with each other to solve the task. This would be implemented in a highly modular fashion to allow the independent replacement of individual elements as well as the straightforward addition of new elements. The higher level (front end) would be implemented primarily as a GUI (although a command-line interface would also be provided to allow non-interactive processing). The front ends

¹ http://optimal-neuron.readthedocs.org/en/latest/
² Software available at https://github.com/vellamike/optimizer, online documentation at http://optimizer.readthedocs.org/
³ http://software.incf.org/software/nineml
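The single-objective approach described above, which collapses several objectives into one cost value through weighting, can be sketched in a few lines of Python. This is an illustrative stand-in, not Optimizer's API; the function name and the example numbers are hypothetical:

```python
# Illustrative sketch of single-objective optimization: several error
# measures are collapsed into one cost value by a weighted sum.

def combined_cost(errors, weights):
    """Collapse several (already normalized) error measures into one value."""
    assert len(errors) == len(weights)
    return sum(w * e for w, e in zip(weights, errors))

# e.g., normalized MSE, spike-count, and latency errors for one candidate
errors = [0.12, 0.4, 0.05]
weights = [0.5, 0.3, 0.2]   # relative importance of each objective
cost = combined_cost(errors, weights)   # approximately 0.19
```

An optimization algorithm would then minimize `combined_cost` over candidate parameter sets, rather than handling the three objectives separately as in the multi-objective approach.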




would allow the user to select from the components provided by the back end, and to set all relevant options and parameters within these components. In addition, the GUI would also provide some basic tools for the inspection of data and the results of model optimization.

IMPLEMENTATION
The Python programming language was the obvious choice for the implementation of the software. First, Python offers the necessary power and flexibility to handle the task. Second, the open source modules offered by Python already include solutions to many important sub-tasks, such as data handling, visualization, and non-linear optimization. Almost all of the commonly used neural simulation packages now have a Python interface (Eppler et al., 2008; Goodman and Brette, 2009; Hines et al., 2009; Cornelis et al., 2012). This makes Python an optimal tool for the creation of the aforementioned framework.

The software can interface directly with NEURON to read, modify, and run models described in NEURON's own format. Other simulators are supported indirectly as "black boxes" which communicate with Optimizer through files, and return simulation results based on parameters generated by Optimizer. Optimization itself can be carried out using a selection of (local and global) algorithms from the inspyred⁴ and scipy⁵ packages. Most cost functions are implemented within Optimizer, except for Phase Plane Trajectory Density (PPTD), which relies on the pyelectro package⁶. The GUI was implemented using the wxPython package⁷, a wrapper for the C++ GUI toolkit wxWidgets.

The program has a modular structure, and each module handles distinct tasks (Figure 1). The modules are the following:

(1) Core module:
This is the main module for the software. It interacts with all the other modules, and performs the necessary steps of model optimization via the methods of the coreModule:
(a) reading input data
(b) loading the model file and selecting the parameters subject to optimization
(c) setting up the stimulation and recording protocol
(d) selecting a fitness function (or a weighted combination)
(e) selecting the algorithm with different parameters and performing the optimization
(f) storing the configuration file and saving the results of optimization in various formats
(2) traceHandler module:
Contains the main data holder class, called Data, which encapsulates the Trace class, which is responsible for the handling of an independent trace set. The Data class is also responsible for reading the input data files appropriately. The traceHandler module also contains functions performing

⁴ http://inspyred.github.io/
⁵ http://www.scipy.org/
⁶ http://pyelectro.readthedocs.org/
⁷ http://www.wxpython.org/

FIGURE 1 | Schematic representation of the design of the software, showing the main components of Optimizer (within the blue shaded area), interactions among its modules and with critical external modules.
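The file-based "black box" communication with external simulators described above might look roughly like the following sketch. The wrapper function and the one-value-per-line file formats are assumptions made for illustration; Optimizer's actual external-simulator protocol is described in the paper's Appendix:

```python
# Hypothetical sketch of a file-based "black box" simulator interface:
# the optimizer writes candidate parameters to a file, runs the external
# simulator as a separate process, and reads the resulting trace back.
import os
import subprocess
import tempfile

def run_external_simulator(params, sim_command):
    """Write params to a file, run the external command, read the trace back."""
    with tempfile.TemporaryDirectory() as tmp:
        param_file = os.path.join(tmp, "params.txt")
        trace_file = os.path.join(tmp, "trace.txt")
        with open(param_file, "w") as f:
            f.write("\n".join(str(p) for p in params))
        # The external simulator is assumed to read one parameter per line
        # from its first argument and write one sample per line to its second.
        subprocess.run(sim_command + [param_file, trace_file], check=True)
        with open(trace_file) as f:
            return [float(line) for line in f]
```

With such a wrapper, any executable that honors the assumed file convention can serve as the simulator, while the cost functions and optimization algorithms on the optimizer side remain unchanged.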




subtasks related to data handling, such as unit conversion. Currently the Data class can handle only one set of data of identical types (e.g., a series of voltage traces of given length), but we are planning to support multiple sets of data with different types (e.g., voltage traces plus explicit spike times) as well as abstract inputs (such as the values of extracted features).
(3) modelHandler module:
Contains two main classes which are responsible for handling neuronal models. The first one, called modelHandlerNeuron, is responsible for the built-in support of the NEURON environment, and handles the various stimulation protocols and recording methods which are directly accessible for models implemented in NEURON, as well as parameter assignment and other model-related tasks. The second class, called externalHandler, is responsible for the handling of external, user-specified simulators.
(4) optionHandler module:
A simple container class to hold the specified settings. This class can also read and write a configuration file.
(5) optimizerHandler module:
This module contains the implementations of the different optimization algorithms as separate classes, along with assorted auxiliary functions such as parameter normalization, boundary selection, etc.
The user can extend the list of algorithms by implementing a new class within this module and adding a single line to the Core module. To make the new algorithm available via the GUI one must add the name of the algorithm to the appropriate list.
Five different algorithms are currently implemented, including a customized evolutionary algorithm and a simple simulated annealing algorithm from the inspyred package, as well as the scipy implementations of simulated annealing, the downhill simplex method, and the L-BFGS-B algorithm (see next section for details).
(6) fitnessFunctions module:
Contains the class responsible for implementing and using the different cost functions (fitness functions). The module also contains a class to handle spike objects, which are used in various cost functions. To extend the list of available functions the user can implement his/her own function here as a class method. To make the new function available, the user must add the alias name-function object pair to the list of cost functions, and add the alias name for the function which will appear in the GUI to the Core module.
The currently available cost functions are the following (see next section for detailed descriptions): mean squared error, mean squared error excluding spikes, spike count, spike count during stimulus, ISI differences, latency to first spike, AP overshoot, AP width, AHP depth, derivative difference, PPTD. The PPTD method is available through the external pyelectro module, while the rest are implemented by Optimizer.
As the program supports arbitrary combinations of these cost functions, the main method in this class is the combineFeatures function, which creates the appropriate weighted combination of the given fitness functions, and calculates the accumulated fitness value over the corresponding pairs of traces in the simulated and target data sets.

PROGRAM CAPABILITIES AND BASIC USAGE
Depending on the exact needs and degree of expertise of the user, the software can be used at three different levels of complexity. At the simplest level the user can perform optimization tasks using the built-in tools of the graphical interface or run the optimization from the command line using a configuration file. At the next level the user can extend various capabilities of the GUI by using external files (see below). At the most advanced level the user can construct his/her own optimization framework using the building blocks provided by the package, or extend its functionality by adding new algorithms or fitness functions. To support this last level, we concentrated on structural simplicity while creating the modules.

As we briefly mentioned earlier, model implementations for certain simulators (currently NEURON) can be handled, interpreted, and modified internally by Optimizer ("internal simulators"), while models designed for other simulators can be optimized as "black boxes" (i.e., only looking at their inputs and outputs), and only if they provide their own interface to Optimizer (by taking inputs and producing outputs in the format expected by Optimizer; see Appendix). These "external simulators" must take care of setting up the simulations (including the model itself, but also the stimulation and recording protocols), but they can still take advantage of the variety of cost functions and powerful optimization algorithms provided by Optimizer. Internal simulators are supported at a much higher level; in particular, their internal parameters can be viewed and selected for optimization, and several common simulation protocols (such as current and voltage clamp recordings using step stimuli) can be set up directly from the Optimizer GUI.

There are two parts of the specification of the model optimization problem where several commonly occurring scenarios are difficult to capture by a few discrete choices and continuous parameters, and are thus inconvenient to control entirely from a GUI. First, while the GUI allows the user to select for optimization any combination of the parameters of a NEURON model, this does not cover the frequent cases where multiple model parameters (at the level of the actual implementation) are controlled by a single (or a few) abstract parameters, or there are other kinds of joint constraints on the model parameters. For example, when we wish to determine the passive biophysical properties of a realistic multi-compartmental model based on the measured response to injected current, we normally do not want to consider the membrane resistance values of all the dendritic sections as independent parameters (which would lead to a very high number of free parameters and an intractable optimization problem); instead, we take as the free parameter the value of the specific membrane resistance, and calculate the corresponding values of the actual membrane resistance (or leak conductance) in each part of the cell based on the measured geometry. In order to allow the distinction between the (potentially abstract) parameters set by the optimization algorithms and the actual
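The passive-membrane example above, where one abstract specific membrane resistance controls the leak conductance of every section, is exactly the kind of mapping a user function expresses. A minimal stand-in sketch, with plain dictionaries in place of NEURON sections and all names hypothetical:

```python
# Stand-in sketch of the "user function" idea: one abstract parameter
# (specific membrane resistance, in ohm*cm^2) controls an implementation-
# level parameter (leak conductance density) in every section. In Optimizer
# this mapping would be written against NEURON's Python interface; plain
# dictionaries stand in for NEURON sections here.

def set_passive_leak(sections, rm_specific):
    """Map one abstract parameter onto per-section leak conductances."""
    for sec in sections:
        # leak conductance density (S/cm^2) is the reciprocal of the
        # specific membrane resistance (ohm*cm^2)
        sec["g_pas"] = 1.0 / rm_specific

sections = [{"name": "soma"}, {"name": "dend[0]"}, {"name": "dend[1]"}]
set_passive_leak(sections, rm_specific=20000.0)  # 20 kOhm*cm^2
print(sections[0]["g_pas"])  # 5e-05
```

The optimization algorithm then only ever sees the single abstract parameter, while the mapping fans it out to as many implementation-level parameters as the morphology requires.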




parameters of a particular model implementation, and to allow the implementation of an arbitrary mapping between the two, we introduced "user functions" into our model optimization framework. These user functions define the (abstract) parameters to be optimized, and also define (using NEURON's Python interface) how these abstract parameters should control the internal parameters of the NEURON simulation (see Appendix for details). This solution also makes it possible to optimize parameters of the simulation which are not strictly part of the model neuron (such as the properties of incoming synaptic connections, as demonstrated by one of the use cases described below).

Second, while current and voltage steps are fairly common ways of stimulating a neuron, and their characteristics are easily specified by a handful of parameters which are straightforward to control from a GUI or a configuration file, many other stimulation protocols are also widely used in both experiments and simulations to characterize the behavior of neurons in different ways. Some of these protocols (such as sine wave and noise stimulation) will probably be added to the GUI in the future. Meanwhile, we opted for the more generic approach of allowing input whose time dependence is given explicitly in an external file (see Appendix). We demonstrate the utility of this approach in one of the use cases described below, where input to the neuron consisted of two consecutive current pulses of different duration and amplitude.

One of the most critical choices in setting up a model optimization problem involves the cost function (or fitness function), as this choice (along with the simulation protocol) determines the sense in which the behavior of the optimized model neuron should be close to that of the target neuron. The importance of this choice is also reflected in the large variety of different cost functions which have been proposed, and we aimed to provide access to many of these within Optimizer. The software currently supports the following cost functions (full details can be found in the package reference part of the online documentation):

Mean squared error: the mean squared difference of the two traces in a point by point manner, normalized by the squared range of the experimental data.
Mean squared error excluding spikes: the same as above, but compares only the subthreshold part of both traces, excluding parts of both traces in time windows of a given width around each spike.
Derivative difference: the normalized average squared difference of the temporal derivatives of the given traces.
Spike count: the absolute difference in the number of spikes in the entire trace, normalized by the sum of the two spike counts (plus one, to properly handle the case with no spikes).
Spike count during stimulus: the same as above, but only takes into account spikes which occur during the time of the stimulus.
ISI differences: the sum of the absolute differences between the corresponding inter-spike intervals in the two traces, normalized by the length of the traces.
Latency to 1st spike: the squared difference in the latency of the first spikes, normalized by the squared length of the traces.
AP overshoot: the average squared difference of action potential amplitudes, normalized by the squared maximal AP amplitude of the experimental trace. AP amplitude is defined as the difference of the AP peak voltage and the AP threshold.
AP width: the average squared difference of the width of APs, normalized by the squared average width of experimental APs.
AHP depth: the squared average of the differences in afterhyperpolarization depth, normalized by the squared range of subthreshold potential in the target trace.
PPTD: compares the two traces in the phase plane using the method proposed by Van Geit et al. (2007), as implemented by the pptd_error function from the pyelectro package.

Many of these cost functions have associated parameters which may be set by the user (although sensible default values are also provided). For instance, several cost functions require the detection of spikes, and these allow the setting of the action potential detection threshold, while the subthreshold version of the mean squared error cost function also allows setting of the width of the exclusion window around each spike.

Optimizer also supports arbitrary linear combinations of these cost functions. In order to ensure that the weights given actually correspond to the relative importance of the component cost functions in determining the final cost value, all individual cost functions are normalized in appropriate ways such that their possible values are (at least approximately) in the 0–1 range, as described above. When the input data and the corresponding simulation results consist of multiple traces, the cost functions return the sum of the cost values over the corresponding pairs of traces.

As the implemented functions all use pointwise comparisons at some stage of the calculations, we had to guarantee that the appropriate points are compared. This becomes a problem when the user wants to compare two traces sampled at different frequencies (these traces would have different numbers of points but correspond to the same length of time). We solved this issue by applying the following rules: if the sampling frequency of the input is higher than the model's sampling frequency, then the simulation time step is adjusted appropriately; if the sampling frequency of the input is lower than the model's sampling rate, then the input is re-sampled at the model's sampling frequency using linear interpolation. Note that, after re-sampling, the program considers the re-sampled trace to be the input trace, and if the original data are required for any reason, they must be reloaded.

Although a very large selection of algorithms has been proposed for the solution of nonlinear optimization problems, we decided to focus (at least initially) on methods which have proved to be efficient for neuronal model fitting. In particular, both evolutionary (genetic) algorithms and simulated annealing methods have been used successfully to fit models with up to tens of parameters (Vanier and Bower, 1999), so we included both of them in the list of supported optimization algorithms. In fact, as different implementations can heavily affect performance, we included two different implementations of the simulated annealing algorithm (one from the inspyred and another from the scipy package). In addition to these global optimization methods, we also included two options for local optimization: the classic downhill simplex method, and the L-BFGS-B algorithm, which is considered to be




one of the state-of-the-art local methods (Byrd et al., 1995). We found that all the problems we have considered could be solved efficiently using one or more of these methods; however, the program can also be easily extended with additional algorithms. As several algorithms work best when all the parameters to be optimized are of similar magnitude, while the actual parameters may have very different magnitudes, we run the algorithms with normalized parameters (0–1) and pass the re-normalized values to the simulator. By default, the algorithms start from a random point or set of points (within the specified boundaries), but the user can select a specific starting point, which will be the initial point of the algorithm or will be part of the initial set of points. The optimization algorithms currently supported by Optimizer are the following:

GLOBAL ALGORITHMS

Evolutionary algorithm
Minimizes the error function using a customized evolutionary algorithm, which uses generational replacement with weak elitism (so that the best solution is always retained) and Gaussian mutation in combination with blend crossover (see the documentation of the inspyred package for details). The size of the population (which may be set by the user, and defaults to 100) is constant throughout the process. The mutation rate can also be specified, with a default value of 0.25.

Simulated annealing 1
Uses the framework of evolutionary computation (as implemented by the inspyred package with simulated annealing replacement). The parameters which can be adjusted by the user include the number of generations, the rate and standard deviation of Gaussian mutation, the initial temperature, and the cooling rate.

Simulated annealing 2
Uses a canonical simulated annealing algorithm (Kirkpatrick et al., 1983) (as implemented in scipy). Adjustable parameters include the number of generations, the cooling schedule, the initial and final temperature, the dwell time, the mutation rate, and the error tolerance.

LOCAL ALGORITHMS

Downhill simplex method
Uses the Nelder-Mead simplex algorithm (Nelder and Mead, 1965) to find a local minimum of the cost function. The adjustable parameters are the maximum number of iterations and independent tolerance limits for the input vector and the value of the cost function.

L-BFGS-B
Uses the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with bound constraints (L-BFGS-B) (Byrd et al., 1995) to minimize the cost function. The maximum number of iterations and the target accuracy can be set by the user.

USAGE OF THE GRAPHICAL INTERFACE
As briefly discussed above, all the basic functionality of Optimizer, along with many of its more advanced features, can be accessed conveniently from a GUI. The GUI consists of seven so-called layers, which are responsible for guiding the user through the steps of the optimization process. A detailed guide to the GUI, with screenshots and explanations of all of its components, is available online from the documentation page of Optimizer8, and is also included with the software itself, so only a brief summary will be provided here. The graphical interface can be started from the command prompt with the line:

python optimizer.py -g

Once the program has started, the first layer will appear, where the user can select the file containing the input trace(s). The user must specify the path to this file, and the working directory (base directory) where the output of the program will be written. In addition, the user must provide the type and basic characteristics of the trace set. After loading the selected file, the traces are listed in a tree display, and their successful loading can be verified in a plot which displays all the traces concatenated (concatenation is performed only for displaying purposes, and the traces are otherwise handled separately).

On the second layer the user can specify the simulator (currently NEURON or "external," see above). With NEURON as the simulator, the model can be loaded simply after selecting the main .hoc file as the model file; if the model requires .mod files which reside in a different directory, the location of this folder must also be provided. The model file should contain only the specification of the neuron and the necessary mechanisms. We note that, in the current version of Optimizer, loading a model (.hoc file) whose special mechanisms (compiled .mod files) cannot be found leads to a situation from which the program cannot recover the correct model, and the software should be restarted.

Once the model is loaded successfully, the content of the model will be displayed, and the user can select parameters by picking them in the list and pressing the "set" button. Removing a parameter is done in a similar fashion (Figure 2).

As mentioned earlier, the functionality of the GUI can be extended by using external files. The second layer also allows the user to load or define the "user function," which defines a set of abstract parameters to be optimized, and describes the mapping from these abstract parameters to actual parameters of the model implementation (in NEURON).

The next layer specifies the simulation protocol, and contains the settings regarding stimulation and recording. The user can select the stimulation protocol, which can be either current clamp or voltage clamp. The stimulus type can also be selected (currently, either step protocol or custom waveform). If the step protocol is selected, the properties of the step can be specified. Multiple stimuli of different amplitudes can also be specified; via the GUI, the user can provide up to 10 stimulus amplitudes. If custom waveform is selected as the stimulus type, the time course of the stimulus can be loaded from an external file specified by the user. Finally, the user must choose a section and a position inside that section to stimulate the model.

In the second column of this layer, the parameters controlling the simulation and the recording process can be given. The user must give an initial voltage parameter, the length of the simulation

8 http://optimizer.readthedocs.org/en/latest/


FIGURE 2 | Screenshot from the Optimizer GUI, showing the model selection and parameter selection interface.
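The evolutionary strategy and the parameter normalization described in the preceding section can be sketched in pure Python. This is an illustrative sketch only, not Optimizer's actual code (which relies on the inspyred package); the parameter bounds and the toy cost function are invented for the example. In Optimizer itself, the cost function would run a NEURON simulation on the re-normalized parameters and compare the result to the target trace.

```python
import random

# Hypothetical parameter ranges; genomes are kept normalized to [0, 1]
# and "denormalized" to these ranges before each model evaluation.
BOUNDS = [(0.0, 0.5), (1e-5, 1e-3), (50.0, 200.0)]

def denormalize(genome):
    """Map a [0, 1] genome back to the simulator's parameter ranges."""
    return [lo + g * (hi - lo) for g, (lo, hi) in zip(genome, BOUNDS)]

def clamp(x):
    return min(1.0, max(0.0, x))

def blend_crossover(p1, p2, alpha=0.5):
    """Sample each gene uniformly from a blended interval around the parents."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = alpha * (hi - lo)
        child.append(clamp(random.uniform(lo - d, hi + d)))
    return child

def gaussian_mutation(genome, rate=0.25, stdev=0.1):
    """Perturb each gene with Gaussian noise, with a given per-gene rate."""
    return [clamp(g + random.gauss(0.0, stdev)) if random.random() < rate else g
            for g in genome]

def tournament(pop, cost, k=3):
    return min(random.sample(pop, k), key=cost)

def evolve(cost, pop_size=100, generations=50):
    pop = [[random.random() for _ in BOUNDS] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        offspring = [gaussian_mutation(blend_crossover(tournament(pop, cost),
                                                       tournament(pop, cost)))
                     for _ in range(pop_size)]
        # generational replacement with weak elitism:
        # the offspring replace the parents, but the best-so-far always survives
        offspring[0] = best
        pop = offspring
        best = min(pop, key=cost)
    return best

# toy cost: squared distance from a known optimum in normalized space
random.seed(42)
target = [0.3, 0.7, 0.5]
best = evolve(lambda g: sum((a - b) ** 2 for a, b in zip(g, target)))
```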

and the integration time step (variable time step methods are not supported yet). The user can select the parameter to be measured (either current or voltage), and the section and the position where the measurement takes place.

The next layer is responsible for the selection of the cost function, or a combination of cost functions with the desired weights. Optimizer offers weight normalization at the press of a button, but unnormalized values are accepted as well. The user can fine-tune the behavior of the cost functions by giving parameters to them (the value of the same parameter should be the same across the functions).

On the next layer, the user can select the desired optimization algorithm from a list and tune its parameters. The program requires boundaries for all the parameters. The user can also provide initial values for the parameters, which will be interpreted differently depending on the algorithm used. In the case of local algorithms, the algorithm will start from the point specified. In the case of global algorithms, the set of values given will be included in the initial set of parameters. At this point, the model optimization problem is fully specified, and optimization will start when the Run button is pressed.

After the program has finished the optimization process, the result can be viewed and compared to the target data in a graph. This result is then saved into a text file, picture files in .png and .eps formats, and an HTML file. The text file contains the data trace(s) obtained from the model by running the simulation with the optimal parameters. The picture shows the target and the resulting trace for visual comparison. The HTML file serves as a report, as it contains the most important settings of the optimization process as well as the resulting parameter values and a plot of the target and result traces. The program also saves all the settings required to reproduce the optimization process in a configuration file in XML format, which can also be used to run the optimization using the command-line interface (see below).

The last layer offers some additional tools to analyze the results. Here, the software displays the basic statistics of the last population of results. If one of the algorithms from the inspyred package (such as the evolutionary algorithm or its implementation of simulated annealing) was used, a generation plot (which displays the change in the cost value from generation to generation) and an allele plot are also available.

The final analytical tool offered by Optimizer is the grid plot, which evaluates and plots the cost function over a set of parameters, thus allowing the user to observe a part of the search space (Figure 3). The parameter set is created by fixing every parameter except one at its optimal value and allowing the remaining parameter to vary. By repeating this process for every parameter, we obtain one-dimensional slices of the cost function around the optimum. Ranges for the grid plot are initialized to the boundaries of the search space defined earlier, but they can be reset to wider or narrower ranges (the latter can be useful to observe the close proximity of the optimum), providing insight into the model's parameter sensitivity.

USAGE OF THE COMMAND LINE INTERFACE
The command line interface can be started similarly to the GUI (with a different option), but requires an additional argument, the name of the configuration file. The program can be started by typing:

python optimizer.py -c conf.xml

The configuration file must be an XML file, which contains the settings for the optimization process in XML tags. Every option has its own XML tag, and the value between the tags must be in the appropriate format for the software to recognize. This feature
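As a purely hypothetical illustration of this scheme, a configuration file might look like the sketch below. The tag names here are invented for the example and are not Optimizer's actual schema; as the authors recommend, a valid file is best generated via the GUI.

```xml
<!-- Hypothetical illustration only: these tag names are NOT Optimizer's
     actual schema; generate a real configuration file via the GUI. -->
<settings>
    <input_file>surrogate_trace.txt</input_file>
    <base_directory>/home/user/optimizer_run/</base_directory>
    <simulator>NEURON</simulator>
    <model_file>model.hoc</model_file>
    <algorithm>Classical EO</algorithm>
    <population_size>100</population_size>
    <generations>100</generations>
    <cost_function weight="0.5">MSE (excluding spikes)</cost_function>
    <cost_function weight="0.5">Spike count</cost_function>
</settings>
```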


FIGURE 3 | An example of the grid plot of Optimizer, showing one-dimensional slices of the error as a function of the parameters, in the vicinity of an
optimum found by the software.
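The grid-plot idea of sweeping one parameter while fixing the others at their optimal values can be sketched as follows. This is an illustrative sketch, not Optimizer's code; the quadratic cost and its optimum are made-up examples.

```python
def one_dim_slices(cost, optimum, ranges, points=21):
    """Return {parameter index: [(value, cost), ...]} -- one slice per parameter."""
    slices = {}
    for i, (lo, hi) in enumerate(ranges):
        step = (hi - lo) / (points - 1)
        slice_i = []
        for k in range(points):
            params = list(optimum)      # all parameters fixed at the optimum...
            params[i] = lo + k * step   # ...except the one being swept
            slice_i.append((params[i], cost(params)))
        slices[i] = slice_i
    return slices

# toy quadratic cost with a known optimum, for illustration
opt = [1.0, -2.0]
cost = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2
slices = one_dim_slices(cost, opt, [(0.0, 2.0), (-3.0, -1.0)])
```

Plotting each slice (value on the x-axis, cost on the y-axis) reproduces the kind of one-dimensional error landscape shown in Figure 3.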

was added to support systems where a graphical interface is not needed or not available (the optimization must run without user interaction).

As this interface is considered an auxiliary one, it currently has no error detection implemented; e.g., a missing parameter will be detected only at runtime. Thus, we recommend that the user generate the configuration file via the GUI by running a simple optimization, and modify the resulting file where necessary.

USE CASES
We designed Optimizer to be able to handle a wide range of model optimization tasks, and we have tested it on a large number of different problems. Model optimization problems can differ in many characteristics (including model type, tunable parameters, simulation protocol, target data, and cost function), and can be attacked using various optimization methods, as described earlier. One important aspect of the problem that we have not discussed is the source and nature of the target data. A traditional way of testing a model optimization algorithm is to generate target data from a model, and then consider some of the parameters to be unknown and attempt to reconstruct their correct values by optimizing the same type of model. This type of target data will be referred to as surrogate data, and tests using surrogate data are useful to debug software, and also to analyze the difficulty of optimization tasks and the power of optimization algorithms. However, it has been pointed out that tests using surrogate data are very artificial in that an exact solution (a parameter combination with zero error) is known to exist, and methods that perform well on surrogate data do not necessarily do well on real data (Druckmann et al., 2008). Therefore, we have tested Optimizer using surrogate data, but also on problems where an exact solution is unlikely to exist. This includes the case where the target data were generated by a more complex model than the one being optimized, and fitting is thus performed as a crucial part of model simplification, and also the case where the target data were recorded in a physiological experiment. Here we present our results on a selection of five problems, chosen primarily to showcase the diversity of tasks that Optimizer is able to solve, but also to highlight the features of the software that enable us to efficiently define and solve these problems. All of the examples were run on standard desktop and laptop PCs running various versions of Linux (for details, see the Installation section of the online documentation). A full optimization run required from a few minutes up to about 2 days, depending on the complexity of the model, the number of iterations, and (in the case of the evolutionary algorithm) the size of the population. All the files required for an exact reproduction of these examples, as well as the results of the optimization runs, have been deposited in the public repository of the software. The simplest way to re-run one of the examples involves using the XML file provided with the command line interface described above.

1. Recovering the correct conductance densities (Na, K, leak) of a single-compartment Hodgkin-Huxley model based on a single (suprathreshold) step current injection.

We created in NEURON a single-compartment model containing the original Na+, K+, and leak conductances of the classical Hodgkin-Huxley model, and set the diameter of the section to 10 µm (the length remained 100 µm, and all the passive parameters and the conductance densities were unchanged). We injected a step current (amplitude = 200 pA, delay = 200 ms, duration = 500 ms) into the soma of this model to create surrogate data (a single 1000 ms long voltage trace). We then changed the densities of the three conductances to create variants of the original model, and tried to find the parameters used for generating the target trace with Optimizer's genetic algorithm (Classical EO; 100 generations with 100 candidates per generation), using the combination of the mean squared error (excluding spikes) and the spike count error functions with equal (0.5) weights. The original and recovered parameters are displayed in Table 2. The two traces can be visually compared in Figure 4, which also shows how the


lowest value of the cost function changed across generations. It is interesting to note that while the best-fitting trace found matches the target data very well, the algorithm did not manage to recover the exact values of the original parameters. This likely reflects the fact that several different combinations of parameters result in similar voltage traces in response to this current input. This possibility was further investigated by repeating the optimization process using different random seeds, which resulted in different final parameters, but similarly good fits to the data. These findings supported the conclusion that the conductance density parameters of the Hodgkin-Huxley model are not uniquely identifiable using this current injection protocol, but also confirmed that Optimizer was consistently able to find parameter combinations which provide good solutions to this optimization problem.

2. Recovering some basic synaptic parameters from simulated voltage clamp recordings during synaptic stimulation in a single-compartment model.

The target data consisted of the recorded clamp current from a virtual voltage clamp electrode inserted into a single-compartment model, which was essentially the same as the one in Use case 1, and contained Hodgkin-Huxley-type Na+, K+, and leak conductances plus a conductance-based synapse with a double exponential time course (rise time = 0.3 ms, decay time = 3 ms, maximal conductance = 10 nS, delay = 2 ms). The model neuron received a spike train input through the synapse, which consisted of 4 spikes at regular 100 ms intervals. The task was to recover the four parameters of the synaptic connection.

As we needed to set the parameters of the synapse and the connection (NEURON's NetCon object), and the heuristics used by Optimizer to discover tunable parameters in NEURON models automatically do not cover synaptic parameters (which belong to objects other than the model neuron), we used a simple user function to adjust the parameters. We used the built-in functions of the Optimizer GUI to set up voltage clamp at a constant level (−70 mV); one way to accomplish this is to use a step protocol in voltage clamp with a single amplitude of −70 mV (and arbitrary delay and duration), and an initial voltage of −70 mV.

Optimization was carried out using the mean squared error cost function. Evolutionary optimization (Classical EO) for 100 generations with a population of 100 restored the original parameters with high precision (Table 3; Figure 5). In this case, starting the program with different random seeds always resulted in

FIGURE 4 | The results of Optimizer on Use case 1 (fitting conductance densities in the Hodgkin-Huxley model). (A) Comparison of the original (surrogate) data (blue) and the best-fitting trace (red). Further details of the spike shape are shown in Figure 9. (B) Evolution of the lowest and median error value across generations in the evolutionary algorithm.
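The combined cost used in Use case 1 (mean squared error excluding spikes, plus spike count, weighted 0.5 each) can be sketched in pure Python as follows. This is an illustrative sketch, not Optimizer's implementation; the threshold-crossing spike detector and the exclusion window are assumptions made for the example.

```python
def detect_spikes(trace, dt, threshold=0.0):
    """Times (ms) where the voltage crosses threshold from below."""
    return [i * dt for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

def mse_excluding_spikes(target, model, dt, spike_window=5.0, threshold=0.0):
    """Mean squared error over samples farther than spike_window (ms)
    from any spike detected in the target trace."""
    spikes = detect_spikes(target, dt, threshold)
    keep = [i for i in range(len(target))
            if all(abs(i * dt - t) > spike_window for t in spikes)]
    return sum((target[i] - model[i]) ** 2 for i in keep) / len(keep)

def combined_cost(target, model, dt, w_mse=0.5, w_spikes=0.5):
    """Weighted sum of subthreshold MSE and a normalized spike-count error."""
    n_t = len(detect_spikes(target, dt))
    n_m = len(detect_spikes(model, dt))
    spike_err = abs(n_t - n_m) / max(n_t, n_m, 1)
    return w_mse * mse_excluding_spikes(target, model, dt) + w_spikes * spike_err
```

A model trace identical to the target yields a cost of zero; a missing or extra spike contributes through the normalized spike-count term.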

FIGURE 5 | The results of Optimizer on Use case 2 (fitting synaptic parameters in voltage clamp). Figure layout and notation are similar to Figure 4; the
error plot in (B) also shows the average and worst fitness values for each generation.
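The double-exponential synaptic conductance targeted in Use case 2 can be written out explicitly; the sketch below is illustrative (it is not Optimizer's or NEURON's code), using the rise and decay time constants and maximal conductance quoted above, with the waveform peak-normalized so that its maximum equals the maximal conductance.

```python
import math

def dexp_conductance(t, gmax=10.0, tau1=0.3, tau2=3.0):
    """Conductance (nS) at time t (ms) after synapse onset for a
    double-exponential waveform, peak-normalized to gmax."""
    if t < 0:
        return 0.0
    # peak time and normalization factor of exp(-t/tau2) - exp(-t/tau1)
    tpeak = (tau1 * tau2) / (tau2 - tau1) * math.log(tau2 / tau1)
    norm = math.exp(-tpeak / tau2) - math.exp(-tpeak / tau1)
    return gmax * (math.exp(-t / tau2) - math.exp(-t / tau1)) / norm
```

With tau1 = 0.3 ms and tau2 = 3 ms the conductance peaks roughly 0.77 ms after onset and decays to near zero within a few decay time constants, which is why the four input spikes at 100 ms intervals produce well-separated clamp-current transients.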


essentially the same final parameters and consistently low error values.

3. Fitting the densities of somatic voltage-gated channels in a simplified (6-compartment) model to approximate the somatic voltage response of a morphologically and biophysically detailed (CA1 PC) model to a somatic current step, using a combination of features.

The target data trace was obtained from the biophysically accurate and morphologically detailed model of a hippocampal CA1 pyramidal cell (Káli and Freund, 2005) by stimulating the somatic section with a 200 pA step current stimulus. The experiment lasted for 1000 ms; the stimulus started at 200 ms and lasted for 600 ms.

The structure of the simplified model was created before the optimization step by clustering the branches of the detailed model based on the amplitude of the response to subthreshold current stimuli, and combining the branches within each cluster into one compartment of the reduced model. The resulting model had six compartments (one somatic, one basal dendritic, and four corresponding to different parts of the apical dendritic tree). Initial values of the densities of voltage-gated channels in the simplified model were obtained by averaging the corresponding values in the detailed model. The somatic values of the nine channel density parameters were then the subjects of optimization, while dendritic conductance densities, passive membrane characteristics, and geometric properties remained fixed.

In this case we used a combination of six different cost functions: mean squared error excluding spikes (with weight 0.2), spike count (with weight 0.4), latency to first spike, AP amplitude, AP width, and AHP depth (all four with weight 0.1). The optimization algorithm was Classical EO, and in this case we used 200 generations and 300 candidates per generation to allow a better exploration of the relatively high-dimensional parameter space. The algorithm managed to find a reasonably good solution to this difficult problem, closely matching all of the optimized features (Figure 6; additional details are shown in Figure 11).

4. Fitting the passive parameters of a morphologically detailed CA1 pyramidal cell model to experimental data based on a complex current clamp stimulus.

In this case we tried to fit the passive parameters of a morphologically detailed passive model of a hippocampal CA1 pyramidal cell to physiological data recorded from the same neuron (both morphological and physiological data were kindly provided by Miklós Szoboszlay and Zoltán Nusser). The cell was excited by a short (3 ms, 500 pA) and then by a long (600 ms, 10 pA) current pulse (separated by 300 ms) injected into the soma, which is more complex than the simple step stimuli that can be defined using the Optimizer GUI, so we had to use an external stimulus file.

The parameters we were interested in were the specific capacitance and resistance of the membrane and the specific axial resistance. Because we wanted to optimize the parameters cm, Ra, and g_pas in every section of the NEURON model (and also set the e_pas parameter to 0 everywhere in this example, as the baseline voltage had been subtracted from the data), we created a user function to set all the relevant local parameters of the model based on the three global parameters which were optimized.

This example demonstrates the importance of the extensibility of the GUI using external files. We used mean squared error as the cost function, and Classical EO; 100 generations and 100 candidates per generation were sufficient to get a good fit to the data (Figure 7; further details of the fit are shown in Figure 12).

5. Optimizing the parameters of an abstract (AdExpIF) model to fit the somatic voltage responses (including spikes) of a real cell (CA3 PC) to multiple current step stimuli.

In this case we wanted to fit an adaptive exponential integrate-and-fire model to four voltage traces obtained from a real CA3 pyramidal cell. The recordings were 1100 ms long each and the sampling frequency was 5 kHz. The stimulating current amplitudes were 0.30, 0.35, 0.40, and 0.45 nA, respectively. We then optimized the parameters of the model (capacitance,

FIGURE 6 | The results of Optimizer on Use case 3 (fitting voltage traces from a detailed compartmental model). Figure layout and notation are similar
to Figure 4.


leak conductance, leak reversal potential, threshold voltage, reset voltage, refractory period, steepness of the exponential part of the current-voltage relation, subthreshold adaptation conductance, spike adaptation current, and adaptation time constant—altogether 10 parameters). As the exponential integrate-and-fire model can be numerically unstable for some combinations of parameters, we had to apply some constraints to the parameters (for example, the spike detection threshold was equal to the spike threshold for exponential calculations plus five times the steepness of the exponential approach to threshold). To do this, we created a user-defined function which was loaded by the GUI. We used the combination of the spike count, mean squared error (excluding spikes), latency to first spike, and ISI difference features (which are all meaningful in the context of integrate-and-fire models) with equal weights as the error function, and obtained our results once again using the Classical EO algorithm with 100 generations and 500 candidates per generation (Figure 8). While the resulting model captures the spiking of the neuron relatively well, it clearly cannot deal with the complexities of the subthreshold voltage trace (which is likely due mainly to limitations of the model class itself rather than the fitting process).

COMPARISONS WITH OTHER MODEL OPTIMIZATION TOOLS

FEATURE COMPARISONS
Existing publicly available tools for the optimization of neuronal models include NEURON, GENESIS, and Neurofitter. We will now briefly discuss the merits and deficiencies of each of these solutions in comparison to our software.

NEURON features the only GUI-based solution9 besides ours, and integrates fully with the most commonly used simulator today. It also includes many useful features, such as the ability to combine results from an arbitrary set of simulations, and to define several regions of interest under visual guidance, which are not yet available in Optimizer. As a consequence, it has been used by several groups, mostly for fitting a few parameters in relatively simple cases. As the Multiple Run Fitter contains only a single relatively basic local optimization algorithm (the principal axis method; Brent, 2002), it may not be suitable for more complex problems. Although an extension to NEURON using genetic algorithms has been developed10, it has not been very widely adopted,

9 http://www.neuron.yale.edu/neuron/static/docs/optimiz/main.html
10 http://senselab.med.yale.edu/simtooldb/ShowTool.asp?Tool=102464

FIGURE 7 | The results of Optimizer on Use case 4 (fitting the parameters of a morphologically detailed passive multi-compartmental model to
experimental data). Figure layout and notation are similar to Figures 4, 5. Magnified plots of critical parts of the traces are included in Figure 12.
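The user-function idea of Use case 4 — broadcasting three global passive parameters to every section of the model — can be sketched abstractly as below. This is an illustrative sketch using plain dictionaries to stand in for NEURON sections, not the actual user function shipped with Optimizer; the derived membrane time constant is included because it gives a quick sanity check on the fitted values.

```python
def apply_passive(sections, cm, Ra, g_pas):
    """Broadcast three global passive parameters to every 'section'.
    Here a section is just a dict; in NEURON it would be h.allsec()."""
    for sec in sections:
        sec["cm"] = cm          # specific capacitance (uF/cm2)
        sec["Ra"] = Ra          # axial resistivity (ohm*cm)
        sec["g_pas"] = g_pas    # leak conductance (S/cm2)
        sec["e_pas"] = 0.0      # baseline subtracted from the data, so 0
    return sections

def tau_m_ms(cm_uF_per_cm2, g_pas_S_per_cm2):
    """Membrane time constant tau_m = Rm * Cm = cm / g_pas, in ms."""
    return (cm_uF_per_cm2 * 1e-6 / g_pas_S_per_cm2) * 1e3
```

For example, cm = 1 uF/cm2 with g_pas = 1/30000 S/cm2 (i.e., Rm = 30 kOhm*cm2) implies tau_m = 30 ms, a plausible order of magnitude for a pyramidal cell.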

FIGURE 8 | The results of Optimizer on Use case 5 (fitting an adaptive exponential integrate-and-fire model to experimental data with multiple
traces). The four traces are displayed in concatenated form in the figure. Figure layout and notation are similar to Figures 4, 5.
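The stability constraint described for Use case 5 — detecting a spike at the exponential threshold plus five times the slope factor, before the exponential term can blow up — can be made concrete with a forward-Euler sketch of the AdExpIF model. This is an illustration, not Optimizer's code, and all parameter values below are generic round numbers, not the fitted values from the use case.

```python
import math

def run_adex(I_pA, t_stop=500.0, dt=0.01,
             C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DeltaT=2.0,
             a=2.0, b=40.0, tau_w=100.0, Vreset=-65.0, t_ref=2.0):
    """Forward-Euler AdEx integration; units are pF, nS, mV, ms, pA.
    Returns the list of spike times (ms)."""
    V, w, t, spikes = EL, 0.0, 0.0, []
    ref_until = -1.0
    V_detect = VT + 5.0 * DeltaT   # spike detection threshold = VT + 5*DeltaT
    while t < t_stop:
        if t >= ref_until:         # membrane frozen during refractoriness
            dV = (-gL * (V - EL) + gL * DeltaT * math.exp((V - VT) / DeltaT)
                  - w + I_pA) / C
            V += dt * dV
        w += dt * (a * (V - EL) - w) / tau_w
        if V >= V_detect:          # spike detected: reset and adapt
            spikes.append(t)
            V = Vreset
            w += b
            ref_until = t + t_ref
        t += dt
    return spikes
```

Detecting the spike at VT + 5*DeltaT caps the exponential term at exp(5) of its threshold value, which keeps the Euler step finite; this is the kind of constraint the user-defined function mentioned above enforces.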


possibly because (unlike the Multiple Run Fitter and Optimizer) using this extension requires a substantial amount of coding.

The optimization tools of the GENESIS simulator (Vanier and Bower, 1999) cannot be accessed through a graphical interface, and model fitting involves extensive programming in its own script language (whereas, given an implementation of the model itself, optimizing model parameters in Optimizer or NEURON's Multiple Run Fitter requires little or no programming). Thus, implementing a new optimization problem in GENESIS can be quite time-consuming and error-prone. On the other hand, GENESIS implements several powerful optimization algorithms (including customizable versions of a genetic algorithm and simulated annealing) which can produce remarkably good results even by today's standards (the GENESIS implementations are relatively old). Some forms of parallelization are also possible through the PGENESIS module. GENESIS contains a single built-in cost function (a relatively sophisticated algorithm for matching spike times); other error functions need to be added by hand.

Neurofitter is a general-purpose model optimization tool which is in some ways similar to ours (Van Geit et al., 2007). However, Neurofitter does not have a GUI, and the definition of problems needs to be done through a configuration file. It also implements a variety of optimization algorithms, but only a single cost function (the PPTD method), which can be powerful in certain problems, but may be totally inappropriate in other situations. Neurofitter also supports various forms of parallelization through the MPI protocol.

Finally, we note that there are some potentially desirable features which are not currently available in any of the above software solutions (including ours). For instance, multi-objective (rather than single-objective) optimization was found to be advantageous in the context of fitting a full range of models to a diverse set of experimental data (Druckmann et al., 2007), but is not supported by NEURON, GENESIS, or Neurofitter. Optimizer is also restricted to single-objective optimization for the moment; however, as the inspyred package, one of the main optimization tools used by Optimizer, also supports multi-objective optimization, extending Optimizer to handle this class of problems will be relatively straightforward.

PERFORMANCE COMPARISONS
We also wanted to compare, as much as possible, the quantitative performance of different model optimization tools. We therefore attempted to implement the use cases presented earlier for NEURON, GENESIS, and Neurofitter. We did not manage to implement all of the use cases with any of the three other software tools. In the end, we successfully completed model optimization in use cases 1–4 using NEURON. We also managed to implement use cases 1–4 using GENESIS, although each of these required a substantial programming and debugging effort; we used the simulated annealing algorithm in this case, as the GENESIS implementation of the genetic algorithm resulted in program crashes on the computers that we used, even for the examples that came with the software. Finally, as we are not aware of any implementation of the AdExpIF model for GENESIS, we could not run use case 5. With Neurofitter, we could run use cases 1–4, while it could not handle the intrinsic numerical instability of the AdExpIF model, and could not complete this optimization without crashing. On the tasks which were successfully solved by several tools, the resulting traces were compared through the mean squared error, and also based on spike count in spiking models (Table 1).

1. Recovering the correct conductance densities (Na, K, leak) of a single-compartment Hodgkin-Huxley model based on a single (suprathreshold) step current injection: Three of the four tools (Optimizer, NEURON, and GENESIS) found parameters which resulted in very good fits to the data in terms of spike counts, spike timings, and mean squared error, while Neurofitter found a substantially worse solution (Table 1; Figure 9). However, it is interesting to note that the optimal parameters found by the programs vary significantly among them, and also deviate substantially from the original values (Table 2). This highlights a fundamental issue with the identifiability of the conductance density parameters of the Hodgkin-Huxley model using this current injection protocol.

2. Recovering some basic synaptic parameters from simulated voltage clamp recordings during synaptic stimulation in a single-compartment model: Optimizer and GENESIS could solve this task essentially perfectly, both in terms of mean squared error (Table 1) and in terms of recovering the true values of the parameters (Table 3); NEURON's solution was also close, although slightly less accurate, while Neurofitter's solution had a substantially larger error (Table 1; Figure 10).

3. Fitting the densities of somatic voltage-gated channels in a simplified (6-compartment) model to approximate the somatic voltage response of a morphologically and

Table 1 | Comparison of the error in the best-fitting solution of different optimization software tools on the five problems defined in the Use cases section.

                                        Optimizer     Neurofitter   NEURON        GENESIS
1—HH                 MSE (mV2)          0.0033        0.0438        7.32 × 10−4   0.0016
                     Spike count (28)   28            32            28            28
2—VC                 MSE (nA2)          1.73 × 10−7   0.0052        2.68 × 10−5   6.29 × 10−7
3—CA1 PC simple      MSE (mV2)          0.0069        0.0125        0.0092        0.0028
                     Spike count (7)    7             10            6             7
4—CA1 PC morphology  MSE (mV2)          3.10 × 10−5   3.09 × 10−4   3.29 × 10−5   3.58 × 10−5

Mean squared error (MSE) was measured in all cases. In problems involving spikes, the resulting spike counts are also shown (the original spike count is shown in parentheses in the second column).


FIGURE 9 | Comparison of the performance of the four model fitting tools on Use case 1 (fitting conductance densities in the Hodgkin-Huxley model). (A) Comparison of the resulting best traces with the target trace. Insets show magnifications of spike shapes. (B–D) Changes in the lowest error value achieved in each generation (for Optimizer and Neurofitter) or after each 100 model evaluations (GENESIS). Errors are displayed here in arbitrary units, which are different across optimization tools, reflecting differences in the choice of cost functions.

Table 2 | Comparison of the best-fitting parameter values with the original values in Use case 1.

Parameter   Original   Optimizer   Neurofitter   GENESIS    NEURON
gnabar_hh   0.12       0.4242      0.5014        0.2687     0.0968
gkbar_hh    0.036      0.1010      0.1191        0.0714     0.0294
gl_hh       0.000300   0.000769    0.000772      0.000313   0.000320

Conductance density values are given in S/cm2.

Table 3 | Comparison of the best-fitting parameter values with the original values in Use case 2.

Parameter     Original   Optimizer   Neurofitter   GENESIS    NEURON
tau1 (ms)     0.3        0.3006      0.0406        0.3002     0.2561
tau2 (ms)     3          2.9960      2.9940        2.9998     3.0438
Weight (uS)   0.01       0.010002    0.01209       0.009997   0.010137
Delay (ms)    2          1.9783      0.3672        1.9863     2.0288

biophysically detailed (CA1 PC) model to a somatic current step. In order to make the comparison between the different programs as fair as possible, we allowed each program to run for a comparable number of iterations (approximately 10,000); this also meant that, instead of the more extensive optimization allowed for Optimizer whose result was shown in the Use cases section (Figure 6), here we ran Optimizer's Evolutionary Optimization algorithm for only 100 generations with 100 individuals each. On this difficult task, GENESIS came up with the best solution in terms of mean squared error


FIGURE 10 | Comparison of the performance of the four model fitting tools on Use case 2 (fitting synaptic parameters in voltage clamp). Figure layout and notation are similar to Figure 9. Error values for Optimizer (B) and GENESIS (D) reflect mean squared error, measured in (nA)²; the value of the PPTD error function in Neurofitter is displayed in (C).

and spike timings (see Table 1), but Optimizer also found a reasonably good solution (with a correct spike count, and a fairly good fit to the subthreshold range, but a worse fit to the actual spike times) (Figure 11). The solutions found by NEURON and Neurofitter were substantially worse, with incorrect spike counts, spike timings, and spike shapes. The different results of the three tools which used global optimization algorithms probably (at least partially) reflect differences in the cost functions used: Neurofitter used its only built-in cost function (PPTD), GENESIS used a combination of mean squared error and its built-in function (spkcmp) for comparing spike timings, while in Optimizer we used a combination of six features (mean squared error, spike count, ISI differences, AP amplitude, latency to first spike, and AHP depth). This example also illustrates that a few properly selected features can result in a solution which is just as good (or better) than one obtained using a larger number of features.

4. Fitting the passive parameters of a morphologically detailed CA1 pyramidal cell model to experimental data based on a complex current clamp stimulus: Optimizer, NEURON, and GENESIS all found approximately equally good solutions (using a mean squared error function), and significantly outperformed Neurofitter (which used PPTD) on this task (Figure 12).

In conclusion, Optimizer delivered the lowest or second lowest error (according to our measures) among the four programs tested on all four test cases, and successfully solved a wide variety of problems. While GENESIS could not handle all of these problems (as it does not support integrate-and-fire type models), its simulated annealing algorithm performed very well on the remaining tasks. As we expected, NEURON's local optimization algorithm provided good solutions when the number of parameters was small and the error function contained a single


FIGURE 11 | Comparison of the performance of the four model fitting tools on Use case 3 (fitting voltage traces from a detailed compartmental model). Insets allow a better visual comparison of spike shapes. Figure layout and notation are similar to Figure 9. Errors are in arbitrary units, which differ between panels (B–D), due to differences in the cost functions used.

well-defined minimum over a large region of the parameter space, but it performed significantly worse in high-dimensional search spaces, which probably contained multiple local minima. Neurofitter's generally poor performance came as a surprise to us given its demonstrated ability to solve similar problems (Van Geit et al., 2007). However, it is quite possible (even likely) that a better fit could have been achieved with any of these tools by fine-tuning the settings of the optimization algorithms or by using a different cost function, especially on the more complex tasks.

FUTURE DEVELOPMENT
This paper describes only a snapshot of the development of our model optimization software. As we demonstrated above, Optimizer is already a working piece of software with many useful functions. Initial development of the program was driven by the realization that no currently available neuronal optimization tool could handle the variety of problems that we encountered in our research, and we are already using Optimizer in the laboratory in several different projects. However, we also aim to provide a tool which is useful for the wider neuroscience community (both the core community of computational neuroscientists and those experimentalists who use modeling as an auxiliary method). Therefore, based on the feedback we receive, we intend to further improve the usability of the program, and also to keep adding features requested by the users. We envisage that some of this development will be handled by the core team of developers (currently three persons), but we also hope that the open and modular design of our software will encourage other researchers to contribute and add their favorite protocols, cost functions, and optimization algorithms to Optimizer. We also encourage potential users to send us further use cases, specifying the kinds of model optimization problems that they need to solve, so that we can tell them whether and how they can use Optimizer to solve these problems, and to see how we should further extend the


FIGURE 12 | Comparison of the performance of the four parameter optimization tools on Use case 4 (fitting the parameters of a morphologically detailed passive multi-compartmental model to experimental data). (A–C) Comparison of the best-fitting traces with the target trace. (A) Overview of the whole fit. (B) More detailed view of the response to the short input pulse. (C) Detailed view of the response to the long pulse. (D–F) Plots of the evolution of the lowest error. Error values in panels (D,F) reflect mean squared error, measured in (mV)², while PPTD error is shown in panel (E).

capabilities of the program to make it more widely useful. Finally, as Optimizer is released under the GNU Lesser General Public License, it can be used and potentially further developed in other projects.

We have a long and growing list of improvements that we plan to make, and we will describe some of the most important items here. First, as the uniqueness of our software comes mainly from its convenient user interface, we plan to extend the GUI to support an even wider range of problems and options. In particular, as control of the simulations from the GUI is possible only for internal simulators, we aim to support some additional popular simulators (in addition to NEURON) at this level. Adding the simulation platform PyNN (Davison et al., 2008) would be a logical next step, as this would enable us to control all the simulators (including NEST, Brian, and PCSIM) supported by PyNN. Second, we also plan to extend the range of possible target data (and corresponding simulation results) to more complex data sets, possibly including (at the same time) time series (current, voltage, and other continuous variables), discrete events (such as spike times), and abstract (derived) features. This task could be made easier by taking advantage of a Python-based data representation framework such as the neo package (Garcia et al., 2014). Third, we plan to add batch-processing capabilities to the software (first using the command-line interface, but eventually also through the GUI) so that, for instance, the same type of model (with different parameters) could be fitted automatically to data obtained from multiple cells. Finally, as model optimization can be extremely time-consuming, we will look into different ways of parallelizing the process of model fitting. As a first step, we will take advantage of the existing parallel capabilities of the optimization modules (inspyred and scipy) and possibly the simulators themselves. We also plan to make the code within


Optimizer more efficient by vectorizing critical calculations (such as the evaluation of the cost functions).

CONCLUSIONS
In this article, we have described a novel tool for the optimization of the parameters of neuronal models. This is a critical, but also complex and often time-consuming step in the construction of biologically relevant models of single neurons and networks. Fitting appropriate models is also becoming an important tool in the quantitative analysis of physiological data sets. However, the results of model fitting can be heavily affected by technical details such as the choice of the optimization algorithm, and actually implementing model fitting has been cumbersome with previously existing tools. This is where we believe our software can make a difference: by making available the power of some of the most advanced methods in model optimization through an intuitive user interface, we hope to make it possible for a larger community of non-expert users to create better models and analyze data in a more efficient and consistent way.

ACKNOWLEDGMENTS
We thank Miklós Szoboszlay and Zoltán Nusser for sharing their data and analysis scripts. Support from OTKA (K83251), ERC-2011-ADG-294313 (SERRACO), and the EU FP7 grant no. 604102 (Human Brain Project) is gratefully acknowledged. Michael Vella is funded by a Medical Research Council (MRC) Capacity Building Studentship.

REFERENCES
Bahl, A., Stemmler, M. B., Herz, A. V. M., and Roth, A. (2012). Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data. J. Neurosci. Methods 210, 22–34. doi: 10.1016/j.jneumeth.2012.04.006
Bower, J. M., and Beeman, D. (1998). The Book of GENESIS (2nd Edn.): Exploring Realistic Neural Models with the GEneral NEural SImulation System. New York, NY: Springer-Verlag, Inc.
Brent, R. P. (2002). Algorithms for Minimization Without Derivatives. Mineola, NY: Dover Publications.
Byrd, R. H., Lu, P., Nocedal, J., and Zhu, C. (1995). A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16, 1190–1208. doi: 10.1137/0916069
Carnevale, N. T., and Hines, M. L. (2006). The NEURON Book. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511541612
Cornelis, H., Rodriguez, A. L., Coop, A. D., and Bower, J. M. (2012). Python as a federation tool for GENESIS 3.0. PLoS ONE 7:e29018. doi: 10.1371/journal.pone.0029018
Davison, A. P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., Pecevski, D., et al. (2008). PyNN: a common interface for neuronal network simulators. Front. Neuroinform. 2:11. doi: 10.3389/neuro.11.011.2008
De Schutter, E., and Bower, J. M. (1994a). An active membrane model of the cerebellar Purkinje cell. II. Simulation of synaptic responses. J. Neurophysiol. 71, 401–419.
De Schutter, E., and Bower, J. M. (1994b). An active membrane model of the cerebellar Purkinje cell. I. Simulation of current clamps in slice. J. Neurophysiol. 71, 375–400.
Druckmann, S., Banitt, Y., Gidon, A., Schürmann, F., Markram, H., and Segev, I. (2007). A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front. Neurosci. 1, 7–18. doi: 10.3389/neuro.01.1.1.001.2007
Druckmann, S., Berger, T. K., Hill, S., Schürmann, F., Markram, H., and Segev, I. (2008). Evaluating automated parameter constraining procedures of neuron models by experimental and surrogate data. Biol. Cybern. 99, 371–379. doi: 10.1007/s00422-008-0269-2
Eichner, H., and Borst, A. (2011). Hands-on parameter search for neural simulations by a MIDI-controller. PLoS ONE 6:e27013. doi: 10.1371/journal.pone.0027013
Eppler, J. M., Helias, M., Muller, E., Diesmann, M., and Gewaltig, M.-O. (2008). PyNEST: a convenient interface to the NEST simulator. Front. Neuroinform. 2:12. doi: 10.3389/neuro.11.012.2008
Garcia, S., Guarino, D., Jaillet, F., Jennings, T., Pröpper, R., Rautenberg, P. L., et al. (2014). Neo: an object model for handling electrophysiology data in multiple formats. Front. Neuroinform. 8:10. doi: 10.3389/fninf.2014.00010
Gerstner, W., and Naud, R. (2009). Neuroscience. How good are neuron models? Science 326, 379–380. doi: 10.1126/science.1181936
Gleeson, P., Crook, S., Cannon, R. C., Hines, M. L., Billings, G. O., Farinella, M., et al. (2010). NeuroML: a language for describing data driven models of neurons and networks with a high degree of biological detail. PLoS Comput. Biol. 6:e1000815. doi: 10.1371/journal.pcbi.1000815
Goodman, D. F. M., and Brette, R. (2009). The Brian simulator. Front. Neurosci. 3, 192–197. doi: 10.3389/neuro.01.026.2009
Gurkiewicz, M., and Korngreen, A. (2007). A numerical approach to ion channel modelling using whole-cell voltage-clamp recordings and a genetic algorithm. PLoS Comput. Biol. 3:e169. doi: 10.1371/journal.pcbi.0030169
Hay, E., Hill, S., Schürmann, F., Markram, H., and Segev, I. (2011). Models of neocortical layer 5b pyramidal cells capturing a wide range of dendritic and perisomatic active properties. PLoS Comput. Biol. 7:e1002107. doi: 10.1371/journal.pcbi.1002107
Hendrickson, E. B., Edgerton, J. R., and Jaeger, D. (2011). The use of automated parameter searches to improve ion channel kinetics for neural modeling. J. Comput. Neurosci. 31, 329–346. doi: 10.1007/s10827-010-0312-x
Hines, M. L., Davison, A. P., and Muller, E. (2009). NEURON and Python. Front. Neuroinform. 3:1. doi: 10.3389/neuro.11.001.2009
Huys, Q. J. M., Ahrens, M. B., and Paninski, L. (2006). Efficient estimation of detailed single-neuron models. J. Neurophysiol. 96, 872–890. doi: 10.1152/jn.00079.2006
Huys, Q. J. M., and Paninski, L. (2009). Smoothing of, and parameter estimation from, noisy biophysical recordings. PLoS Comput. Biol. 5:e1000379. doi: 10.1371/journal.pcbi.1000379
Káli, S., and Freund, T. F. (2005). Distinct properties of two major excitatory inputs to hippocampal pyramidal cells: a computational study. Eur. J. Neurosci. 22, 2027–2048. doi: 10.1111/j.1460-9568.2005.04406.x
Keren, N., Peled, N., and Korngreen, A. (2005). Constraining compartmental models using multiple voltage recordings and genetic algorithms. J. Neurophysiol. 94, 3730–3742. doi: 10.1152/jn.00408.2005
Kirkpatrick, S., Gelatt, C. D. Jr., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science 220, 671–680. doi: 10.1126/science.220.4598.671
Naud, R., Marcille, N., Clopath, C., and Gerstner, W. (2008). Firing patterns in the adaptive exponential integrate-and-fire model. Biol. Cybern. 99, 335–347. doi: 10.1007/s00422-008-0264-7
Nelder, J. A., and Mead, R. (1965). A simplex method for function minimization. Comput. J. 7, 308–313. doi: 10.1093/comjnl/7.4.308
Poirazi, P., Brannon, T., and Mel, B. W. (2003). Arithmetic of subthreshold synaptic summation in a model CA1 pyramidal cell. Neuron 37, 977–987. doi: 10.1016/S0896-6273(03)00148-X
Rossant, C., Goodman, D. F. M., Fontaine, B., Platkiewicz, J., Magnusson, A. K., and Brette, R. (2011). Fitting neuron models to spike trains. Front. Neurosci. 5:9. doi: 10.3389/fnins.2011.00009
Rossant, C., Goodman, D. F. M., Platkiewicz, J., and Brette, R. (2010). Automatic fitting of spiking neuron models to electrophysiological recordings. Front. Neuroinform. 4:2. doi: 10.3389/neuro.11.002.2010
Svensson, C.-M., Coombes, S., and Peirce, J. W. (2012). Using evolutionary algorithms for fitting high-dimensional models to neuronal data. Neuroinformatics 10, 199–218. doi: 10.1007/s12021-012-9140-7
Van Geit, W., Achard, P., and De Schutter, E. (2007). Neurofitter: a parameter tuning package for a wide range of electrophysiological neuron models. Front. Neuroinform. 1:1. doi: 10.3389/neuro.11.001.2007
Van Geit, W., De Schutter, E., and Achard, P. (2008). Automated neuron model optimization techniques: a review. Biol. Cybern. 99, 241–251. doi: 10.1007/s00422-008-0257-6
Vanier, M. C., and Bower, J. M. (1999). A comparative survey of automated parameter-search methods for compartmental neural models. J. Comput. Neurosci. 7, 149–171. doi: 10.1023/A:1008972005316


Vavoulis, D. V., Straub, V. A., Aston, J. A. D., and Feng, J. (2012). A self-organizing state-space-model approach for parameter estimation in Hodgkin-Huxley-type models of single neurons. PLoS Comput. Biol. 8:e1002401. doi: 10.1371/journal.pcbi.1002401

Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Received: 01 November 2013; accepted: 11 June 2014; published online: 10 July 2014.
Citation: Friedrich P, Vella M, Gulyás AI, Freund TF and Káli S (2014) A flexible, interactive software tool for fitting the parameters of neuronal models. Front. Neuroinform. 8:63. doi: 10.3389/fninf.2014.00063
This article was submitted to the journal Frontiers in Neuroinformatics.
Copyright © 2014 Friedrich, Vella, Gulyás, Freund and Káli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


APPENDIX: EXTENDING OPTIMIZER (TECHNICAL DETAILS)

USING AN EXTERNAL SIMULATOR
In the model selection layer, the external simulator option can be selected. In this case, a command line statement should be entered into the appropriate box, which consists of the following:

– the command that calls the simulator
– the name of the model file
– options to the simulator (optional)
– as the last parameter, the number of parameters subject to optimization

In order to use an external simulator, the model file must be modified to contain the stimulation and recording protocol and to be able to take input parameters from a text file called “params.param” located in the base directory, which should contain one parameter value on each line. The model must also be able to write the traces resulting from running the simulation to a file called “trace.dat” located in the base directory, in a text file containing TAB-separated numerical values, with each trace in a separate column.

USER FUNCTION
Also in the model selection layer, the user can define a function which will be executed for every set of parameters, and which maps the optimized parameters to the appropriate parameters of the NEURON model. This function can be created via the template provided by the GUI or loaded from a previously written file. The function must use the Python syntax of NEURON (with one exception: there can be no empty lines in the section which defines the names of the parameters) to access model elements, and allows the user to optimize a combination of model parameters or to set parameters to a specific value not known during the creation of the model (see use case 4). If such a function is defined, the GUI will hide the list of parameters, and direct selection of model parameters is no longer allowed. The function is checked against basic syntax errors, but we strongly recommend double-checking the function, as other errors will be detected only during runtime.

LOADING A TIME-VARYING STIMULUS
The stimulation settings layer offers an option to load a time-varying stimulus, which will use the play method of NEURON to use the values in the given file to stimulate the model. The file should be a simple text file containing only the stimulation values. The number of values must be equal to the number of sample points generated by the simulation (one can calculate this by dividing the length of the simulation by the integration step size).
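A minimal sketch of preparing such a stimulus file follows. The file name, simulation length, and step size are example values (not Optimizer defaults), and writing one value per line is an assumption about the layout beyond the requirement that the file contain only the values:

```python
# Number of stimulus values = simulation length / integration step size.
tstop = 100.0  # simulation length in ms (example value)
dt = 0.05      # integration step size in ms (example value)
n_samples = int(round(tstop / dt))
print(n_samples)  # -> 2000

# Write a simple current step of that length, one value per line
# (assumed layout; the file must contain only the stimulation values):
with open("stimulus.txt", "w") as f:
    for i in range(n_samples):
        t = i * dt
        f.write("%g\n" % (0.5 if 20.0 <= t < 70.0 else 0.0))
```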

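To make the external-simulator file protocol described in the appendix concrete, here is a minimal sketch of the exchange: the parameter file as Optimizer would write it, and a stand-in "model" that reads it and emits a trace file in the required format. The simulation itself is a placeholder, and the script name in the example command line is hypothetical:

```python
# Stand-in for Optimizer writing one parameter value per line before a run:
with open("params.param", "w") as f:
    f.write("0.12\n0.036\n")

# --- the external "model" script would then do the following ---

# Read the parameters back from params.param in the base directory:
with open("params.param") as f:
    params = [float(line) for line in f if line.strip()]

# Placeholder "simulation" standing in for the real model run:
trace = [params[0] * i for i in range(5)]

# Write the resulting trace to trace.dat: TAB-separated numerical values,
# one trace per column (a single column here, so no tabs are needed):
with open("trace.dat", "w") as f:
    for v in trace:
        f.write("%g\n" % v)
```

Such a script could then be registered with a command line like `python external_model.py 2`, where the trailing 2 is the number of parameters subject to optimization (the script name is an assumption).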
