
An overview of PythonPDEVS

Yentl Van Tendeloo¹   Hans Vangheluwe¹,²

¹ University of Antwerp, Belgium
² McGill University, Canada

[email protected]
[email protected]

Abstract:
We present an overview of PythonPDEVS, a family of DEVS simulation kernels. While a plethora of DEVS simulation kernels exist nowadays, we believe that there is a gap between low-level, compiled simulation kernels and high-level, interpreted simulation kernels. PythonPDEVS fills this gap by providing users with a high-level, interpreted simulation tool that offers features similar to those found in other high-level tools, while offering performance comparable to the low-level tools. In this paper, we focus on the three main motivations for the use of PythonPDEVS: (1) the use of Python as a high-level, interpreted language, which is also used by modellers to create their models, (2) a rich feature set, comparable to other high-level tools, and (3) decent simulation performance, comparable to other low-level tools. PythonPDEVS therefore aims at users new to DEVS modelling and simulation, or to programming, while still offering competitive performance.

Keywords:
Python, DEVS, tool, distributed simulation, performance

1 Introduction

We present an overview of PythonPDEVS, a family of DEVS simulation kernels. While a plethora of DEVS simulation kernels exist nowadays, we feel that there still exists a gap. Most DEVS simulation kernels, like adevs [1], vle [2], or CD++ [3], are implemented in low-level, compiled languages, such as C++. These languages are a logical choice for the implementation of a simulation kernel, due to the resulting efficiency. But while efficient during simulation, modellers are required to write low-level code as well, which often requires compilation. This results in slower turnaround than if an interpreted programming language were used [4]. More feature-rich and high-level simulation kernels certainly exist, such as MS4Me [5] and X-S-Y [6]. And while they have lots of features and are easy to use, performance is not one of their top priorities. This results in slow simulations compared to lightweight, compiled simulation kernels, making them unsuited for large-scale DEVS models. There are thus two groups of simulation kernels: lightweight, compiled, and efficient on the one hand, and high-level, feature-rich, but inefficient on the other hand. We believe that PythonPDEVS can fill this gap.

The core characteristics of PythonPDEVS are presented next. First, it is implemented in Python, a high-level, interpreted language. Models are written in the same language, resulting in completely interpreted execution and avoiding compilation. This makes it faster to prototype simulation models, as it reduces the turnaround time. Second, PythonPDEVS offers a wide set of features, similar to currently available simulation tools. While PythonPDEVS itself is only a simulation kernel, tools like DEVSimPy [7] and the AToMPM DEVSDebugger [8] build on it to provide even more features. Most features are supported out-of-the-box, without any custom code. Third, PythonPDEVS offers decent performance. Performance is not at the level of the lightweight, compiled simulation kernels, but it is significantly faster than that of other feature-rich, high-level tools.

We will take a deeper look at the latter two points: the supported features, and the performance tweaks applied to achieve this performance. Our previous work [9, 10] investigated some features and their performance in detail. Here, we present an overview of our complete feature set.
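Since models are written in Python itself, a DEVS model is just a Python class. The sketch below shows a minimal Atomic DEVS model; it follows the conventional PythonPDEVS (pypdevs) class interface, but the exact import path and method names are assumptions and may differ between versions of the tool.

    # A minimal Atomic DEVS model written directly in Python (sketch).
    # Import path and method names follow the usual pypdevs conventions,
    # but are assumptions here and may differ between versions.
    from pypdevs.DEVS import AtomicDEVS

    class Counter(AtomicDEVS):
        def __init__(self, name="counter"):
            AtomicDEVS.__init__(self, name)
            self.state = 0                        # plain Python object as state
            self.out = self.addOutPort("out")     # output port

        def timeAdvance(self):
            # Stay in the current state for one time unit.
            return 1.0

        def outputFnc(self):
            # Emit the current count on the output port.
            return {self.out: [self.state]}

        def intTransition(self):
            # Internal transition: increment the counter.
            return self.state + 1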
2 Features

PythonPDEVS supports a wide set of features, similar to those found in other high-level DEVS simulation tools. While there is no graphical front-end, some work has been done on this by [7, 8, 11]. Many of the features supported by other tools, and more, become available through the use of these extensions.

Table 1 presents an overview of the features, and the cases in which they are supported. PythonPDEVS supports three modes of execution: Sequential As-fast-as-possible (Seq. AFAP), Sequential Realtime (Seq. RT), and Distributed As-fast-as-possible (Dist. AFAP). Note that distributed realtime is not supported at all. Due to the different characteristics of each mode of execution, some features are selectively available. For the features that are not implemented, we could not find a useful enough application to warrant the significant effort required.

Table 1: Supported features in each situation.

Feature                   Seq. AFAP   Seq. RT   Dist. AFAP
Classic DEVS              Y           Y         N
Parallel DEVS             Y           Y         Y
Dynamic Structure DEVS    Y           Y         N
Tracing                   Y           Y         Y
Checkpointing             Y           N         Y
Nested simulation         Y           Y         N
Termination condition     Y           Y         Y
Livelock detection        Y           Y         Y
Transfer functions        Y           Y         Y
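To make the three modes concrete, the following sketch configures a sequential as-fast-as-possible run; the commented lines hint at how a realtime run would be requested. The Simulator class and the setTerminationTime/simulate calls follow the usual pypdevs API, while the realtime-related name is an assumption and may differ per version.

    # Selecting a mode of execution (sketch; assumes the pypdevs Simulator API).
    from pypdevs.simulator import Simulator

    model = Counter()              # the Atomic DEVS model from the earlier sketch
    sim = Simulator(model)

    # Sequential as-fast-as-possible (Seq. AFAP): run until simulated time 100.
    sim.setTerminationTime(100.0)
    sim.simulate()

    # Sequential realtime (Seq. RT) would instead couple simulated time to the
    # wall clock; the exact option name is an assumption and may differ:
    # sim.setRealTime()

    # Distributed AFAP additionally requires spreading the model over several
    # nodes; its configuration is not shown here.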
For distributed simulation, Time Warp [12] is used to provide optimistic synchronization. With Time Warp, each distributed node simulates ahead in time, under the assumption that no causality errors will occur. Should such an error occur (i.e., a message from the past arrives), the simulation state is rolled back to the point in time where the message was supposed to arrive. This allows for parallelism, as all nodes can simulate independently, but it incurs two kinds of overhead: a fixed one, to store each intermediate simulation state, and a variable one, to roll back the simulation state when a causality error is detected.

In realtime simulation, the simulated time is coupled with the wall-clock time, as shown in Figure 1. This means that in a single second of wall-clock time, the simulation will also progress by exactly a single second. Scaled variants of this are possible, for example where each second in wall-clock time causes a progression of four seconds in simulated time. This is in contrast to as-fast-as-possible simulation, where the simulation simply progresses as fast as the system allows. Realtime simulation is mainly used when coupling the model with other realtime components, whereas as-fast-as-possible simulation is used when only the simulation results matter.

[Figure 1: Realtime versus as-fast-as-possible simulation. Simulated time is plotted against wallclock time for realtime (scale = 1), scaled realtime (scale > 1 and scale < 1), and as-fast-as-possible simulation.]

The remainder of this section briefly describes each feature, as well as why it is relevant for users to have this feature.

Classic DEVS: Classic DEVS refers to the original DEVS formalism defined by [13]. It is still a viable formalism, and is still preferred by some people due to its simplicity. There is, however, not that much opportunity for parallelism and efficient execution. For performance reasons, most tools no longer support Classic DEVS. Due to the difficulty of global synchronization, there is also no distributed variant of Classic DEVS implemented in PythonPDEVS.

Parallel DEVS: Parallel DEVS [14] is an extension of Classic DEVS which, as the name implies, offers more options for parallel execution. Apart from the prospect of parallel execution, there are also some additional optimizations in place that allow for faster execution, even during sequential execution. It is now becoming the default DEVS formalism implemented by tools, and is also the default in PythonPDEVS.

Dynamic Structure DEVS: Dynamic Structure DEVS [15] is another extension of DEVS (both Classic DEVS and Parallel DEVS), where the model can be reconfigured at runtime. Some models, such as those where entities are created and destroyed during simulation, lend themselves much better to these cases, and are therefore best expressed using Dynamic Structure DEVS. PythonPDEVS does not support Dynamic Structure DEVS during distributed simulation, since this opens up the possibility for connections to change as well, which can have catastrophic results in optimistic synchronization if not handled well.

Tracing: One of the most important artifacts of simulation is the simulation trace, which contains the simulation results. Different users need different traces: some prefer textual traces, whereas others want visual traces (e.g., plots of the evolution of a value). In PythonPDEVS, multiple tracers can be plugged in, which are invoked at every transition that needs to be traced. Several are provided out-of-the-box, with the most helpful one being a normal textual trace. Visual tracers are provided as well. Users can easily define their own tracers, which are invoked at every simulation step.
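As a sketch of how tracers are attached, the call below enables the built-in textual tracer; the setVerbose name follows the usual pypdevs API, while the custom-tracer hook shown in the comment is an assumption and may be named differently.

    # Attaching tracers to a simulation run (sketch; pypdevs-style API).
    from pypdevs.simulator import Simulator

    sim = Simulator(Counter())        # Counter from the earlier sketch

    sim.setVerbose("trace.txt")       # built-in textual tracer, written to a file
    # sim.setVerbose(None)            # or trace to standard output instead

    # A user-defined tracer would be registered through a similar call;
    # the exact hook name below is an assumption:
    # sim.setCustomTracer("my_tracer.py", "MyTracer", [])

    sim.setTerminationTime(50.0)
    sim.simulate()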
Checkpointing: Simulation of big models can take a long time, even using a fast simulation kernel. The longer a simulation takes, the higher the chance that it will be abruptly terminated due to hardware failure. Distributed simulations are even more prone to hardware failure, as they also introduce network communication. Especially for such big simulations, it is important to be able to continue a simulation after it was abruptly terminated. This is possible through checkpointing, where the current simulation state is periodically stored. Should simulation terminate abruptly, it is possible to continue simulation, without any loss of information, by restoring that snapshot. This feature is not supported in realtime simulation, as storing the current simulation state introduces significant and unpredictable delays in the simulation. Furthermore, it is difficult to conceive the semantics of restoring a crashed realtime simulation: the linear relation between wall-clock time and simulation time is broken.

Nested simulation: In case the execution of the current model depends on the simulation of another model (possibly an abstracted model of the current model), it is useful to allow for nested simulation. The currently running simulation is suspended, and the nested simulation is started. After simulation, the result of the nested simulation is used to define some aspects of the currently running simulation. PythonPDEVS supports this feature thanks to a clean design, which avoids the use of global and static variables. For distributed simulation, it is possible to nest a sequential simulation inside a distributed simulation, but it is not possible to nest a distributed simulation in another simulation. While not impossible to implement, the synchronization overhead would be significant, and it would require separate synchronization algorithms.

Termination condition: While most simulation kernels nowadays support the use of a termination time, PythonPDEVS also supports the use of a termination condition. In this case, a special function is invoked before each simulation step, which determines whether simulation should terminate or not. This termination function has access to the current simulation state and the current simulation time (to allow for time-based termination). This causes a significant overhead in the general case, but offers more possibilities to the user. For example, in design space exploration, we want to quickly prune models that don't fulfill some basic requirements, which is only possible by checking these conditions at simulation time.
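A termination condition is just a Python function handed to the simulator. The sketch below stops the simulation as soon as the counter's state reaches 1000 or simulated time exceeds 500; the setTerminationCondition call and the (time, model) signature follow the usual pypdevs conventions, but are assumptions that may differ per version.

    # A termination condition: invoked before every simulation step (sketch).
    from pypdevs.simulator import Simulator

    def terminate_when(clock, model):
        # clock is the current simulation time, model is the root model.
        # Assumption: the simulated time is the first element of the clock value.
        simulated_time = clock[0]
        return model.state >= 1000 or simulated_time >= 500.0

    sim = Simulator(Counter())                # Counter from the earlier sketch
    sim.setTerminationCondition(terminate_when)
    sim.simulate()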
Livelock detection: One of the problems that remain in the DEVS formalism is the possibility of a time advance equal to zero. While this is not a fundamental problem, and is even necessary in some situations, it is possible that such zero time advances form a loop. In such a loop, the simulation livelocks, as it never progresses in simulated time. Since simulation time no longer progresses, simulation of the model will never terminate, even though models keep being executed. Static analysis is difficult, if not impossible, since the time advance depends on the current simulation state, which is unknown beforehand. PythonPDEVS monitors model execution, and aborts execution if the simulation time has not increased after a sufficient number of transitions. This method sometimes marks fine models as erroneous (i.e., it allows for false positives), but will certainly terminate a livelocked simulation (i.e., no false negatives).

Transfer functions: Despite the fact that all variants of the DEVS formalism define the possibility of transfer functions, only a few simulation tools actually implement them. The use of transfer functions eases the reuse of models in a different context, where a conversion between different event types needs to happen. Due to the inefficiencies caused by a naive implementation, and their rare use by the community, many simulation kernels neglect to implement this aspect of the formalism.
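A transfer function is an ordinary Python function attached to a connection, translating events as they cross the coupling. The sketch below assumes the pypdevs connectPorts call accepts such a function as its third argument; that parameter, and the Sink consumer model, are assumptions introduced only for this example.

    # Attaching a transfer (translation) function to a connection (sketch,
    # assuming the pypdevs API).
    from pypdevs.DEVS import AtomicDEVS, CoupledDEVS

    def scale_event(event):
        # Translate the producer's event into the unit the consumer expects.
        return event * 1000

    class Sink(AtomicDEVS):
        def __init__(self):
            AtomicDEVS.__init__(self, "sink")
            self.state = []
            self.inp = self.addInPort("inp")

        def timeAdvance(self):
            return float("inf")               # passive model: only reacts to input

        def extTransition(self, inputs):
            return self.state + inputs[self.inp]

    class Pipeline(CoupledDEVS):
        def __init__(self):
            CoupledDEVS.__init__(self, "pipeline")
            self.producer = self.addSubModel(Counter())   # Counter from the earlier sketch
            self.consumer = self.addSubModel(Sink())
            # Assumption: the third argument of connectPorts is the transfer function.
            self.connectPorts(self.producer.out, self.consumer.inp, scale_event)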
3 Performance

As we try to differentiate ourselves from other feature-rich and high-level DEVS tools, we also focus on improving simulation performance. PythonPDEVS therefore contains many internal optimizations, mainly focused on the use of "leaky abstraction": DEVS models are optionally augmented with domain-specific hints, which offer domain-specific knowledge about the model to the simulation kernel. Normally, a DEVS simulation kernel is unaware of the domain of the model it is simulating, and can therefore not use more efficient, but less general, algorithms.

Due to space restrictions, we do not include performance results here, but refer to previous work [9, 10]. Using intrusive features (e.g., tracing, checkpointing) affects performance.

Table 2 shows an overview of our applied optimizations, and when they are applicable. Some optimizations are not applicable in all modes of execution, as some focus on the problems arising from distributed simulation. Again, distributed realtime simulation is not supported and thus not shown. Some of these optimizations are also implemented in other DEVS simulation tools, such as direct connection [16].

Table 2: Performance optimizations applied in each situation.

Optimization        Seq. AFAP   Seq. RT   Dist. AFAP
Direct connect      Y           Y         Y
Single loop         Y           Y         Y
Termination time    Y           Y         Y
No transfers        Y           Y         Y
Scheduler           Y           Y         Y
Migration           N           N         Y
Allocation          N           N         Y
Memoization         Y           Y         Y
State copy hints    N           N         Y
Event copy hints    Y           Y         Y

3.1 General optimizations

We first start with optimizations to the simulation algorithm itself, which are applicable in all situations. These optimizations are hardcoded, and there is no way for the modeller to influence them.

Direct connect: At the start of simulation, all connections are resolved to direct connections between two atomic models. Instead of having intermediate Coupled DEVS models, all of them are removed and links are made direct, thus removing all the intermediate algorithms needed to pass events around. This process is called direct connection [17] and effectively reduces the hierarchy to a single Coupled DEVS model, with all Atomic DEVS models being its direct children. While this process might seem simple, special care must be taken when it is executed in combination with several of our features. With Classic DEVS, the select function reasons about only a single level, and is defined in terms of Coupled DEVS models. This optimization is an implementation detail, and should therefore not be visible to the select function, which should still reason about the (no longer existing) Coupled DEVS models. With Dynamic Structure DEVS, similar problems occur because runtime modifications should be applied to the original structure, and not to the direct-connected structure. Therefore, after modifications are made, the direct connection algorithm is run again to refresh all connections. In a distributed simulation, care should also be taken to keep links correct, even between different simulation nodes. Furthermore, each node should still have a single Coupled DEVS model, without there being a single "global" Coupled DEVS model. Finally, transfer functions should also chain all calls that would normally be made along the path the event follows from its source to its destination.
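The idea behind direct connection can be sketched with a small routine that recursively resolves every coupling until only atomic endpoints remain. This is an illustrative sketch of the technique, not the actual PythonPDEVS implementation; the data structures used here are assumptions made for the example.

    # Illustrative sketch of direct connection: resolve every atomic output port
    # to the atomic input ports it eventually reaches, skipping all intermediate
    # coupled-model ports. Data structures do not reflect PythonPDEVS internals.
    def direct_connect(couplings, is_atomic_port):
        # couplings: dict mapping each port to the list of ports it connects to.
        # is_atomic_port: predicate telling whether a port belongs to an atomic model.
        resolved = {}
        for source in couplings:
            if not is_atomic_port(source):
                continue                      # only keep links that start at an atomic model
            targets, stack, seen = [], list(couplings[source]), set()
            while stack:
                port = stack.pop()
                if port in seen:
                    continue
                seen.add(port)
                if is_atomic_port(port):
                    targets.append(port)      # reached an atomic endpoint
                else:
                    # A coupled-model port: follow the chain one level further.
                    stack.extend(couplings.get(port, []))
            resolved[source] = targets
        return resolved

    # Example: a.out -> C.out_c -> b.in collapses to a.out -> b.in
    links = {"a.out": ["C.out_c"], "C.out_c": ["b.in"]}
    flat = direct_connect(links, is_atomic_port=lambda p: not p.startswith("C."))
    # flat == {"a.out": ["b.in"]}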
Single loop: Instead of closely following the abstract simulator [18], PythonPDEVS takes a significantly different approach. In our approach, we avoid all hierarchy (through the use of direct connection), but also all simulation messages, which are instead exchanged for method calls. During initialization, all models are gathered, and a single event list is constructed. From this event list, imminent models are popped and directly invoked with the commands to execute, instead of having to be retrieved from the hierarchy. In combination with direct connection, this removes hierarchy completely, as models are directly instructed from the main simulation loop. This causes a significant decrease in simulation overhead, as the loop becomes much tighter and messages are avoided altogether. Apart from removing overhead, it simplifies the simulation algorithm, as there is no need for the processing of simulation messages, nor for Coupled DEVS models. Instead of a set of decoupled simulation message processing algorithms, PythonPDEVS uses a single loop, as shown in Listing 1.

Listing 1: Single loop simulation algorithm

    direct connection
    initialize event list
    while not done:
        pop imminent models from event list
        for each imminent model:
            mark model for internal transition
            collect output
            for each output port in output:
                for each connected input port:
                    append event to model input
                    mark model for external transition
        for each marked model:
            if marked for internal and external:
                invoke confluent transition
            else if marked for internal:
                invoke internal transition
            else if marked for external:
                invoke external transition
            compute time advance
            push new time on event list
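For readers who prefer concrete code, the following is a minimal, runnable Python rendering of the loop in Listing 1, built around a heap-based event list. It is an illustrative sketch of the technique, not the PythonPDEVS source; the model interface (time_advance, output, connections, the transition methods) and the Ticker example are assumptions made for this sketch.

    import heapq

    def simulate(models, until):
        # Event list entries: (event time, model id, version, model).
        # The version number implements lazy invalidation of rescheduled entries.
        # All models reachable through connections are expected to be in `models`.
        version = {m: 0 for m in models}
        heap = [(m.time_advance(), id(m), 0, m) for m in models]
        heapq.heapify(heap)
        while heap:
            t, _, v, m = heap[0]
            if v != version[m]:
                heapq.heappop(heap)           # stale entry: discard and retry
                continue
            if t > until:
                break
            clock = t
            # Pop all imminent models from the event list.
            imminent = []
            while heap and heap[0][0] == clock:
                _, _, v, m = heapq.heappop(heap)
                if v == version[m]:
                    imminent.append(m)
            internal = set(imminent)
            external = {}
            # Collect output and route it along the direct connections.
            for m in imminent:
                for port, event in m.output().items():
                    for target, target_port in m.connections.get(port, []):
                        external.setdefault(target, {}).setdefault(target_port, []).append(event)
            # Invoke the appropriate transition for every marked model, then reschedule it.
            for m in internal | set(external):
                if m in internal and m in external:
                    m.conf_transition(external[m])
                elif m in internal:
                    m.int_transition()
                else:
                    m.ext_transition(external[m])
                version[m] += 1
                heapq.heappush(heap, (clock + m.time_advance(), id(m), version[m], m))

    class Ticker:
        # Hypothetical model used only to exercise the loop above.
        def __init__(self):
            self.count = 0
            self.connections = {}             # direct connections: port -> [(model, port)]
        def time_advance(self):
            return 1.0
        def output(self):
            return {"out": self.count}
        def int_transition(self):
            self.count += 1
        def ext_transition(self, inputs):
            pass
        def conf_transition(self, inputs):
            self.int_transition()

    simulate([Ticker()], until=10.0)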
Termination time: While one of the features of PythonPDEVS is the use of a termination condition, this is not an efficient solution. In fact, most of the time, a point in simulated time suffices to determine whether a simulation should terminate. Executing a function is rather slow in Python, so it is more efficient to explicitly inline the comparison of the simulation time with the termination time. Termination conditions are still supported, but if a termination time is defined instead, the inlined code is used exclusively. Certainly in distributed simulation, using a termination time provides vast improvements in simulation time.
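In practice this means that, when a fixed end time is all that is needed, setTerminationTime should be preferred over setTerminationCondition. The sketch below shows both, assuming the usual pypdevs method names.

    # Preferring a termination time over a termination condition (sketch).
    from pypdevs.simulator import Simulator

    sim = Simulator(Counter())                # Counter from the earlier sketch

    # Fast path: the kernel can inline a plain comparison against this time.
    sim.setTerminationTime(10000.0)

    # Flexible but slower alternative: a Python function called before every step.
    # sim.setTerminationCondition(lambda clock, model: clock[0] >= 10000.0)

    sim.simulate()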
No transfers: Yet another feature of PythonPDEVS that impacts performance is the transfer function. Again, however, transfer functions are seldom used in practice, while a naive implementation causes a significant overhead due to the frequent invocation of a function that is, most of the time, simply the identity function. To avoid this slow path, the connection resolution phase of direct connection optimizes out identity functions. If a transfer function is detected on a path, only the explicitly defined functions are chained, as the implicit ones are again the identity function.

3.2 Simulation hints

The remainder of the optimizations are focused on the modeller providing additional information to the simulation kernel. DEVS models are augmented with additional information, but still remain valid DEVS models nonetheless.

Scheduler: Many simulation kernels nowadays hardcode their event queue implementation, or scheduler, which determines which models are imminent at a specific point in time. While hardcoding this component provides some performance benefits, the ideal data structure depends on the model being simulated [9]. PythonPDEVS uses modular schedulers, which allows users to choose the most appropriate data structure for their model. In distributed simulation, it is even possible to use different data structures at different nodes. This feature is a hint to the simulation kernel, telling it how to efficiently manage its data. There is an activity extension for this [19], which provides a polymorphic data structure: the data structure monitors accesses to itself, and optimizes itself for the detected access pattern.
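To see why the choice of scheduler matters, the sketch below contrasts two interchangeable scheduler implementations: a binary heap, which is robust when many models are rescheduled every step, and a sorted list, which can be faster when most models are inactive. This is an illustrative sketch of the design choice, not PythonPDEVS code.

    # Two interchangeable scheduler (event list) implementations (illustrative sketch).
    import heapq
    from bisect import insort
    from itertools import count

    class HeapScheduler:
        # Binary heap: O(log n) schedule and pop; robust when many models are active.
        def __init__(self):
            self.heap = []
            self.tiebreak = count()
        def schedule(self, time, model):
            heapq.heappush(self.heap, (time, next(self.tiebreak), model))
        def imminent_time(self):
            return self.heap[0][0] if self.heap else float("inf")
        def pop_imminent(self):
            t, imminent = self.imminent_time(), []
            while self.heap and self.heap[0][0] == t:
                imminent.append(heapq.heappop(self.heap)[2])
            return imminent

    class SortedListScheduler:
        # Sorted list: very cheap peeks and pops from the front, but O(n) insertion;
        # a good fit when only a few models are rescheduled per step.
        def __init__(self):
            self.entries = []
            self.tiebreak = count()
        def schedule(self, time, model):
            insort(self.entries, (time, next(self.tiebreak), model))
        def imminent_time(self):
            return self.entries[0][0] if self.entries else float("inf")
        def pop_imminent(self):
            t, imminent = self.imminent_time(), []
            while self.entries and self.entries[0][0] == t:
                imminent.append(self.entries.pop(0)[2])
            return imminent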
Migration: In distributed simulation, performance depends on the distribution of the computational load. While the modeller might have a good idea of how the load is distributed, it might change throughout the simulation. PythonPDEVS allows users to specify migration strategies as hints to the simulation kernel. The simulation kernel uses this strategy to migrate parts of the model between different nodes, if necessary. This mechanism is used to equalize the load over the nodes as simulation progresses. There are again activity extensions to this [20], which provide information on the measured load to the migration strategy. The simulation kernel optimizes for the evolution of the simulation state (i.e., the future), instead of the past state. Apart from load measured in CPU time, users can provide further hints to the kernel on what to measure, which could even be a domain-specific notion (e.g., the number of cars on a road).

Allocation: While migration solves the problem of load distribution during simulation, it might not even be possible to find a good initial allocation, or it might be too difficult to set it up during model initialization. PythonPDEVS allows users to define an allocation strategy [10], which is invoked on the completely initialized model. Users can then specify the initial allocation based on the initialized model, instead of on a partially constructed model. The allocation strategy can also be extended using activity extensions [20], which allow for monitoring of an initial run of the simulation. This initial run acts as a sample run of the first few simulation steps. Load distribution and exchanged messages are monitored, and activity values are passed on to the allocation strategy, which uses this data to optimize the model distribution for the remainder of the simulation.

Memoization: Optimistic synchronization using Time Warp incurs a variable overhead, due to the need to roll back the simulation state in case of causality errors. The main cost is not restoring the simulation state, but repeating the same (or similar) simulation as before. As only a few models are influenced by the arrival of an event, there is no need to recompute all models on the node that was rolled back. With memoization, the state of atomic models is stored before and after execution of the transition function. When a rollback occurs, these states are not discarded, but placed in a queue. If the model is executed again, we can potentially reuse these saved states to avoid the execution of the transition function. This requires some hints to the simulation kernel, as the simulation kernel has no way to know whether or not two simulation states are equal. The modeller is required to provide a comparison function between two states.
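The comparison hint boils down to making states comparable. Below is a minimal sketch, assuming that the kernel can pick up equality through the standard Python __eq__ (and __hash__) methods on the state object; how PythonPDEVS consumes this hint may differ per version.

    # A state object that can be compared for memoization (sketch).
    class QueueState:
        def __init__(self, queue, processed):
            self.queue = list(queue)          # jobs waiting
            self.processed = processed        # number of jobs finished

        def __eq__(self, other):
            # Two states are interchangeable if their observable contents match.
            return (isinstance(other, QueueState)
                    and self.queue == other.queue
                    and self.processed == other.processed)

        def __hash__(self):
            return hash((tuple(self.queue), self.processed))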
State copy hints: The fixed overhead of optimistic synchronization is caused by saving all simulation states, in order to be able to restore them if necessary. General ways of copying states, such as serialization or built-in deepcopy functions, have the disadvantage that they have to handle many corner cases as well, and are therefore not that efficient. By providing a more specific copy algorithm, telling the simulation kernel how to make a copy of the current simulation state, this overhead can be partially mitigated.
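A state copy hint is simply a cheaper, state-specific copy routine. The sketch below gives the state a copy() method that only duplicates what actually changes; the convention that PythonPDEVS looks for a method with this name is an assumption and may differ per version.

    # A state with a hand-written copy routine as a state copy hint (sketch).
    class ServerState:
        def __init__(self, queue=(), busy=False):
            self.queue = list(queue)   # mutable part: must be duplicated
            self.busy = busy           # immutable scalar: safe to share

        def copy(self):
            # Assumption: the kernel prefers this method over a generic deepcopy.
            return ServerState(self.queue, self.busy)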

Event copy hints: When an atomic model creates events, care should be taken not to break modularity, as these events potentially contain references or pointers to the state of other models. The host language (e.g., Python or C++) has no way of knowing that this is not allowed in DEVS, and will therefore allow these operations. Such tricks, however, are in violation of the DEVS formalism, and should be considered abuse of the host language. In PythonPDEVS, events are therefore copied by default, such that each model receives a separate copy of the event. While this is nice to have for people new to DEVS, it causes performance problems in both time and space. Therefore, there are three main options in PythonPDEVS: the messages are naively copied (the default), they are not copied at all (for performance), or a user-specified function is invoked to perform the copy. This user-defined function is again a kind of hint to the simulation kernel, which allows it to stay conformant to the DEVS formalism while being relatively efficient.

4 Conclusions and future work

We presented a brief overview of the main features of PythonPDEVS, as well as the performance improvements which make simulation sufficiently fast. Through the combination of a high-level, interpreted programming language, a feature-rich simulation tool, and decent performance, we believe that PythonPDEVS fills the gap which currently separates the efficient, but low-level, simulation tools from the high-level, but inefficient, simulation tools. This makes PythonPDEVS ideal for people who want a lightweight and efficient DEVS simulation kernel, with low turnaround times, while still having access to functionality commonly only found in heavyweight tools. We achieve this by having an implementation in Python, which decreases the development time of both the simulation kernel and the models, but also through the addition of domain-specific hints, which boost simulation performance.

In future work, we will consider the usability of our tool, going further in the direction of the AToMPM DEVSDebugger [8]. We will focus our efforts on five aspects: (1) a modelling and simulation environment, allowing for the easy creation and simulation of models, (2) a library of DEVS models, which allows the sharing and reuse of DEVS models, (3) a debugging environment, which transposes most of the features of code debugging to the world of model debugging, (4) an experiment design environment, allowing the graphical definition of experiments as well, and (5) efficient simulation, making the tool applicable in more situations.

Acknowledgments:
This work was partly funded by a PhD fellowship from the Research Foundation - Flanders (FWO). Partial support by the Flanders Make strategic research centre for the manufacturing industry is also gratefully acknowledged.
References

[1] J. J. Nutaro, "adevs," http://www.ornl.gov/~1qn/adevs/, 2015.
[2] G. Quesnel, R. Duboz, E. Ramat, and M. K. Traoré, "VLE: a multimodeling and simulation environment," in Proceedings of the 2007 Summer Computer Simulation Conference, 2007, pp. 367–374.
[3] G. Wainer, "CD++: a toolkit to develop DEVS models," Software: Practice and Experience, vol. 32, no. 13, pp. 1261–1306, 2002.
[4] J. K. Ousterhout, "Scripting: Higher-Level Programming for the 21st Century," Computer, vol. 31, no. 3, pp. 23–30, 1998.
[5] C. Seo, B. P. Zeigler, R. Coop, and D. Kim, "DEVS modeling and simulation methodology with MS4Me software," in Symposium on Theory of Modeling and Simulation - DEVS (TMS/DEVS), 2013.
[6] M. H. Hwang, "X-S-Y," https://code.google.com/p/x-s-y/, 2012.
[7] L. Capocchi, J. F. Santucci, B. Poggi, and C. Nicolai, "DEVSimPy: A collaborative python software for modeling and simulation of DEVS systems," in Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, 2011, pp. 170–175.
[8] S. Van Mierlo, Y. Van Tendeloo, B. Barroca, S. Mustafiz, and H. Vangheluwe, "Explicit Modelling of a Parallel DEVS Experimentation Environment," in Proceedings of the 2015 Spring Simulation Multiconference, ser. SpringSim '15. Society for Computer Simulation International, 2015, pp. 860–867.
[9] Y. Van Tendeloo and H. Vangheluwe, "The modular architecture of the Python(P)DEVS simulation kernel," in Proceedings of the 2014 Symposium on Theory of Modeling and Simulation - DEVS, 2014, pp. 387–392.
[10] Y. Van Tendeloo and H. Vangheluwe, "PythonPDEVS: a distributed Parallel DEVS simulator," in Proceedings of the 2015 Spring Simulation Multiconference, ser. SpringSim '15. Society for Computer Simulation International, 2015, pp. 844–851.
[11] B. Barroca, S. Mustafiz, S. Van Mierlo, and H. Vangheluwe, "Integrating a Neutral Action Language in a DEVS Modelling Environment," in Proceedings of the 8th International ICST Conference on Simulation Tools and Techniques, ser. SIMUTools '15, 2015.
[12] R. M. Fujimoto, Parallel and Distribution Simulation Systems, 1st ed. John Wiley & Sons, Inc., 1999.
[13] B. P. Zeigler, H. Praehofer, and T. G. Kim, Theory of Modeling and Simulation, 2nd ed. Academic Press, 2000.
[14] A. C. H. Chow and B. P. Zeigler, "Parallel DEVS: a parallel, hierarchical, modular, modeling formalism," in Proceedings of the 26th Conference on Winter Simulation, 1994, pp. 716–722.
[15] F. J. Barros, "Modeling formalisms for dynamic structure systems," ACM Transactions on Modeling and Computer Simulation, vol. 7, pp. 501–515, 1997.
[16] A. Muzy and J. J. Nutaro, "Algorithms for efficient implementations of the DEVS & DSDEVS abstract simulators," in 1st Open International Conference on Modeling and Simulation (OICMS), 2005, pp. 273–279.
[17] B. Chen and H. Vangheluwe, "Symbolic flattening of DEVS models," in Summer Simulation Multiconference, 2010, pp. 209–218.
[18] A. C. H. Chow, B. P. Zeigler, and D. H. Kim, "Abstract simulator for the parallel DEVS formalism," in AI, Simulation, and Planning in High Autonomy Systems, 1994, pp. 157–163.
[19] Y. Van Tendeloo, "Activity-aware DEVS simulation," Master's thesis, University of Antwerp, Antwerp, Belgium, 2014.
[20] Y. Van Tendeloo and H. Vangheluwe, "Activity in PythonPDEVS," in Proceedings of ACTIMS 2014, 2014.
