Due to their popularity and widespread utility, discrete event simulators have been the subject of much research into their efficient design and execution (surveyed in [14,6,17,8]). From a systems perspective, researchers have built many types of simulation kernels and libraries. And, from a modelling perspective, researchers have designed numerous languages specifically for simulation. We introduce each of these three alternative simulator construction approaches below.
Simulation kernels, including systems such as the seminal TimeWarp OS, transparently create a convenient simulation-time abstraction. Such systems operate at the process boundary: they control process scheduling, inter-process communication, and the system clock in a manner that transparently virtualizes time for their applications. The process boundary provides much flexibility. Such systems can transparently support concurrent execution of simulation applications, and even speculative and distributed execution. Furthermore, by mimicking the system-call interface of a conventional operating system, one can run simulations composed of standard, unmodified programs.
Unfortunately, the process boundary is also a source of inefficiency. Simulation libraries, such as Compose and others, trade the transparency afforded by process-level isolation for increased efficiency. For example, by moving from an explicit to a logical process model, one can eliminate the context-switching and marshalling costs required for event dispatch and thus improve simulation throughput. However, various simulation functions that previously existed within the kernel, such as process scheduling and message passing, must then be provided in user space. In essence, the simulation kernel and its applications are merged into a single monolithic process that contains both the simulation model and its own execution engine. This imposes an internal structure on the simulation program that is not explicitly enforced. Moreover, the application code becomes increasingly complex and is littered with library calls and callbacks. In practice, this obscures possible high-level optimizations and limits the degree of execution sophistication.
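The library approach described above can be made concrete with a minimal sketch. The names below (`LibrarySim`, `schedule`) are hypothetical and do not reflect any particular library's API; the point is how model code must invoke the engine explicitly and express all future work as callbacks:

```java
import java.util.PriorityQueue;

// Hypothetical sketch of a library-based simulator: the event engine and the
// application share one process, so model code calls the library directly
// (schedule) and supplies its logic as callbacks -- no context switches or
// marshalling, but also no enforced separation between model and engine.
public class LibrarySim {
    // An event pairs a timestamp with an application callback.
    record Event(long time, Runnable action) {}

    private final PriorityQueue<Event> queue =
        new PriorityQueue<>((a, b) -> Long.compare(a.time(), b.time()));
    private long now = 0;

    public long now() { return now; }

    // Application code schedules future work via an explicit library call.
    public void schedule(long delay, Runnable action) {
        queue.add(new Event(now + delay, action));
    }

    // The engine dispatches callbacks in timestamp order within one process.
    public void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time();
            e.action().run();
        }
    }

    public static void main(String[] args) {
        LibrarySim sim = new LibrarySim();
        StringBuilder log = new StringBuilder();
        sim.schedule(10, () -> log.append("a@" + sim.now() + " "));
        sim.schedule(5, () -> {
            log.append("b@" + sim.now() + " ");
            sim.schedule(20, () -> log.append("c@" + sim.now()));  // nested callback
        });
        sim.run();
        System.out.println(log);  // b@5 a@10 c@25
    }
}
```

Even in this tiny example, the application logic is inverted into callback fragments threaded through library calls, which is precisely the structure that obscures high-level optimizations.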
Simulation languages, such as Simula, Parsec, and many others, are designed to simplify simulation development and to explicitly enforce the correctness of monolithic simulation programs. Simulation languages often introduce simulation-time execution semantics, which transparently allow for parallel and speculative execution, without any program modification. Such languages also often introduce handy constructs, such as messages and entities, that can be used to logically partition the application state. Constraints on simulation state and on event causality can be statically enforced by the compiler, and they also permit important static and dynamic optimizations. An interesting recent example of a language-based simulation optimization is reducing the overhead of speculative simulation execution through the use of reverse computations. While it is perfectly possible to provide forward and reverse computations for each simulation event, doing so manually, without the aid of a special reverse compiler, is not realistic. Unfortunately, simulation languages are, by definition, domain-specific and therefore suffer from specialization. They usually lack modern features and portability, and they also lag in terms of general-purpose optimizations and implementation efficiency. This only perpetuates the small user-base problem. Perhaps the most significant barrier to adoption by the broader community is that the simulation programs themselves need to be rewritten.
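The reverse-computation idea can be illustrated with a small hand-written sketch (all names hypothetical): each event carries both a forward action and its exact inverse, so a speculative executor can undo mis-speculated events without checkpointing the whole model state. A reverse-compiler would generate such inverses automatically; writing them by hand, as here, is only feasible for trivial handlers:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of reverse computation for speculative execution:
// rollback undoes executed events by running their inverses in LIFO order,
// avoiding the cost of saving full state snapshots.
public class ReverseDemo {
    interface ReversibleEvent {
        void forward(Counter state);
        void reverse(Counter state);  // exact inverse of forward()
    }

    static class Counter { long value = 0; }

    // An increment is trivially reversible; general handlers would need
    // compiler support to derive their inverses.
    static ReversibleEvent increment(long amount) {
        return new ReversibleEvent() {
            public void forward(Counter s) { s.value += amount; }
            public void reverse(Counter s) { s.value -= amount; }
        };
    }

    private final Counter state = new Counter();
    private final Deque<ReversibleEvent> executed = new ArrayDeque<>();

    void execute(ReversibleEvent e) {
        e.forward(state);
        executed.push(e);  // remember it in case of rollback
    }

    // Undo the most recent events via their reverse computations.
    void rollback(int count) {
        for (int i = 0; i < count && !executed.isEmpty(); i++) {
            executed.pop().reverse(state);
        }
    }

    long value() { return state.value; }

    public static void main(String[] args) {
        ReverseDemo sim = new ReverseDemo();
        sim.execute(increment(3));
        sim.execute(increment(4));  // suppose this event was mis-speculated
        sim.rollback(1);            // undo it without a state snapshot
        System.out.println(sim.value());  // 3
    }
}
```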
In summary, each of these three fundamental approaches to simulator construction trades off a different desirable property, as shown in Table 1. Thus, despite the plethora of ideas and contributions to theory, languages and systems, the simulation community has repeatedly asked itself ``will the field survive?'' under a perception that it had ``failed to make a significant impact'' on the broader community (see [7,15,2] and others). For example, even though a number of parallel discrete event simulation environments have been shown to scale to very large networks, slow sequential network simulators remain the norm. In particular, most published ad hoc network results are based on simulations of relatively few nodes (usually fewer than 500), of short duration, and over a limited field. Larger simulations usually compromise on simulation detail or duration, reduce node density, or restrict node mobility.
These observations influenced the design and direction of JiST. Specifically, we decided from the outset to:
Instead, we propose a new way of building simulators: to bring simulation semantics to a modern and popular virtual machine. JiST, which stands for Java in Simulation Time, is a new discrete event simulation system built on these principles, integrating the prior systems and language approaches. Specifically, the key motivation behind JiST is to create a simulation system that can execute discrete event simulations both efficiently and transparently, yet to achieve this using only a standard systems language and runtime, where:
These three attributes, the last one in particular, highlight an important distinction between JiST and previous simulation systems: the simulation code that runs on JiST need not be written in a domain-specific language invented specifically for writing simulations, nor need it be littered with special-purpose system calls and callbacks to support runtime simulation functionality. Instead, JiST transparently introduces simulation-time execution semantics to simulation programs written in plain Java, which are then executed over an unmodified Java virtual machine. JiST converts a virtual machine into a simulation system that is both flexible and surprisingly efficient and scalable.
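To make the contrast concrete, the following standalone sketch (not the actual JiST runtime or API, which rewrites bytecode; here a `java.lang.reflect.Proxy` stands in for that machinery) shows the flavor of simulation-time semantics: ordinary method invocations on an entity become timestamped events delivered in simulation-time order, while the calling code reads as plain Java with no explicit scheduling calls:

```java
import java.lang.reflect.Proxy;
import java.util.PriorityQueue;

// Hypothetical sketch of transparent simulation-time execution: a dynamic
// proxy intercepts ordinary method calls on an entity and queues them as
// timestamped events; the caller never invokes a scheduler directly.
public class SimTimeSketch {
    interface Printer { void emit(String msg); }

    record Event(long time, long seq, Runnable deliver) {}

    static final PriorityQueue<Event> queue = new PriorityQueue<>(
        (a, b) -> a.time() != b.time() ? Long.compare(a.time(), b.time())
                                       : Long.compare(a.seq(), b.seq()));
    static long now = 0;
    static long seq = 0;
    static final StringBuilder log = new StringBuilder();

    // Wrap an entity so each invocation is queued at (now + delay) rather
    // than executed immediately; JiST itself does this via bytecode rewriting.
    static Printer entity(Printer target, long delay) {
        return (Printer) Proxy.newProxyInstance(
            Printer.class.getClassLoader(), new Class<?>[]{Printer.class},
            (proxy, method, args) -> {
                Object[] a = args;
                queue.add(new Event(now + delay, seq++, () -> {
                    try { method.invoke(target, a); }
                    catch (Exception e) { throw new RuntimeException(e); }
                }));
                return null;
            });
    }

    public static void main(String[] args) {
        Printer real = msg -> log.append(msg + "@" + now + " ");
        Printer slow = entity(real, 10);
        Printer fast = entity(real, 2);
        slow.emit("s");  // reads like a plain call, but becomes an event
        fast.emit("f");
        while (!queue.isEmpty()) {  // the runtime drains events in time order
            Event e = queue.poll();
            now = e.time();
            e.deliver().run();
        }
        System.out.println(log);  // f@2 s@10
    }
}
```

Note that the call sites contain no library calls or callbacks at all: the reordering of `slow.emit` after `fast.emit` happens entirely inside the runtime, which is the kind of transparency JiST provides for unmodified Java programs.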