Version 3 (modified by mj41, 5 years ago)

Possible Lorito Designs

Opcodes

Lorito is implemented as a fixed set of low-level opcodes in 3-argument form that can be trivially converted to JIT instructions. All other ops (PIR, dynops, etc.) are implemented in terms of these Lorito ops. Once Lorito is implemented, much of Parrot's core code can be rewritten in something that compiles down to Lorito.

Pro

  • easy to convert to JIT instructions
  • easy to provide runtime code generation
  • whole-program optimization is possible for all subsystems written in Lorito - cotto
  • such ops would be easier to translate into C for non-jit-capable systems or builds - cotto
  • security privileges can be implemented on a per-opcode basis, in case we ever get around to that - cotto
  • bytecode validation is simpler - cotto

Con

  • how are ops defined?
    • likely inspired by features/mindsets provided by supported JITs
    • if the set of supported JITs is large, ops will cater to the lowest common denominator
      • Counterpoint: This is an issue of API design. If we understand the non-essential capabilities we want to support (e.g. optimization passes, etc), we can design the API so that such capabilities can be exploited but not required. - cotto
    • if LLVM is the only supported JIT (initial plan), ops will tend to mirror LLVM ops, and likely be difficult to work with to support other JIT systems
      • Counterpoint: This is an implementation detail. Once we start designing and implementing Lorito, some of us can familiarize ourselves with other JIT systems to ensure that Lorito can also support them. - cotto
  • writing the entire VM in low-level ops is not desirable from either a readability or a writability perspective. Most of the VM will be opaque to the JIT, which is bad for inlining.
  • requiring people to write ops at all will create a pressure to add abstractions
    • Counterpoint: We don't want most people to write Lorito code directly. We *want* those abstractions so that most of Parrot can be written in PIR, NQP or even a low-runtime HLL, since it'll all compile down to Lorito ops. - cotto

Clang

The VM is exposed to the JIT system by compiling it to a JIT-visible form using a C-to-JIT compiler such as Clang. Code generation is defined separately.

Pro

  • exposes the VM to the JIT subsystem
  • easy for Parrot: no special action needs to be taken on the source (though the build system would need to be tweaked to support this)

Con

  • requires supported JIT systems to provide a C-to-JIT compiler. AFAIK, only LLVM has this
  • does not address runtime code generation, the main purpose of a JIT

Structured

The code defining important portions of the VM (potentially all of it, but at least the common hot paths), including all code that performs runtime code generation, is written in a new structured language designed specifically for this purpose. Supported execution strategies (C, various JIT systems, etc.) are implemented as backends to the compiler.

Probable language features

  • inline C sections to ease the transition, and for when you really need C
  • likely similar in capabilities to C, as it also needs to be translated to C
    • similar type system
    • similar memory access capabilities (eg: arrays, pointer arithmetic, etc)
  • runtime generated sections
    • templates to be populated at runtime (but compiled to JIT-internal form at compile time)
    • types probably need to be values for this to work (eg: type_t x = short; )

Pro

  • easier to work with than ops => more of the VM exposed to the JIT => better inlining
  • possible to define runtime code generation behaviour (in a nice, declarative way, possibly)
  • don't need to implement a C compiler to support another JIT system
  • allows for working in a nicer language than C (for whatever values of nice we choose)

Con

  • requires defining, and agreeing upon, a new language
  • requires implementing said language and integrating it into the build system (possibly bootstrapping)

See also

 Lorito bytecode security vs. jittability, Peter Lobsinger