1. Overall Parrot Performance. This can be subdivided into several subtopics:
A. Although the new calling conventions now perform at a level comparable to Parrot's speed prior to the refactor, one of the primary purposes of the refactor was to set the stage for a *substantial* improvement in the overall speed of subroutine calls, and this has yet to be achieved. So while there are congratulations to be had all around on the PCC accomplishments thus far, in some ways the work is actually still incomplete.
B. Profiling. The profiling tools are excellent -- people can help both Rakudo/NQP and Parrot itself by profiling the generated code. I highly recommend profiling things run in NQP -- it's much smaller than Rakudo, yet representative of the issues facing all HLLs.
C. Review the code generated by the compiler tools and find places to reduce inefficiencies (both in what the compiler tools produce and in what Parrot provides). But BEWARE, for here be dragons; I've already had a fair bit of frustration in discussing the issues with others on this front. Many people look at the PAST/POST output and quickly conclude that the tools are pretty stupid, because they seem to produce terribly inefficient and convoluted code. But PAST and POST aren't stupid -- rather, they're often doing the only thing they can possibly do to make things work given Parrot's underlying semantics and available opcodes. The fetch and vivify operations are an example here -- PCT *by necessity* (because of Parrot's underlying limitations) has to generate five or six lines of PIR for every fetch-and-vivify operation. In some very limited situations this can be optimized, but AFAICT in the common and general case it cannot.
[3] http://lists.parrot.org/pipermail/parrot-dev/2011-January/005410.html, with the thread continuing in http://lists.parrot.org/pipermail/parrot-dev/2011-February/005431.html
Indeed, it was studying the PIR output of PCT that led Austin and Geoffrey to want to make it more efficient and to propose solutions on Trac. Alas, both of their proposals were fundamentally flawed (which shows how easy it is to over-simplify the problem), but from Austin's proposal we did eventually identify a very workable solution using new "fetch" and "vivify" opcodes.
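To make the shape of the problem concrete, here is a minimal sketch (in Python, not Parrot PIR -- the function name and steps are illustrative, not Parrot's actual opcodes) of what a single fetch-and-vivify operation has to do: fetch the entry, test whether it exists, create a fresh container if it doesn't, store that container back, and only then yield the result. Each of those steps is roughly one line of generated PIR.

```python
def fetch_vivify(container, key, vivifier=dict):
    """Fetch container[key], creating (vivifying) it if absent."""
    entry = container.get(key)     # step 1: fetch the entry
    if entry is None:              # step 2: test for existence
        entry = vivifier()         # step 3: create a fresh container
        container[key] = entry     # step 4: store it back
    return entry                   # step 5: yield the (possibly new) entry

# A nested lvalue access like %h{'a'}{'b'}{'c'} = 42 needs one such
# expansion per intermediate level:
data = {}
fetch_vivify(fetch_vivify(data, 'a'), 'b')['c'] = 42
print(data)   # {'a': {'b': {'c': 42}}}
```

The point is that none of these steps can be skipped in the general case: without knowing statically that the intermediate containers exist, the existence test and conditional store must be emitted for every level of the access.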
2. Develop a clean model for expressing HLL assignment and bind semantics commonly used by HLLs. Currently Parrot's container/value and lvalue/rvalue semantics are seriously convoluted. My preference at this point would be to simply implement fetch/vivify opcodes while a plan for a cleaner model is being developed. (Given the depth of the problems involved, I sincerely doubt a new model can be ready prior to 2.6.)
3. Currently there are few mechanisms in Parrot for being able to build constant HLL-specific data structures and have them frozen into bytecode. Instead, the data structures always have to be rebuilt at runtime. Indeed, this is one of the primary reasons why Rakudo startup takes so long -- it's having to manually build a lot of the tables and data structures needed to perform compilation and execution. For every Perl 6 subroutine (including all of the operators and built-in methods), at program startup we have to run through a bunch of initialization code that builds the signature data structure for each sub and attaches it to the sub. (In our new version of Rakudo we're going to be doing a lot more of our constants initialization lazily at runtime, but it would be far better if we didn't have to do it at runtime at all.)
4. Change Parrot's release expectations and support policies to recognize the reality of HLL and Parrot development as they exist today, and not as they might exist at some point in the future when things are "stable".
Official Parrot policy is that HLL developers and library writers are supposed to target "the latest supported (6-month) release"; sometimes we give lip service to the idea that HLLs should at least limit themselves to using monthly releases; the reality for Rakudo is that the absolute longest we've been able to make use of any Parrot monthly release is five days. (The average is about two days, although sometimes the time is actually measured in hours.)
The current reality is simply this: big changes to Parrot typically land in the hours following a monthly release. Thus even though an important Parrot feature needed by a HLL might be completed in a branch in early April, it won't appear in trunk until after the April release, it won't be in any "official" Parrot release until the following May, and it won't be in a "supported release" until the following July. If a deprecation was involved, it's not in a supported release until the following January.
I'm not saying we should change the way changes get committed to Parrot; I am saying we should recognize that at this stage of HLL development on Parrot, six weeks, three months, and nine months are awfully long times for HLL developers to be waiting to officially release products that make use of new core Parrot features. Any decisions or statements that Parrot makes that are based on HLL developers following the official policy are frankly ludicrous.
5. Come up with a good way to force a sub to exit, rolling up the intermediate call chain in the process. Essentially this comes down to invoking a sub's return continuation, except that we also have to make sure that any exit handlers for intermediate blocks in the chain are also executed along the way. (For that matter, we have to figure out what subroutine exit handlers might look like in order to accommodate this.)
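The desired behavior can be sketched like this (a Python model, not Parrot's mechanism -- here the "return continuation" is approximated by a control exception, and `with_exit_handler`, `ForcedExit`, and the frame names are all hypothetical): the forced exit unwinds through intermediate frames, each frame's exit handler still fires, and the targeted outer frame receives the value.

```python
class ForcedExit(Exception):
    """Control exception standing in for invoking a return continuation."""
    def __init__(self, value):
        self.value = value

ran = []   # records which exit handlers actually fired, in order

def with_exit_handler(name, body):
    try:
        return body()
    finally:
        ran.append(name)          # handler fires even while unwinding

def inner():
    raise ForcedExit('result')    # force the whole chain to exit

def middle():
    return with_exit_handler('middle', inner)

def outer():
    try:
        return with_exit_handler('outer', middle)
    except ForcedExit as e:
        return e.value            # the target frame catches the exit

print(outer(), ran)   # result ['middle', 'outer']
```

The ordering matters: handlers run innermost-first as the chain rolls up, which is the property any real Parrot design for this would need to guarantee.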
6. Figure out a way for lexical symbols to refer to something other than a PMC. Right now all lexical symbols must be PMCs, which severely limits the ways in which a HLL can try to influence code generation that makes use of lexical variables in Parrot. For Rakudo in particular, it means we have no real mechanism for expressing "my int $a" such that $a refers to an integer register and is also available in nested lexical scopes.
I'm sure other HLL folks can come up with more. If you're wondering why we haven't mentioned some of these before now, it's because many of these are far less critical than other features we've needed, and those critical needs weren't being addressed in a timely fashion.