|Version 3 (modified by bubaflub, 4 years ago)|
Find an existing test under t/ that is written using the Perl test modules.
Parrot tests written in Perl tend to have a single test that generates multiple lines of output; each of those output lines needs to become a single test in the PIR model.
For example, here's an old-style single test with two outputs.
pir_output_is( <<'CODE', <<'OUT', 'string isa and pmc isa have same result' );
.sub main
    .local pmc class, obj
    class = new 'Class'
    obj = class.'new'()
    $I0 = isa obj, 'Object'
    print $I0
    print "\n"
    .local pmc cl
    cl = new 'String'
    cl = 'Object'
    $I1 = isa obj, cl
    print $I1
    print "\n"
.end
CODE
1
1
OUT
To convert this to a PIR test, simply strip off the surrounding Perl wrapper and keep the heredoc body; give the sub a unique name, for example one based on your test description.
.sub string_isa_and_pmc_isa_have_same_result
    .local pmc class, obj
    class = new 'Class'
    obj = class.'new'()
    $I0 = isa obj, 'Object'
    print $I0
    print "\n"
    .local pmc cl
    cl = new 'String'
    cl = 'Object'
    $I1 = isa obj, cl
    print $I1
    print "\n"
.end
Next, we need to convert the output checks to use the subs from PIR's test_more library:
.sub string_isa_and_pmc_isa_have_same_result
    .local pmc class, obj
    class = new 'Class'
    obj = class.'new'()
    $I0 = isa obj, 'Object'
    ok( $I0, 'isa Class instance an Object' )
    .local pmc cl
    cl = new 'String'
    cl = 'Object'
    $I1 = isa obj, cl
    ok( $I1, 'isa String instance an Object' )
.end
Then we need a harness: a main sub that declares the plan and runs this sub with its two tests:
.sub main :main
    .include 'include/test_more.pir'

    plan(2)

    string_isa_and_pmc_isa_have_same_result()
.end
Finally, be sure to update the coda to be a PIR coda rather than the Perl one.
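As a sketch, the PIR coda usually looks like the block below; the exact expected text is checked by Parrot's distro tests, so it is safest to copy it verbatim from an existing PIR test file rather than from here.

```
# Local Variables:
#   mode: pir
#   fill-column: 100
# End:
# vim: expand shiftwidth=4 ft=pir:
```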
Here is a brief list of some potential stumbling blocks and ways around them:
- A number of tests confirm proper error reporting. This can be done in PIR in three ways. The simplest is to use dies_ok(), throws_like() or throws_substring().
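For example, a minimal sketch using throws_substring(), which compiles and runs a heredoc of PIR code and checks that the thrown exception's message contains the given substring (the sub name here is made up for illustration):

```
.sub malformed_complex_string_throws
    throws_substring( <<'CODE', 'Complex: malformed string', 'malformed string throws' )
    .sub main
        $P0 = new 'Complex'
        $P0 = "q + 3i"
    .end
CODE
.end
```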
- You can also create an exception handler and check the exception message:
pasm_error_output_like( <<'CODE', <<'OUTPUT', "Malformed string: real part" );
    new P0, 'Complex'
    set P0, "q + 3i"
    end
CODE
/Complex: malformed string/
OUTPUT
.sub exception_malformed_string__real_part
    new P0, 'Complex'
    push_eh handler
    set P0, "q + 3i"
    pop_eh
  handler:
    .local pmc exception
    .local string message
    .get_results (exception)
    message = exception['message']
    is( message, 'Complex: malformed string', 'Complex: malformed string' )
.end
- Finally, you can create an EventHandler PMC and check for the appropriate error type; see t/pmc/ro.t for an example. This method may be preferable when the error type matters more than the message text.
- Some tests create new classes and add methods to those namespaces. In a consolidated file you will need to make sure the new class names don't collide (you'll probably find a number of Foo and Bar classes), and you will need to make sure you return to the root namespace after the test completes.
.namespace  # Return to the root namespace
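Putting those two points together, a test that needs a helper class might look like the following sketch (the class and sub names are made up; pick names unlikely to collide with other tests in the consolidated file):

```
.namespace [ 'IsaTestHelper' ]

.sub 'greet' :method
    say 'hello'
.end

.namespace  # Return to the root namespace

.sub helper_class_has_greet_method
    .local pmc obj
    obj = new [ 'IsaTestHelper' ]
    $I0 = can obj, 'greet'
    ok( $I0, 'helper class responds to greet' )
.end
```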
- Many tests are skipped based on the operating system. You can test for this in PIR with the following:
    .include "iglobals.pasm"
    .local pmc config_hash, interp
    interp = getinterp
    config_hash = interp[.IGLOBALS_CONFIG_HASH]
    $S0 = config_hash["os_name"]
    eq $S0, "MSWin32", win32fail
    ...
    .return()
  win32fail:
    skip( 10, 'skipping ___ tests under MSWin' )