The profiling runcore requires an unusual testing strategy for a number of reasons. This page exists to list those reasons and to ensure that the eventual testing framework addresses all of them.

Issues

randomness from timing information (solved)

A profile will contain randomness in the form of timing information. It does not make sense to test for absolute values, since tests must be able to succeed on slow machines as well as fast ones. It may make sense to test relative timing information, but even that is questionable. The best approach may be to let the runcore emit a canonical form of the profile in which all data that vary between runs (timing information, memory addresses) are replaced with constants. This would also allow testing that times were added correctly and would simplify sanity checking. Note: this problem seems to be solved. See docs/dev/profiling.pod for the solution to timing randomness and to control flow differences caused by hash seed randomization.
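As a rough illustration of what such a canonicalization pass might look like, here is a minimal Perl sketch, assuming profile lines use the {x{key:value}x} field format shown in the example further down; the function name and the exact substitutions are assumptions for illustration, not Parrot's actual implementation:

# Hypothetical canonicalizer: rewrites the fields that vary between
# runs (times, memory addresses) to fixed constants so that two
# profiles of the same program compare equal with a plain string match.
sub canonicalize_profile {
    my ($profile) = @_;
    $profile =~ s/\{x\{time:\d+\}x\}/{x{time:1}x}/g;   # constant timing
    $profile =~ s/0x[0-9a-fA-F]+/0xCONSTANT/g;         # constant addresses
    return $profile;
}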

data volume

The profile for even a short PIR program will be non-trivial in size. It must be easy for the testing code to specify which part of a profile it wants to test. Ideally the testing code would also avoid using PGE so that profiling tests could be run as part of coretest.

One solution is to use annotations to delimit the part of the code to be tested. The testing code could examine the first chunk of profiling data between two predefined annotations (e.g. .annotate "begin_profiling_test", 1 ... .annotate "end_profiling_test", 1). This would allow testing the profile of an arbitrary contiguous subset of a piece of PIR code. Example:

pir_delimited_profiling_output_is(<<'PIR', <<'PROFILE', "...");
.sub main :main
  say "im not in ur profile"
  .annotate "begin_profiling_test", 1
  say "HELO"
  .local int i
  i = 3
 loop:
  dec i
  if i > 0 goto loop
  say "BYE"
  .annotate "end_profiling_test", 1
  noop
  i = 9
.end
PIR
OP:{x{line:4}x}{x{time:1}x}{x{op:say}x}
OP:{x{line:6}x}{x{time:1}x}{x{op:set}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:dec}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:lt}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:dec}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:lt}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:dec}x}
OP:{x{line:8}x}{x{time:1}x}{x{op:lt}x}
OP:{x{line:-1}x}{x{time:1}x}{x{op:say}x}
PROFILE

Possible strategy for dealing with a big fat batch of profiling data: have a matcher that can take an array similar to the one below and determine whether a profile matches it, starting from any point. The primary question is what kinds of patterns I'll want to look for and how powerful the pattern matcher will need to be. Also, this is starting to sound a lot like a grammar. That may be a clue, especially since nqp-rx is distributed with Parrot. A sketch of such a matcher follows the example below.

[
  [type=op, count=1, data={name=say, line=4}],  #must have 'say' on line 4
  [type=?,  count=*],                           #0 or more lines of stuff I don't care about
  [type=op, count=2, data={name=say, line=10}], #another 'say' on line 10
]
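Here is a minimal Perl sketch of one way such a matcher could work; the record format, the helper names, and the restriction that wildcard entries always use count=* are all assumptions for illustration, not an existing Parrot API:

# Parse "OP:{x{line:4}x}{x{time:1}x}{x{op:say}x}" lines into records.
sub parse_profile {
    my ($text) = @_;
    my @records;
    for my $line (split /\n/, $text) {
        next unless $line =~ /^OP:/;
        my %f = $line =~ /\{x\{(\w+):([^}]*)\}x\}/g;
        push @records, { name => $f{op}, line => $f{line} };
    }
    return \@records;
}

# Match the pattern starting at record $pos; wildcard entries
# (type '?', count '*') backtrack over any number of records.
sub match_from {
    my ($records, $pos, $pattern, $pidx) = @_;
    return 1 if $pidx > $#$pattern;            # whole pattern consumed
    my $p = $pattern->[$pidx];
    if ($p->{type} eq '?') {
        for my $skip ($pos .. scalar @$records) {
            return 1 if match_from($records, $skip, $pattern, $pidx + 1);
        }
        return 0;
    }
    for (1 .. $p->{count}) {                   # count consecutive op records
        my $r = $records->[$pos++] or return 0;
        return 0 unless $r->{name} eq $p->{data}{name}
                     && $r->{line} == $p->{data}{line};
    }
    return match_from($records, $pos, $pattern, $pidx + 1);
}

# "Starting from any point": try every offset into the profile.
sub profile_matches {
    my ($records, $pattern) = @_;
    for my $start (0 .. $#$records) {
        return 1 if match_from($records, $start, $pattern, 0);
    }
    return 0;
}

It would be called as profile_matches(parse_profile($output), \@pattern), with the pattern entries from the array above written as hashrefs like { type => 'op', count => 1, data => { name => 'say', line => 4 } }. Backtracking keeps the wildcard semantics simple; if the patterns need to grow more grammar-like, an nqp-rx grammar could replace the hand-rolled matching.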