
This is the README file for the new MCell 3.1 testsuite.

Sections:
  1. Running the test suite
  2. Analyzing errors during a run of the test suite
  3. Extending the test suite

=========================
1. Running the test suite
=========================
The test suite is launched using the top-level test script in this
subdirectory, 'main.py'. Run without any arguments, it will attempt to run
the entire test suite. The script depends upon some Python modules which
are checked into the BZR repository under testsuite/system_tests, but it
will automatically find those modules if the main.py script is in its
customary location.

When you run the script, it will look for a configuration file called
'test.cfg' in the current directory. A typical test.cfg will only need two
lines in it:

    [DEFAULT]
    mcellpath = /home/jed/src/mcell/3.1-pristine/build/debug/mcell

This specifies which mcell executable to test. If there are other relevant
testing settings, they can be placed into this file and accessed by the
test suite.

The default configuration file may be overridden using the -c command-line
argument to the script.
The script can take a handful of command-line arguments, which are
summarized briefly in the help message provided when you run:

    ./main.py -h

The script can also take a '-T' argument to specify the location of the
test cases. If you are running the script from this directory, you do not
need to specify the -T flag, as the test cases will be found automatically
with the default layout. For normal usage, the argument to -T should be
the path to the mdl/testsuite subdirectory of the source tree (or a copy
thereof).
By default, all test results will be deposited under a subdirectory
'test_results' created below the current directory. You may override this
using '-r' and another directory name. BE CAREFUL! Presently, whatever
directory is being used for test results will be entirely cleaned out
before the test suite begins to run.

XXX: Might it not be safer to have it move the old directory to a new name?
     Perhaps worth investigating. On the other hand, this will, by
     default, result in the rapid accumulation of results directories.
     Still, probably better to fail to delete a thousand unneeded files
     than to delete one unintended...
main.py also takes '-v' to increase verbosity. Notable levels of verbosity:

    0: (default) Very little feedback as tests are running
    1: Brief feedback as tests are running ('.' for successful tests, 'F'
       for failures, 'E' for errors)
    2: Long feedback as tests are running (single lines indicating
       success or failure of each test, color coded if the output is a
       tty)
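
As an example of combining the options above, a run which uses a
non-default configuration file, test-case location, and results directory
might look something like this (the paths here are purely illustrative):

    ./main.py -c mytest.cfg -T /path/to/mdl/testsuite -r my_test_results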
Use '-l' to display a list of all of the tests and test collections that
main.py "knows" about. Any of these may be included or excluded. For
instance, right now, main.py -l shows:

    Found tests:
      - reactions : Reaction tests
          - numericsuite : Numeric validation for reactions
          - tempsuite : Shortcut to currently developing test
      - macromols : Macromolecule tests
          - numericsuite : Numeric validation for Macromolecules
          - errorsuite : Test error handling for invalid macromolecule constructs in MDL files
      - parser : Parser torture tests
          - vizsuite : VIZ output tests for DREAMM V3 modes
          - oldvizsuite : VIZ output tests for ASCII/RK/DX modes
          - fasttests : All quick running tests (valid+invalid MDL)
              - (quicksuite)
              - (errorsuite)
          - errorsuite : Test error handling for invalid MDL files
          - rtcheckpointsuite : Basic test of timed checkpoint functionality
          - quicksuite : A few quick running tests which cover most valid MDL options
          - allvizsuite : VIZ output tests for all modes (old+new)
              - (vizsuite)
              - (oldvizsuite)
          - kitchensinksuite : Kitchen Sink Test: (very nearly) every parser option
      - regression : Regression tests
          - suite : Regression test suite
The indentation is significant, as it indicates subgroupings within the
test collections. Note that some of the test collection names are
parenthesized. These are collections which are redundant with the other
collections in the suite and will not be included a second time, but were
added to simplify running simple subsets of the entire test suite.
By default, all tests will be run. To select just a subset of the tests,
use:

    ./main.py -i <ident>

where <ident> is a path-like identifier telling which test collection to
run. Given the above output from "main.py -l", valid options include:

    reactions
    reactions/numericsuite
    parser
    parser/fasttests
    parser/fasttests/quicksuite
    parser/allvizsuite
    regression
To exclude a subset of the tests, use:

    ./main.py -e <ident>

Note that the -i and -e arguments are processed from left to right on the
command line, so you may do something like:

    ./main.py -i parser -e parser/allvizsuite

to run the "parser" tests, skipping over the "allvizsuite" subgroup.
Unless some -i arguments are specified, the initial set of included tests
is the complete set, so you may also do:

    ./main.py -e macromols/numericsuite

to run all tests except for the numeric validation tests in the
macromolecules test suite.
Finally, the list of test suites to include may be configured in the
test.cfg file by adding a 'run_tests' directive, consisting of a
comma-separated list:

    test.cfg:
        [DEFAULT]
        run_tests=parser/errorsuite,regression
        mcellpath=/path/to/mcell

This way, the exact set of tests to run can be tailored to the particular
configuration file.
=========================
2. Analyzing errors during a run of the test suite
=========================
If errors are reported during a run of the test suite, you should get an
informative message from the test suite. In many cases, these messages
will be related to the exit code from mcell. For instance, here is an
example run, edited for brevity, of the regression test suite on an old
version of mcell:

    Running tests:
      - regression/suite

    ..
    ..
    ======================================================================
    FAIL: test_010 (test_regression.TestRegressions)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "../mdl/testsuite/regression/test_regression.py", line 128, in test_010
        mt.invoke(get_output_dir())
      File "./system_tests/testutils.py", line 332, in invoke
        self.__check_results()
      File "./system_tests/testutils.py", line 350, in __check_results
        assert os.WEXITSTATUS(self.got_exitcode) == self.expect_exitcode, "Expected exit code %d, got exit code %d" % (self.expect_exitcode, os.WEXITSTATUS(self.got_exitcode))
    AssertionError: ./test_results/test-0010: Expected exit code 0, got exit code 139
    ----------------------------------------------------------------------
    Ran 10 tests in 49.084s

    FAILED (failures=5)
The significant line to look for is the "AssertionError" line, which tells
us two things:

    AssertionError: ./test_results/test-0010: Expected exit code 0, got exit code 139

First, it tells us which subdirectory to look in for the exact details of
the run which caused the failure, and second, it gives us a message which
hints at the problem. In this case, the run exited with code 139, which
corresponds to signal 11 (SIGSEGV).
On a UNIX or Linux machine, the exit codes generally follow this
convention:

    0:       normal exit
    1-126:   miscellaneous errors (MCell always uses 1)
    127:     can't find executable file
    129-255: execution terminated due to a signal (exit_code - 128)
The signals which may kill an execution are listed below. (Note that the
numbering is taken from a Linux machine, and some of the signals may be
numbered or named differently on other systems, though many of these
signal numbers are standard, such as 11 for SIGSEGV. Type 'kill -l' or
see the signals man page on the system in question for more details on
the name <-> number mappings.)

    sig   exit code   name/desc
     1       129      SIGHUP  - unlikely in common use
     2       130      SIGINT  - user hit Ctrl-C
     3       131      SIGQUIT - user hit Ctrl-\
     4       132      SIGILL  - illegal instruction -- exe may be for a different CPU
     5       133      SIGTRAP - unlikely in common use
     6       134      SIGABRT - abort - often caused by an assertion failure
     7       135      SIGBUS  - bus error -- less likely than SIGSEGV, but similar meaning
     8       136      SIGFPE  - floating point exception
     9       137      SIGKILL - killed by user/sysadmin
    10       138      SIGUSR1 - should not happen
    11       139      SIGSEGV - accessed a bad pointer
    12       140      SIGUSR2 - should not happen
    13       141      SIGPIPE - unlikely in context of test suite
    14       142      SIGALRM - should not happen
    15       143      SIGTERM - manually killed by user/sysadmin
Higher-numbered signals do exist, though the numbering becomes less
consistent above 15, and the likelihood of occurrence is also much lower.
In practice, the only signals likely to be encountered, barring manual
user intervention, are SIGABRT, SIGFPE, and SIGSEGV, with SIGILL and
SIGBUS thrown in very rarely.
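
If you want to decode such an exit code programmatically, a minimal Python
sketch of the convention above (not part of the test suite; the helper
names are made up) might look like this:

    import signal

    def signal_name(signum):
        # Build a number -> name map from the signal module.
        names = dict((getattr(signal, n), n) for n in dir(signal)
                     if n.startswith("SIG") and not n.startswith("SIG_"))
        return names.get(signum, "unknown")

    def describe_exit_code(code):
        # Interpret a shell-style exit code using the 128+signal convention.
        if code == 0:
            return "normal exit"
        if code == 127:
            return "can't find executable file"
        if code > 128:
            return "terminated by signal %d (%s)" % (code - 128, signal_name(code - 128))
        return "error exit (code %d)" % code

    print(describe_exit_code(139))   # -> terminated by signal 11 (SIGSEGV)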
Returning to the example above, let's look at the files produced by the
run. Looking at the contents of the test_results/test-0010 directory, I
see:

    total 8
    -rw-r--r-- 1 jed cnl 293 2009-03-13 17:27 cmdline.txt
    -rw-r--r-- 1 jed cnl   0 2009-03-13 17:27 poly_w_cracker.txt
    -rw-r--r-- 1 jed cnl 163 2009-03-13 17:27 realerr
    -rw-r--r-- 1 jed cnl   0 2009-03-13 17:27 realout
    -rw-r--r-- 1 jed cnl   0 2009-03-13 17:27 stderr
    -rw-r--r-- 1 jed cnl   0 2009-03-13 17:27 stdout
The important files here are:

    cmdline.txt: the exact command line required to reproduce this bug
    realout:     what mcell printed to its log file (via -logfile)
    realerr:     what mcell printed to its error file (via -errfile)
    stdout:      what mcell sent to stdout (usually should be empty)
    stderr:      what mcell sent to stderr (usually should be empty)
The contents of cmdline.txt from this run are:

    executable: /home/jed/src/mcell/3.1-pristine/build/debug/mcell
    full cmdline: /home/jed/src/mcell/3.1-pristine/build/debug/mcell -seed 13059 -logfile realout -errfile realerr -quiet /netapp/cnl/home/jed/src/mcell/3.2-pristine/mdl/testsuite/regression/10-counting_crashes_on_coincident_wall.mdl

The full command line should use absolute paths, so you should be able to
use it to start a gdb session which should replicate this exact problem if
it is repeatable.
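
For example, one way to replay the failing run above under gdb, reusing
the command line recorded in cmdline.txt, would be something like the
following, run from within the test_results/test-0010 directory so that
the relative -logfile/-errfile paths resolve the same way:

    gdb --args /home/jed/src/mcell/3.1-pristine/build/debug/mcell -seed 13059 \
        -logfile realout -errfile realerr -quiet \
        /netapp/cnl/home/jed/src/mcell/3.2-pristine/mdl/testsuite/regression/10-counting_crashes_on_coincident_wall.mdl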
Likewise, any files which the test case should have produced will appear
under this directory. This should allow you to examine the reaction and
viz output files for necessary clues.
=========================
3. Extending the test suite
=========================

Generally, adding tests to the test suite requires three steps:

    1. write the MDL files
    2. write the Python code to validate the resultant output from the
       MDL files
    3. hook the test case into the system
-----------
1. writing the MDL files
-----------

Writing the MDL files should be self explanatory, so I will focus on the
other two pieces. It's worth including a few brief notes here, though.

First, the MDL should be written to produce all output relative to the
current directory. This makes it easy for the test suite scripts to
manage the output test results. If additional command-line arguments are
needed, they may be specified in the Python portion of the test case.

Generally, I've started each test case in the test suite with a block
comment which explains the purpose of the test along with an
English-language description of the success and failure criteria. For
instance:
    /****************************************************************************
     * Regression test 01: Memory corruption when attempting to remove a
     * per-species list from the hash table.
     *
     * This is a bug encountered by Shirley Pepke (2008-04-24). When a
     * per-species list is removed from the hash table, if the hash table has a
     * collision for the element being removed, and the element being removed
     * was not the first element (i.e. was the element which originally
     * experienced the collision), memory could be corrupted due to a bug in the
     * hash table removal code.
     *
     * Failure: MCell crashes
     * Success: MCell does not crash (eventually all molecules should be
     *          consumed, and the sim should run very fast)
     *
     * Author: Jed Wing <[email protected]>
     * Date:   2008-04-24
     ****************************************************************************/
This convention seems like a good way to keep documentation on the
individual tests. Some of the subdirectories also have README files
which contain brief summaries of the purpose of the tests and some
details about the individual tests, but I consider those to be secondary.
Still, for any commentary which does not pertain to a single specific
test, or which is too long or complex to include in a block comment in
the MDL file, creating or adding to a README file is probably a good way
to capture the relevant information.
-----------
2. writing the Python code
-----------

I've written utilities to help with validating most aspects of MCell
output; the utilities are not comprehensive, but they allow many types
of validation of reaction data outputs, and make a valiant effort at
validating viz output types. I will produce a reference document for
the various utilities, but will discuss a few of them briefly here.
a. Python unittest quick introduction

The MCell test suite is based on Python's unittest system, for which
I'll give only a quick summary here. More documentation is available
in the Python documentation.

A Python unittest test case is a class, usually with a name which
starts with "Test", and which subclasses unittest.TestCase:

    class TestCase1(unittest.TestCase):
        pass

Inside the test case will be one or more methods with names that start
with "test". This is not optional (at least for the simple usage I'm
describing here); Python unittest uses the method names to
automatically pick out all of the individual tests. Test cases may
also include a setUp method and a tearDown method. Each test is run
bracketed by calls to setUp and tearDown, if they are defined.
Inside the test cases must be a little bit of Python code which tests
some aspect of whatever you are testing. Conditions are checked using
either Python 'assert' statements or various methods inherited from
unittest.TestCase whose names begin with 'fail', such as 'failIfEqual',
'failUnlessEqual', or simply 'fail':

    failUnlessAlmostEqual   # approximate equality
    failIfAlmostEqual       # approximate equality
    failUnlessEqual         # exact equality
    failIfEqual             # exact equality
    failIf                  # arbitrary boolean
    failUnless              # arbitrary boolean
    failUnlessRaises        # check for expected exception
    fail                    # fail unilaterally
The existing tests are (for some reason?) written using assert, rather
than using the 'fail' methods. I'll have to ask myself why I did it
that way next time I'm talking to myself. This means that if you run
the test suite under python -O, the assertions will be stripped and the
tests will not work. At some point soon, I may convert it over to use
the 'fail' methods, which are not disabled by the -O flag.

[Another performance tweak that might help the test suite to run a
little faster would be to enable psyco in main.py. psyco generally
improves performance anywhere from a factor of 2 to a factor of a
hundred. Obviously, it won't make the mcell runs themselves any
faster...]
So, an example test case might be:

    class TestReality(unittest.TestCase):
        def setUp(self):
            self.a = 1234

        def tearDown(self):
            del self.a

        def test_a(self):
            self.failUnlessEqual(self.a, 1234)

        def test_b(self):
            if 3*3 != 9:
                self.fail("3*3 should be 9.")

        def test_c(self):
            # Check equality to 7 decimal places
            self.failUnlessAlmostEqual(1./3., 0.333333333, places=7)
Generally, we don't know the order in which the tests are run, but one
possible order for the method calls above is:

    setUp
    test_a
    tearDown
    setUp
    test_b
    tearDown
    setUp
    test_c
    tearDown
Traditionally, the bottom of a file containing one or more TestCase
subclasses will have the following two lines:

    if __name__ == "__main__":
        unittest.main()

This way, if the file is invoked as a script from the command line, it
will automatically run all of the tests found in this module.
Now, these automatically built test suites are not the only way to
aggregate tests. unittest includes utilities for automatically
scanning a test case for all tests which match a particular pattern.
To do this, create a top-level function (i.e. not a method on your
test case) which looks like:

    def vizsuite():
        return unittest.makeSuite(TestParseVizDreamm, "test")

In the above example, TestParseVizDreamm is the name of a TestCase
subclass, and we've asked unittest to round up all of the methods from
that test case whose names start with 'test' and treat them as a test
suite. You may also create test suites which aggregate individual
tests or other test suites. To do this, again create a top-level
function which looks like:
    def fullsuite():
        # Create a new suite
        suite = unittest.TestSuite()

        # add the test 'test_viz_dreamm1' from the class TestParseVizDreamm
        suite.addTest(TestParseVizDreamm("test_viz_dreamm1"))

        # add the test suite 'vizsuite' defined above in a function
        suite.addTest(vizsuite())

        # return the newly constructed suite
        return suite
These aggregated test suites may be exposed in various ways via the
top-level generic test runner I've written, which I'll explore a
little bit in the next section, and in more detail when I explain how
to hook the new tests into the system.

We will deviate from the established formulas in a few minor ways.
The most important is that:

    if __name__ == "__main__":
        unittest.main()

becomes:

    if __name__ == "__main__":
        cleandir(get_output_dir())
        unittest.main()
b. MCell tests introduction

First, let's take a quick look at one of the regression tests.
We'll start with the simplest possible test -- one which merely
checks that the run didn't crash. To add this, all I did was to add
the MDL files under mdl/testsuite/regression, and then add a brief
function to test_regression.py. The test cases are generally named
XX-whatever.mdl, though that may need to change as the test suite
grows.
The general structure of an MCell test case is:

    1. construct an McellTest object (or a subclass thereof)
    2. populate the object with parameters detailing what we
       consider to be a successful run
    3. call obj.invoke(get_output_dir()) on the object to actually
       run the test

For many purposes, the McellTest class itself will suffice, but in
some cases you may find yourself including the same bits of setup in
a number of different tests, in which case it's probably worth
moving that common setup into a custom subclass of McellTest (see the
sketch after the example below).
Here is an example test drawn from the regression tests:

    def test_010(self):
        mt = McellTest("regression",
                       "10-counting_crashes_on_coincident_wall.mdl",
                       ["-quiet"])
        mt.set_check_std_handles(1, 1, 1)
        mt.invoke(get_output_dir())
As in normal Python unit tests, the exact name of the method doesn't
matter, but it must be a method on a class which subclasses
unittest.TestCase, and the name should start with 'test'.
The arguments you see above in the construction of McellTest are:

    1. "regression": an indication of which section of the test suite
       the run is part of (not really important, but it allows
       overriding configuration options for specific subsections
       of the test suite)

    2. "10-counting_crashes_on_coincident_wall.mdl": the name of the
       top-level MDL file to run (the path is relative to the location
       of the script containing the Python code, and is generally
       in the same directory at present)

    3. ["-quiet"]: a list of additional arguments to give to the run.
       Generally, the run will also receive a random seed, and
       will have its log and error outputs redirected to a file.
       I provide '-quiet' to most of the runs I produce so that
       only the notifications I explicitly request will be turned
       on.
The next line:

    mt.set_check_std_handles(1, 1, 1)

tells the test to close stdin when it starts and to verify that
nothing was written to stdout or stderr. This is almost always a
good idea -- it lets us be sure that all output was properly redirected
either to the log file or to the error file, rather than being written
directly to stdout/stderr using printf/fprintf.
And finally:

    mt.invoke(get_output_dir())

runs the test. Unless otherwise specified (see the reference
document for details), McellTest expects that mcell should exit with
an exit code of 0, so we don't need to add any additional tests.
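
As an aside, if several tests end up repeating this same setup, one way
to factor it out into a subclass of McellTest (a sketch using only the
calls shown above; the class name is hypothetical) would be:

    class QuietRegressionTest(McellTest):
        # Hypothetical helper: every test runs with -quiet and checked std handles.
        def __init__(self, mdlfile):
            McellTest.__init__(self, "regression", mdlfile, ["-quiet"])
            self.set_check_std_handles(1, 1, 1)

    # test_010 above could then be written as:
    def test_010(self):
        mt = QuietRegressionTest("10-counting_crashes_on_coincident_wall.mdl")
        mt.invoke(get_output_dir())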
c. Brief introduction to test utilities

In most cases, our job isn't quite that simple, and in those cases
there are various ways to add additional success criteria. For
instance:

    mt.add_extra_check(RequireFileMatches("realout",
                       '\s*Probability.*set for a\{0\} \+ b\{0\} -> c\{0\}',
                       expectMaxMatches=1))

This statement says that in order for the run to be considered
successful, the file 'realout' (i.e. the logfile output from mcell)
must contain a line which matches the regular expression:

    '\s*Probability.*set for a\{0\} \+ b\{0\} -> c\{0\}'

and we've further specified that it must match at most once. (By
default, it must match at least once, so this means it must match
exactly once.) Again, see the reference document for details on
RequireFileMatches and other similar utilities.
There are also similar utilities for checking various aspects of
reaction data output and similarly formatted files. For instance,
consider:

    mt.add_extra_check(RequireCountConstraints("cannonballs.txt",
                                               [(1, 1, -1, 0, 0,  0),   # 0
                                                (0, 0,  0, 1, 1, -1),   # 0
                                                (0, 0,  1, 0, 0,  0),   # 500
                                                (0, 0,  0, 0, 0,  1)],  # 500
                                               [0, 0, 500, 500],
                                               header=True))

This represents a set of exact constraints on the output file
'cannonballs.txt'.
This matrix:

    [(1, 1, -1, 0, 0,  0),   # 0
     (0, 0,  0, 1, 1, -1),   # 0
     (0, 0,  1, 0, 0,  0),   # 500
     (0, 0,  0, 0, 0,  1)]   # 500

will be multiplied by each row in the output file (after removing
the header line and the "time" column), and each result vector must
exactly match the vector:

    [0, 0, 500, 500]

The file is assumed to have a header line, though no specific check
is made of the header line -- the first line is just not subjected
to the test (because of the 'header=True' directive).
This type of constraint can be used to verify various kinds of
behavioral constraints on the counting. For instance, it can verify
that the total number of bound and unbound molecules of a given type
is constant. Any constraint which can be formalized in this way may
be added just by adding another row to the matrix and another item
to the vector.
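
To make the arithmetic concrete, here is a small sketch (not the test
suite's actual code) of the check that this constraint describes:

    constraints = [(1, 1, -1, 0, 0,  0),
                   (0, 0,  0, 1, 1, -1),
                   (0, 0,  1, 0, 0,  0),
                   (0, 0,  0, 0, 0,  1)]
    totals = [0, 0, 500, 500]

    def row_satisfies_constraints(counts):
        # 'counts' is one data row with the time column already removed.
        for coeffs, expected in zip(constraints, totals):
            if sum(c * x for c, x in zip(coeffs, counts)) != expected:
                return False
        return True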
Similarly, equilibrium may be verified for count files using
something like:

    mt.add_extra_check(RequireCountEquilibrium("dat/01-volume_highconc/V_out.dat",
                                               [500] * 26,
                                               # [25] * 26,
                                               ([25] * 15) + ([500] * 3) + ([25] * 7) + [500],
                                               header=True))
The first argument is the output file path, relative to the result
directory. The second is the expected equilibrium for each of the
columns (again, excluding the time column). The third is the
allowable tolerance for each column. If, after finishing the run,
the mean value of a column differs from the desired equilibrium by
more than the tolerance, the test will be counted as a failure. In
the above case, you can see that 4 of the 26 columns have been
temporarily set to a tolerance of '500' to prevent the test from
failing on 4 cases which fail due to known MCell issues whose fixes
will be somewhat involved.
For both this and the previous check type, you may specify min_time
and max_time arguments to limit the checks to particular segments of
time. For equilibrium, this will restrict the rows over which we
are averaging to find the mean value.
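
Stated as code, the equilibrium criterion amounts to something like the
following sketch (again, not the test suite's actual implementation):

    def columns_at_equilibrium(rows, expected, tolerance):
        # 'rows' are the data rows (time column removed); one mean per column.
        ncols = len(expected)
        means = [sum(row[i] for row in rows) / float(len(rows)) for i in range(ncols)]
        return all(abs(m - e) <= t for m, e, t in zip(means, expected, tolerance))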
For more details on the specific test utilities I've provided, see
the test utilities reference document.
-----------
3. hooking the test case into the system
-----------

The test runner looks in the top-level directory of the test suite for
a test_info.py file. test_info.py may define a few different
variables which determine what tests are included when you run
main.py, and what descriptions are shown when you run './main.py -l'.

The first such variable is:

    subdirs = {
        "macromols"  : "Macromolecule tests",
        "parser"     : "Parser torture tests",
        "reactions"  : "Reaction tests",
        "regression" : "Regression tests"
    }
This specifies that the test suite runner should look in
4 different subdirectories of the directory where test_info.py was
found: macromols, parser, reactions, and regression. It also provides
descriptions for each of these subdirectories, which are displayed in
the '-l' output. Each of these subdirectories should, in turn, have
its own test_info.py file describing the tests (and any further
subdirectories) it contains.
The second such variable is:

    tests = {
        "oldvizsuite"       : "VIZ output tests for ASCII/RK/DX modes",
        "vizsuite"          : "VIZ output tests for DREAMM V3 modes",
        "errorsuite"        : "Test error handling for invalid MDL files",
        "quicksuite"        : "A few quick running tests which cover most valid MDL options",
        "kitchensinksuite"  : "Kitchen Sink Test: (very nearly) every parser option",
        "rtcheckpointsuite" : "Basic test of timed checkpoint functionality"
    }

This gives several named test suites, and descriptions to be displayed
in the '-l' output. These test suites must be imported into the
test_info.py file. The parser tests are defined in test_parser.py, so
I included the following import statements at the top of the file:

    from test_parser import oldvizsuite, vizsuite, errorsuite, quicksuite
    from test_parser import kitchensinksuite, rtcheckpointsuite

Any test suites included in the 'tests' map will be included in the
full test suite.
It may be desirable in some cases to define test suites which run a
subset of the functionality, and this brings us to the third variable
in a test_info.py file:

    collections = {
        "allvizsuite" : ("VIZ output tests for all modes (old+new)", ["oldvizsuite", "vizsuite"]),
        "fasttests"   : ("All quick running tests (valid+invalid MDL)", ["errorsuite", "quicksuite"]),
    }

This defines two collections of tests. The suites collected here are
already included via the 'tests' variable above, but we may want to
have other aggregations that we can quickly and easily run. Each entry
in the collections map has the form:

    name : (desc, [suite1, suite2, ...])

These collections will NOT be included in the top-level test suite by
default, as the individual component tests are already included; the
collections would, thus, be redundant.
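
Putting the pieces together, the parser subdirectory's test_info.py
would then contain something along these lines (assembled from the
fragments shown above):

    from test_parser import oldvizsuite, vizsuite, errorsuite, quicksuite
    from test_parser import kitchensinksuite, rtcheckpointsuite

    tests = {
        "oldvizsuite"       : "VIZ output tests for ASCII/RK/DX modes",
        "vizsuite"          : "VIZ output tests for DREAMM V3 modes",
        "errorsuite"        : "Test error handling for invalid MDL files",
        "quicksuite"        : "A few quick running tests which cover most valid MDL options",
        "kitchensinksuite"  : "Kitchen Sink Test: (very nearly) every parser option",
        "rtcheckpointsuite" : "Basic test of timed checkpoint functionality"
    }

    collections = {
        "allvizsuite" : ("VIZ output tests for all modes (old+new)", ["oldvizsuite", "vizsuite"]),
        "fasttests"   : ("All quick running tests (valid+invalid MDL)", ["errorsuite", "quicksuite"]),
    }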
Now, the above 'tests' and 'collections' come from the parser
subdirectory (whose tests are defined in test_parser.py). The test
suite system will create several collections of tests which may be
included in or excluded from the run:

    parser                      # all suites included in the 'tests' from parser/test_info.py
    parser/oldvizsuite
    parser/vizsuite
    parser/errorsuite
    parser/quicksuite
    parser/kitchensinksuite
    parser/rtcheckpointsuite
    parser/allvizsuite
    parser/fasttests

This means that you do not need to explicitly create a separate test
suite just to include all of the suites in a directory.
Note that more levels of hierarchy are possible -- parser could define
'subdirs' if we wanted to break the parser tests into several
different subdirectories, for instance. But each level of the
directory hierarchy must have its own test_info.py file.