The purpose of this package is to run a series of tests in sequence or in parallel, and thus to test server behavior under load.
We combine JUnit, a customized JUnit Runner implementation, the priint::pdf renderer, the Java library version of the pdf renderer and a Java client stub for the Comet3 SOAP service.
JUnit is used to collect tests, trigger test execution and assemble test results.
The customized Runner implementation CometTestSequenceRunner is responsible for resolving test dependencies, observing test execution restrictions and running tests in a particular order.
The pdf renderer and Java library version of the renderer are used to trigger typical series of requests when generating pages or loading / synchronizing placeholder content.
The Java client stub for the Comet3 SOAP service can be used to add functionality not covered by the priint::pdf renderer, such as checking out and checking in documents. This helps to simulate typical InDesign Desktop sessions.
All in all, the solution can be used both to test
... with many tests connected to the same data source and running concurrently and / or in sequence.
... with some or many tests running concurrently and repeatedly executing complex client tasks (such as page generation).
When connected to a PublishingServer and using Publication Planner functionality (which involves checking out and in of documents), the PublishingServer must support W2ML documents (at least, some W2ML documents must be available on the server).
This does not produce exactly the same load on the server side as dealing with InDesign documents; in particular, the network load is significantly lower, since W2ML documents are usually much smaller than InDesign documents.
Other aspects may also differ slightly from a "real concurrent multi-user" environment. Depending on the usage scenario, it may be necessary to add further tests to produce a realistic amount of load.
The loadtest package requires a Unix-like shell environment.
This can be the Mac OS X terminal, any Linux terminal or a Windows cygwin environment.
The loadtest scripts cannot be run in a Windows PowerShell or Command Prompt.
In addition to the JAR files and shell scripts included in this package, you need the native pdf renderer libraries (see the -native-libs parameter below).
You can write and run your own tests with this package. We recommend using the custom test runner implementation described below to handle test dependencies and run tests in a particular sequence.
Run the test shell script at the top level of the loadtest package folder to run a test.
The test script requires some input and parameters:
Input is read from STDIN, where each line of input denotes one single test to run.
If fed from a file or a pipe, these tests are run in parallel.
Typical usage:
printf " -testrun 1\n-testrun 2\n-testrun 4\n" | \
./test -config ~/config.xml -native-libs /opt/priint/comet_pdf
This will run three tests in parallel, with the parameters of testruns 1, 2 and 4 defined in a file named config.xml in your home directory, provided that the native renderer libraries comet_pdf.#.(dylib|so|dll) are installed in /opt/priint/comet_pdf.
You may find it more comfortable to use a file as input to control test execution.
This can be just a text file with the definition of one test per line.
The corresponding file for the example above would be
(test-control.txt)
-testrun 1
-testrun 2
-testrun 4
The tests can then be run with
./test -config ~/config.xml -native-libs /opt/priint/comet_pdf < test-control.txt
Run
./test -help
for a list of available parameters and their default or current values. They are also explained in the following:
-native-libs (Mandatory)
    Relative (from the current working directory) or absolute path to the folder containing the native pdf renderer libraries.
    Default value: -
    Example: -native-libs ../../priint-renderer-pdf-4.2-R30782/lib/native/mac

-config (Recommended)
    Relative or absolute path of the test configuration file.
    Default value: demo/configuration.xml
    Example: -config ~/config.xml

-out (Recommended)
    Relative or absolute path of the folder for log and report output. If the folder does not exist, it will be created.
    Default value: out
    Example: -out ../test-reports/test17

-iterations (Recommended)
    How many times to run the (parallel) tests.
    Default value: 1
    Example: -iterations 5

-scope (Recommended)
    Scope of the test. Can be either package or class. In the first case, all tests found in a particular package are run, in the latter only a specific class. If run with the CometTestSequenceRunner runner, all dependencies specified by the CometTestSequence annotation are resolved and dependent tests are run in the specified order.
    Default value: class
    Example: -scope package

-target (Recommended)
    Qualified name of the target class or package.
    Default value: com.priint.integration.pubserver.RunScriptCode
    Example: -target com.johndoe.tests

-classpath (Optional)
    If you implement your own tests, you must specify the path (jar or folder) where the classes can be found. This classpath will be inserted in front of the standard loadtest package classpath.
    Default value: empty
    Example: -classpath ~/dev/workspace/johndoe-tests/build

-settings (Optional)
    Path to an alternative renderer config folder. The loadtest package ships with a default configuration (font mappings, color profiles, hyphenations etc.) in a folder named config at the top level of the package. If rendering test tasks require other settings, it is best to keep them outside of the loadtest package and specify the location via the -settings parameter.
    Default value: config
    Example: -settings /opt/priint/profiles
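Putting several of these parameters together, a typical invocation might look like this (a sketch using the example values from above together with the control file from the previous section):

./test -config ~/config.xml \
    -out ../test-reports/test17 \
    -iterations 5 \
    -scope package -target com.johndoe.tests \
    -classpath ~/dev/workspace/johndoe-tests/build \
    -native-libs /opt/priint/comet_pdf < test-control.txt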
The following arguments are usually set by the control file or read from stdin. Defaults can be defined as parameters of the test script, but (like all other parameters) they may be overridden by testrun definitions:
-project (Optional)
    The default project identifier.
    Default value: empty
    Example: -project defaultProject

-workstation (Optional)
    The default workstation identifier.
    Default value: empty
    Example: -workstation dev

-testrun (Optional)
    The identifier of the default testrun.
    Default value: empty
    Example: -testrun 17
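For example, a default workstation can be passed to the test script and overridden by a single test definition read from stdin (a sketch; the identifiers dev and qa are illustrative):

printf " -testrun 1\n-testrun 2 -workstation qa\n" | \
./test -config ~/config.xml -native-libs /opt/priint/comet_pdf -workstation dev

Here the first test runs with the default workstation dev, while the second overrides it with qa.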
The following arguments are defined by the wrapper scripts; they should not be defined as parameters of the test script, nor should they be set in particular testrun definitions:

-iteration
    Counter for the current iteration.

-thread
    Counter for the current thread.

Run the report shell script at the top level of the loadtest package to summarize the results of previously run tests.
Typical usage:
./report -out ../test-reports/test17 -type summary -format table
Run
./report -help
for a list of available parameters and their default or current values. The report script takes the following parameters:
-out (Recommended)
    This is the folder specified via the -out parameter of a previous test. This folder is scanned for JUnit report files, and certain information is extracted from these files.
    Default value: out
    Example: -out ~/test-results-100-threads
-type (Recommended)
    The type of report to be created. Possible values are:
    overview: short overview of how many tests were run, how many succeeded and how many failed
    summary: summary for each test method (succeeded, failed) and times needed
    yrammus: same as summary, with columns / rows flipped
    by-testrun: test method results by testrun (identifier of the test definition), can be parameterized with a value selector
    by-thread: test method results by thread, can be parameterized with a value selector
    by-iteration: test method results by iteration (1st time, 2nd time etc.), can be parameterized with a value selector
    by-project: test method results by project, can be parameterized with a value selector
    by-workstation: test method results by workstation, can be parameterized with a value selector
    all: print all (one table per report type), can be parameterized with a value selector
    The latter six types allow specifying the value shown in the output table, see the -value argument.
    Default value: summary
    Example: -type all
-value (Recommended)
    Which value to output (for by-testrun, by-thread, by-iteration, by-project and by-workstation). Possible values are:
    total: the number of tests run
    succeeded: the number of succeeded tests
    failed: the number of failed tests
    min-time: minimal time needed
    max-time: maximum time needed
    avg-time: average time needed
    sum: total time needed
    all: print all (one table per value selector)
    Default value: summary
    Example: -value total
-format (Recommended)
    User-friendly ('table') or parser-friendly ('data') output. Depending on this value, the result is written to stdout as a nicely formatted table or as CSV records (delimiters are configurable, see below).
    Default value: table
    Example: -format data
-field-delimiter (Optional)
    Only for data output. Character or string to use as the field delimiter.
    Default value: ,
    Example: -field-delimiter \;

-record-delimiter (Optional)
    Only for data output. Character or string to use as the record delimiter.
    Default value: \n
    Example: -record-delimiter \t

-column-width (Optional)
    Only for table output. Width (in characters) of the table columns. Note: the first ("header") column gets double the width. Usually this leads to more readable results, though this may change in the future.
    Default value: 10
    Example: -column-width 15
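For example, to extract parser-friendly average times per thread from a previous run, the parameters can be combined like this (a sketch; the output folder is the example value from above):

./report -out ~/test-results-100-threads -type by-thread -value avg-time \
    -format data -field-delimiter \;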
Tests usually require configuration, which is done in a (mandatory) configuration file.
An example can be found in the demo folder of the loadtest package.
Tests can be configured on three levels: project, workstation and testrun.
Some fixed configuration properties, such as the service endpoint, project name and user credentials, can be configured on any level. Other properties can be defined as key value pairs, also on any level, for the global namespace or for particular packages or classes.
A project typically defines the kind of connection (SOAP, ODBC, XML), the service endpoint and the project name.
These are settings applicable to any test connected to this project.
Projects are referred to by identifier (which should match the pattern [a-zA-Z_][a-zA-Z0-9_]*); this is the value provided as the -project parameter when running tests.
Example
<test-configuration>
  <projects>
    <project identifier="aio"
             service="http://localhost:40080/CometBridge/Comet3Service"
             project="aio"
             type="soap"
             context="">
    </project>
  </projects>
  <!-- ... -->
</test-configuration>
Besides properties (explained later), no other attributes or elements are allowed in the project configuration.
Workstations are also referred to by identifier, again an arbitrary string matching the pattern [a-zA-Z_][a-zA-Z0-9_]*. This is the value provided as the -workstation parameter when running tests.
In the workstation configuration, we can override any of the project settings, including properties.
Besides that, the attributes shown in the following example are supported:
Example
<test-configuration>
  <!-- ... -->
  <workstations>
    <workstation identifier="dev" user="dev" password="priintdev" />
  </workstations>
  <!-- ... -->
</test-configuration>
Finally, parameters specific to a certain test run are configured in the testrun sections. Again, in a testrun we can override any of the values defined for the project or workstation.
A testrun is also referred to by identifier; this is the value provided as the -testrun parameter when running tests.
Usually, for test runs we define parameters specific to a certain test. These can be defined via free key value pairs (properties), as shown in the following example:
Example
<test-configuration>
  <!-- ... -->
  <testruns>
    <testrun identifier="ABC_1">
      <properties>
        <property key="documentId" value="613571662" />
        <property key="scriptCode">
          int main() {
            showmessage("Hello world");
            return 0;
          }
        </property>
      </properties>
    </testrun>
    <!-- ... -->
  </testruns>
</test-configuration>
As seen in the example, the property value can be defined as an attribute or as the content of the property element. If both are defined, the content overrides the attribute value.
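For example, in the following (hypothetical) property definition both forms are present, so the element content wins:

<property key="documentId" value="613571662">
  <!-- the content overrides the attribute: documentId resolves to 99 -->
  99
</property>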
Properties can be defined on any level (project, workstation, testrun). Properties with the same key and scope (namespace) defined on a more specific level override those defined on a more general level, as the following example shows:
Example
<test-configuration>
  <projects>
    <project ...>
      <properties>
        <!-- since the build script is available project-wide, it's a
             good idea to define this property on project level -->
        <property key="buildScriptId" value="1003" />
      </properties>
    </project>
  </projects>
  <workstations>
    <workstation ...>
      <properties>
        <!-- on some workstations, we want to run an alternative build script -->
        <property key="buildScriptId" value="63876288" />
      </properties>
    </workstation>
  </workstations>
  <testruns>
    <!-- the following testrun uses the workstation or project buildScriptId -->
    <testrun identifier="normal_run" ...>
    </testrun>
    <!-- the following testrun overrides the workstation and project setting
         (e.g. to test some error conditions or similar) -->
    <testrun identifier="invalid_script_id" ...>
      <properties>
        <property key="buildScriptId" value="63876290" />
      </properties>
    </testrun>
  </testruns>
</test-configuration>
Properties can be defined as global properties or with package or class scope.
When accessing properties, the closest match is used.
This means: properties available in the testrun always hide properties defined on a higher level, no matter whether they are defined globally or for a specific package or class.
Example
<test-configuration>
  <projects>
    <project ...>
      <properties>
        <property key="productId" value="0815" />
        <property key="productId" targetClass="com.customer.integration.BulkTest" value="123" />
      </properties>
    </project>
  </projects>
  <testruns>
    <testrun identifier="1">
      <properties>
        <property key="productId" value="456" />
      </properties>
    </testrun>
    <testrun identifier="2">
      <properties>
        <property key="productId" value="789" targetClass="com.customer.integration.FailureTests" />
      </properties>
    </testrun>
  </testruns>
</test-configuration>
There are several situations to consider:
Property is accessed in testrun "1": in this case, the value defined for this testrun (456) always overrides the project-wide setting.
Property is accessed in testrun "2": if the client class is com.customer.integration.FailureTests, 789 will be found. Otherwise, if the client class happens to be com.customer.integration.BulkTest, 123 is found. Otherwise, the value of productId is 0815.
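For illustration, a minimal sketch of such a lookup in test code, using the configuration API shown later in this document (the class OrderTests is hypothetical; it matches neither of the targetClass entries above):

package com.customer.integration;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

import com.priint.comet.loadtest.config.ConfigurationLoader;
import com.priint.comet.loadtest.config.runtime.Configuration;

public class OrderTests {

    @Test
    public void checkProductId() {
        Configuration configuration = ConfigurationLoader.getConfiguration();
        // run in testrun "2": no targetClass matches OrderTests,
        // so the project-wide default "0815" applies
        assertEquals("0815", configuration.getProperty("productId", this));
    }
}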
Conclusion:
Properties allow you to define test-implementation-specific values.
The different levels (project, workstation and testrun) allow you to define appropriate defaults and override them in certain situations.
The different scopes help to prevent naming conflicts if common keys are used in several test cases.
Besides the (very few) tests included in the package, your own test implementations can be added and executed using the test shell script.
Tests must be defined as non-static Java methods annotated with @org.junit.Test.
The following example shows a simple JUnit test:
package my.tests;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyTest {

    @Test
    public void doSomething() {
        int calculatedValue = 17;
        // will raise an exception and cause the test to fail, if
        // calculatedValue differs from 5:
        assertEquals(calculatedValue, 5);
    }
}
There is not much more to know about this. You can define one or several tests in one class; each must have a unique name and accept no parameters.
If run with the standard JUnit test runner, all test methods found in one class are executed in any order (this means: in no particular order).
Normal JUnit tests should not depend on each other.
If a test class requires certain initialization, this can be managed via the @BeforeClass and @Before annotations.
If a test class requires certain cleanup after running the tests, this can be achieved via the @AfterClass and @After annotations.
See the JUnit documentation for more information.
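A minimal sketch of these lifecycle annotations in use (class and method names are illustrative):

package my.tests;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class LifecycleTest {

    @BeforeClass
    public static void initOnce() {
        // runs once, before any test method of this class
    }

    @Before
    public void init() {
        // runs before each test method
    }

    @Test
    public void doSomething() {
        // ...
    }

    @After
    public void cleanup() {
        // runs after each test method
    }

    @AfterClass
    public static void cleanupOnce() {
        // runs once, after all test methods of this class
    }
}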
You probably want to access properties defined in the configuration file (see above) in your tests.
This can be done as follows:
package my.tests;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

import com.priint.comet.loadtest.config.ConfigurationLoader;
import com.priint.comet.loadtest.config.runtime.Configuration;

public class MyTest {

    @Test
    public void doSomething() {
        Configuration configuration = ConfigurationLoader.getConfiguration();
        // guess which value productId will have according to the example
        // above if we are in testrun "2"?
        String productId = configuration.getProperty("productId", this);
        assertEquals(Integer.parseInt(productId), 815);
    }
}
If tests must be run in a particular order, you should use the CometTestSequenceRunner (as we also do for the tests included in the loadtest package).
Usage is very simple:
package my.tests;

import org.junit.Test;
import org.junit.runner.RunWith;

import com.priint.comet.test.lib.CometTestSequenceRunner;

@RunWith(CometTestSequenceRunner.class)
public class MyTest {

    @Test
    public void doSomething() {
        // ...
    }
}
If tests depend on other tests to be run before or after, this can be controlled via the @CometTestSequence annotation.
Usage is also simple, though there are some pitfalls.
Example
package my.tests;

import org.junit.Test;
import org.junit.runner.RunWith;

import com.priint.comet.test.lib.CometTestSequence;
import com.priint.comet.test.lib.CometTestSequenceRunner;

@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    before = { Login.class }
)
public class MyTest {

    @Test
    public void doSomething() {
        // ...
    }
}
This will cause all tests of the Login class to be executed before MyTest.
Example 2
package my.tests;

import org.junit.Test;
import org.junit.runner.RunWith;

import com.priint.comet.test.lib.CometTestSequence;
import com.priint.comet.test.lib.CometTestSequenceRunner;

@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    after = { OtherTest.class }
)
public class MyTest {

    @Test
    public void doSomething() {
        // ...
    }
}
This will cause all tests of the OtherTest class to be executed after MyTest (which implies: not before).
Common Pitfall
Consider the following situation:
package my.tests;

import org.junit.Test;
import org.junit.runner.RunWith;

import com.priint.comet.test.lib.CometTestSequence;
import com.priint.comet.test.lib.CometTestSequenceRunner;

@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    after = { Logout.class }
)
public class Login {

    @Test
    public void login() {
        // ...
    }
}
This is an obvious requirement: any time after we have logged in, we want to log out again.
With the MyTest class implemented as shown above (with both before and after attributes), the execution order would be:
Login, Logout, MyTest, OtherTest
This is certainly not what you want. Since MyTest requires Login, we most probably do not want to run Logout before MyTest (and probably also not before OtherTest).
MyTest therefore should be annotated as follows:
@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    before = { Login.class },
    after = { OtherTest.class, Logout.class }
)
public class MyTest {

    @Test
    public void doSomething() {
        // ...
    }
}
and OtherTest as follows:
@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    before = {
        // in case OtherTest is run as a single test
        Login.class
    },
    after = { Logout.class }
)
public class OtherTest {

    @Test
    public void doSomethingElse() {
        // ...
    }
}
This will result in the following execution order (as probably intended):
Login, MyTest, OtherTest, Logout
The strict attribute allows you to define classes which must be run in exactly the given order before executing this test.
The methodOrder attribute allows you to specify the order in which the methods of this test class are run, if more than one method is annotated with @org.junit.Test.
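A sketch of the strict attribute, assuming it takes an array of test classes analogous to before and after (the class PrepareData is illustrative; refer to the Javadoc for the exact signatures of strict and methodOrder):

@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    // assumed semantics: run Login first, then PrepareData,
    // in exactly this order, before any test of this class
    strict = { Login.class, PrepareData.class }
)
public class MyTest {

    @Test
    public void doSomething() {
        // ...
    }
}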
The @CometTestRestriction annotation allows you to specify further restrictions when running a test.
Example
@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    after = { Logout.class }
)
@CometTestRestriction(stopOnFailure = true)
public class Login {

    @Test
    public void login() {
        // ...
    }
}
This enforces that all tests are aborted immediately if the Login class fails for any reason. You should define this restriction only for classes (tests) which are really essential for subsequent tests.
The value property of the @CometTestRestriction annotation allows you to control whether and how often the methods of a test class may be executed.
Example
@RunWith(CometTestSequenceRunner.class)
@CometTestSequence(
    after = { Logout.class }
)
@CometTestRestriction(value = Restriction.ONLYONCE, stopOnFailure = true)
public class Login {

    @Test
    public void login() {
        // ...
    }
}
Refer to the Javadoc API documentation for more possible values and their meaning.
Please refer to the Javadoc API documentation of the com.priint.comet.testlib package for more information.
Once you have implemented your own tests (whether using the CometTestSequenceRunner or not), you can integrate them in the test execution via the test script parameters.
Example: run MyTest including all dependent tests in the correct order. The test script invocation would then be as follows:
./test \
    -classpath ~/Workspace/artifacts/my-test.jar \
    -scope class -target my.tests.MyTest \
    -native-libs ... etc.