Creating and Managing Test Runs

Test Runs log instances of the execution of a defined set of Test Cases. Test Runs may log the results of manual tests, or of automated tests run as part of a build. Test Runs also contain information about their status, the results of the test execution instance they represent, the test environment, and the build tested.

An individual Test Run is based on a Test Run Template. Polarion products that support test management come with pre-defined templates which can be customized or used as the basis for custom Test Run Templates.

Customizing Test Run Templates

Test Run Templates can save a great deal of time. You configure the properties of the template once, and all Test Runs based on the template are then pre-configured with the same properties (some of which Test Run creators can modify in individual Test Runs).

To access Test Run Templates:

  1. Open the project containing the Test Cases that will be tested with Test Runs.

  2. In Navigation, click Test Runs. The Test Runs page opens. The top section of the page displays a table of existing Test Runs.

  3. Click the Manage Templates button on the toolbar of the Test Runs page. The table of Test Runs is replaced by a table of Test Run Templates.

    (You can filter Test Run Templates using query bubbles.)

    Figure 20.4. Filter the Test Run Template List


A Test Run Template is essentially a Test Run, with configurable properties and page design. The main difference is that a template itself cannot be executed; only Test Runs created from it can be. So your goal in customizing a Test Run Template is to set it up exactly as if it were a Test Run. You can customize any existing Test Run Template by modifying the Test Run properties, by modifying the underlying Widgets and other elements that make up the Test Run page, or both.

Any existing Test Run Template or Test Run can serve as the basis for a Test Run Template. To create one from an existing template, simply create a copy of the existing template and customize that. (If you want to customize one of Polarion's default Test Run Templates, always create a copy for customization so you don't lose the default one.) To create a Test Run template from an existing Test Run, simply open it, click (Actions) and select Save As Template.

To create a copy of a Test Run Template to customize:

  1. Access Test Run Templates as described above, and in the table of Test Run Templates, select the template you want to duplicate and customize.

  2. On the toolbar of the detail of the selected template (bottom part of the page), click (Actions) > Duplicate Template or Save as Template. (The text of the menu item depends on the Polarion version you are using, and/or the type of the template's page: LiveReport or Classic Wiki.)

  3. In the dialog, provide a unique ID for the new Test Run Template and a human-friendly Title and click the Duplicate Test Run Template button.

After creating the duplicate Test Run Template, you can proceed to modify it as described in the next sections. Note that if workflow is configured for the Test Run type, the Duplicate Template action resets the Status to the configured initial status, and invokes the configured initial workflow action, if any.

Editing Test Run Template Properties

Test Run Properties include such data as Test Run type, testing platform and environment, and Test Parameters. Every Test Run created using the modified Test Run Template will have the properties and values as specified in the template. A simple graphical user interface is provided for editing the template properties.

To modify Test Run Template properties:

  1. Access a Test Run Template as previously described.

  2. With the desired Test Run Template selected in the table of templates, click the Properties button on the toolbar of the template detail pane (lower part of the page).

You can edit the Title and the property fields in place, or click the Edit button to place all non-read-only fields into edit mode.

Note that the values for some properties, such as the test Type, are defined in the global or project Testing configuration in Administration. You can select from the configured values, but you cannot change the values themselves. For more information, see Administrator's Guide: Configuring Testing.

In a Test Run template, the Status field is not editable if a workflow configuration for Test Runs exists.

Specifying Test Case Selection

In the Test Run Template properties, it is important to specify the method of populating new Test Runs (created by users and based on the template) with Test Cases to be executed. You do this by setting a value for the Select Test Cases field. The field presents the following list of options:

  • Manually - Creators of new Test Runs based on the template can manually select the Test Cases to be executed, or the author of the Test Run Template may select them in advance, in which case they are pre-selected when a new Test Run is created from the template. When this option is selected, a multi-valued field named Project Span appears. If you only need to select Test Cases from the current project, you can leave that field empty. If you want to be able to select Test Cases defined in one or more other projects, specify those projects in Project Span. If, in addition to the other projects, you also want to select Test Cases from the current project, you must include it as one of the values in the Project Span field.

  • By Query on Create - Test Cases are selected by a query that runs automatically when a new Test Run based on the template is created. The set of waiting Test Cases is static: it does not change if new Test Cases meeting the query criteria are added after the Test Run is created but before it is executed, and none are added if the Test Run is executed multiple times, even if new matching Test Cases have been added to the system since the last execution.

    When you select this option, you must specify Lucene query syntax in the Query field to select the Test Cases that testers must execute in Test Runs based on the template. You must specify this query before you can save changes to the template. (An example query appears after this list of options.)

  • By Query on Execute - Test Cases are selected by a query that runs automatically when a Test Run based on the template is executed. The set of Test Cases waiting to be executed is not static: waiting Test Cases are taken from the current set of Test Cases in the system meeting the query criteria at the time a user executes the Test Run, and can even change during execution each time the view refreshes.

    When you select this option, you must specify Lucene query syntax in the Query field to select the Test Cases that testers must execute in Test Runs based on the template. You must specify this query before you can save changes to the template.

    When this option is selected, a multi-valued field named Project Span appears. If you only need to select Test Cases from the current project, you can leave that field empty. If you want to be able to select Test Cases defined in one or more other projects, specify those projects in Project Span. If, in addition to the other projects, you also want to select Test Cases from the current project, you must include it as one of the values in the Project Span field. The query entered in the Query field should be valid for all specified projects; otherwise some desired Test Cases may not be added to the Test Run, or some irrelevant Test Cases may be added. For example, if you specify a query like (type:testcase AND status:active AND testType:(manual user)), but in one of the projects the test type is configured as manual ui rather than manual user, Test Cases from that project will not be added to your Test Run.

  • From LiveDoc on Create - All Test Cases contained in the specified Document will be selected for execution when a new Test Run based on the template is created. The set of waiting Test Cases of the Test Run is static and will not change if new Test Cases are added to the Document after a Test Run is created but before it is executed, nor will any be added to the set if the Test Run is executed multiple times.

    Specify the Document in the required Document field of the Test Run Template properties. Select a Document from the drop-down menu, preview it by clicking the icon beside it, or just start typing a word contained in the Document's name. Documents located outside the default Space are listed with the Space name before the Document name, separated by a "/". You can optionally specify a query in the Query field to select a subset of the Test Cases contained in the specified Document. For example, you might query for Test Cases with a Status value that is not "Draft" (e.g. NOT status:draft).

    Figure 20.5. Dynamic Document Selector


    If the list of Documents is extensive and covers several Spaces, just type the Space name. Only Documents within that Space will be listed.

  • From LiveDoc on Execute - Test Cases appearing in the specified Document, including Test Cases referenced from another Document, will be selected for execution when a Test Run based on the template is executed. The set of Test Cases waiting to be executed is not static. Waiting Test Cases are taken from the current set of Test Cases appearing in the Document at the time a user executes the Test Run, and could even change during the execution each time the view refreshes. Referenced Test Case items can be from projects other than the Test Run and the specified Document, or frozen to a particular revision of the referenced Test Case, or both.

    Specify the Document in the required Document field of the Test Run Template properties. (See From LiveDoc on Create for the drop-down box's behavior.) You can optionally specify a query in the Query field to select a subset of the Test Cases contained in the specified Document. For example, for a smoke test, you might query for open, manual-type Test Cases (e.g. status:open AND type:manual).

  • Automation - Test Runs based on the template will be created by an automated process, and results of test execution will be imported to Polarion.
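For illustration, a selection query for either of the query-based options might look like the following. This is a sketch only: the status and testType values shown are assumptions, and the actual values depend on your project's Testing configuration.

    type:testcase AND status:approved AND testType:automated

If the Test Run spans several projects (via Project Span), prefer field values that are configured identically in all of them, or broaden the query with a grouped clause such as testType:(manual OR automated).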

For the By Query on Execute and From LiveDoc on Execute selection types (dynamic Test Runs):

  • When a dynamic Test Run is closed by setting a previously unset Finished On date (usually by a workflow action), the waiting Test Cases are stored in the Test Run as empty test records and their number no longer changes.

  • When a dynamic Test Run is reopened by clearing a previously set Finished On date (usually also by a workflow action), the number of waiting Test Cases becomes live again and corresponds to the current number of unexecuted Test Cases matching the Test Run query.

  • The query to retrieve or count awaiting Test Cases will therefore differ for finished and unfinished dynamic Test Runs.

Modifying a Test Run Template's Page

Test Runs and Templates are implemented as LiveReport type Pages. You can use the full range of editing features and Widgets to customize the layout and content of a Test Run Template.

To modify the underlying Page of a Test Run Template:

  1. In Navigation, select Test Runs.

  2. On the page toolbar, click the Manage Templates button. Then, in the table of Test Run Templates, select the one you want to modify.

  3. On the toolbar of the detail pane, click (Actions) > Customize Test Run Page.

An instance of the Page Designer opens in the detail panel of the Page. You can then modify the Page according to your needs. For more information, see the User Guide topic Working With Pages.

Creating a Test Run

To create a new Test Run in your project:

  1. In Navigation, click Test Runs. The Test Runs page loads, displaying a table of existing Test Runs.

  2. In the toolbar of the Test Runs table, click the Create button to launch the Create Test Run dialog.

  3. In the ID field, enter a unique ID. (The value you enter must not be used by another Test Run in the project.)

    Note that the ID field does not appear if your project has been configured to generate Test Run IDs automatically.

  4. Fill in the Title field with a human-friendly title for the Test Run. The title appears together with the ID in macro output, report Widgets, object enumerations, combo box lists and generated defects in the Polarion user interface, and also in email notifications. If you leave Title empty, only the ID appears.

    Providing a title is recommended to help users differentiate Test Runs in a listing, even if they have auto-generated IDs.

  5. In the Template field, select the desired Test Run template from the list of available templates.

  6. Click the Create Test Run button to create the new Test Run.

Modifying Test Run Properties

The values in the properties of a Test Run are inherited from the Test Run template from which it is derived. If you have the necessary permissions, you can modify a Test Run's properties. For example, if the template specifies one or more projects from which Test Cases can be selected in the Project Span field, then for a particular Test Run you might decide to modify the project selection, or clear the field entirely so that Test Cases can only be selected from the current project.

To edit Test Run properties:

  1. Select the Test Run in the table at the top of the Test Runs page.

  2. Click the Properties button in the detail of the selected Test Run.

  3. After saving all changes to the properties, click the Back button on the toolbar to close the properties.

Selecting (Planning) Test Cases

If a newly created Test Run is based on a Test Run Template that has been configured for manual selection of Test Cases (see Specifying Test Case Selection), you must manually select the Test Cases before you or others execute the Test Run. You can select the Test Cases using the Work Item Tracker, or select them in a Document (if they are defined in a Document).

Note that the Test Cases to be tested by the Test Run can be defined in the current project and/or in one or more other projects. For example, a project named Standard Tests might define a set of Test Cases that must always be tested for every application, while the current project contains Test Cases specific to an application or variant. When Standard Tests is specified in the Project Span field of the Test Run properties, you will be able to select Test Cases from it. If you also want to select Test Cases from the current project, you must include it in the Project Span field.

TIP

The Project Span property must specify all projects from which Test Cases are to be selected, including the current project, if applicable. If all Test Cases are in the current project, you can leave Project Span empty.

Tracker Approach

The following steps assume there is an existing Test Run configured for manual selection of Test Cases.

To begin the selection:

  1. Open your project if not already open.

  2. In Navigation, select Test Runs, and in the upper pane of the Test Runs page, select the Test Run for which you want to select Test Cases.

  3. In the Test Run detail pane, click (Actions) and choose Select Test Cases. (This action appears only when manual selection of Test Cases is specified in the Test Run properties.)

This action opens the Table view of the Work Items topic and displays the Test Run Planning sidebar.

Figure 20.6. Select Test Cases for a Test Run


Manual selection of Test Cases for a Test Run - add or remove "waiting" Test Cases

You can browse the Test Cases listed in the table and select those that testers should execute when executing the Test Run. The sidebar updates to show the total number of Test Case executions waiting. If some Test Cases have multiple Iterations defined for the current Test Run, these are reflected in the "Waiting" count, which may then be larger than the number of items selected in the table. Items in the table that are currently added as "Waiting" show a colored highlight at their left border (see figure above).

The items appearing in the table are the result of a query, which is built and run as a result of the Select Test Cases action of the Test Run. The query appears in the visual query builder on the page, and you can optionally modify it so that the table contains all the Test Cases you want for the Test Run.

To add a Test Case to the queue of waiting Test Cases, click the icon in its row in the table. To remove an added Test Case, click the icon that appears in its row (again, refer to the previous figure).

When you have added all Test Cases that should be executed by the Test Run, click the Save Test Run button at the bottom of the Test Run panel.

If you want to further configure the Test Run, or execute it, select the Test Runs topic in Navigation, then select the Test Run in the list of Test Runs.

TIPS

  • Clicking the Test Run ID shown in the Test Run Planning sidebar navigates directly back to the Test Run page.

  • Clicking the Waiting label in the sidebar filters the table to list only waiting Test Cases. Clicking Executed filters the table to list only executed Test Cases.

  • You can select multiple items in the table (use the check box on each row) and add or remove them from the Waiting list all at once using the respective buttons in the Test Run Planning sidebar.

  • The Navigation scope can change depending on the setting in the Test Run's Project Span property. If multiple projects are specified in that setting, the scope switches to Repository. If just one project is specified, the scope switches to that project. If Project Span is empty, the scope remains in the current project.

  • To return to the Test Run after selecting Test Cases, regardless of current Navigation scope, click on the Test Run name in the Test Run Planning sidebar. If not already open, the project containing the Test Run opens and the Test Run page loads in the browser.

Document Approach

The following steps assume there is an existing Test Run configured for manual selection of Test Cases.

To begin the selection:

  1. Open your project if not already open.

  2. In Navigation, select Test Runs, and in the upper pane of the Test Runs page, select the Test Run for which you want to select Test Cases.

  3. In the Test Run detail pane, click (Actions) and choose Select Test Cases.

    The Table view of the Work Items topic opens.

  4. If it is not already opened, open the Test Run Planning sidebar using the Show Sidebar menu on the Document Editor toolbar.

  5. Go to Navigation, navigate to the space containing the Document that defines the Test Cases to be selected for the Test Run, and select that Document. The Document opens in the Document Editor, and the Test Run Planning sidebar remains open.

    If the Navigation node contains too many Documents you will need to locate and open the desired Document via the space Index page or search. After opening the Document, open the Test Run Planning sidebar as previously described.

You are now ready to add Test Cases from the Document to the Test Run. Please refer to the following figure:

Figure 20.7. Select Test Cases in a Document


You can optionally populate a Test Run with selected Test Cases contained in a Document

  • To add a Test Case, select it by clicking anywhere in its content (selection is indicated by the gold left border), then either click the icon in the left margin, or the Add button in the Test Run Planning sidebar.

    The count of waiting Test Cases is incremented in the Test Run panel. If the Test Case defines multiple Iterations, the "Waiting" count reflects the total number of Test Case executions, which may be higher than the number of Test Cases added to the Test Run's queue.

  • To add multiple Test Cases, select all of them (Shift + Click in most browsers), then click the Add button in the Test Run Planning sidebar.

    The count of waiting Test Case executions is incremented in the panel by the number of selected items plus additional Iterations, if any. Note that multiple items must be contiguous in order to be selected at once.

  • To remove a previously added item from the queue of waiting Test Cases, select the Test Case in the Document body, and click either the icon in the left margin, or the Remove button in the Test Run Planning sidebar.

  • After adding all the Test Cases you want to be tested in the Test Run, click the Save Test Run button in the sidebar.

    Testers can now execute the Test Run using the Execute Test button on the Test Run page.

TIP

When the Document is opened as described above, it is filtered by a query which is built and run as a result of the Select Test Cases action of the Test Run. You can optionally modify this filter query to increase or reduce the number of items displayed in the Document.

Planning From a Document Revision

Test managers may sometimes need to populate a Test Run with Test Case items from a specific version of a test specification Document that has been reviewed and approved.

To plan Work Items from a specific Document revision:

  1. Open the desired Document revision in the Document Editor (see User Guide topic Viewing Document History).

  2. With the revision open in the Document Editor, create a new branched Document (Actions Menu > Branch). See the User Guide topic Branching Documents for more detailed information.

    For example, suppose a Document named Test Specification is under continuous development, and a Plan, Version 1.0, should be populated from Revision 1001, which has been reviewed and approved for production. From Revision 1001 of Test Specification, you create a branched Document named, for example, Test Specification Version 1.0 - Frozen.

  3. Create a new Test Run Template named "Rev. 1001 Tests" (for example), selecting Empty as the template base.

  4. Set the Select Test Cases field to From LiveDoc on Create, and in the Document drop-down box specify the branched Document created in Step 2 above. (Just start typing a word in the Document name and select it from the options that appear.) Save changes to the template and click Manage Templates to return to the Test Runs page.

  5. Create a new Test Run based on the new Test Run Template you just created, edit the properties if necessary, and save it.

    When this Test Run is executed, the Test Cases will be those from the branched Document that was branched from Revision 1001 of the specification.

Editing an Existing Test Run

After creating the new Test Run (or at some other time if something changes before the Test Run is executed), you may want to edit some Test Run properties such as the test environment or Test Run Parameters. After selecting the Test Run you want to edit:

  • If you are not already on the Test Runs page, click Test Runs in Navigation to access it. Then select the Test Run you want to edit in the table at the top of the page. Optionally, use the Visual Query Builder at the top of the page to filter the list of Test Runs. For example, you might query for Manual type tests based on a specific Test Run Template. (For more information on the visual query builder tool, see Searching Work Items.)

  • If you want to edit the Test Run's properties, click the Properties button. If any fields of a Test Run have been configured as required fields, a red asterisk appears and the field must be filled before you can save the Test Run properties.

  • If you want to modify the Test Run's page (layout, etc.) click (Actions) and select Customize Test Run Page.

Adding Attachments to a Test Run

You can attach any type of file to a Test Run, either while in the process of creating it, or later on when logging the result. For example, you might attach a file containing test result output from some external testing tool. To add or manage attachments, click (Attachments) on the Test Run toolbar to scroll the page to the Attachments field.

If your system is configured to support searching on attachment properties and content, you can search on properties of Test Run attachments. In the Test Runs topic, you can search on attachment title, attachment author ID or name, attachment file name, content, date updated, and size.

In site search (in Navigation), you can search attachments to Test Runs via full-text search.

Executing a Test Run Using Polarion

The execution of Test Cases in a Test Run has been revamped in Polarion 18.

See the Classic Test Execution View section to configure Polarion to use the old Test Execution view workflow.

Classic test execution view will become obsolete

To give current testers some time to transition to the new Test Execution View workflow, we will still support the Classic Test Execution View for a few more releases. Recommendation: Get to know the improved workflow outlined in the Executing a Test Run Using Polarion section.

Automated tests are run by the Polarion build tool according to the build configuration, so you do not need to explicitly invoke Test Run execution. This section discusses manual execution of Test Runs in Polarion. If manual testing will be done externally to Polarion, see Executing Test Runs Offline Using Excel later in this chapter.

As previously mentioned, a Test Run represents an instance of executing a set of test cases, which have been defined in Test Case type Work Items. At some point after a Test Run is created, a tester manually performs the testing steps defined in the Test Cases of the Test Run. During the actual execution, the result of each testing step is recorded, as is the outcome or "verdict" of each Test Case (passed, failed, or blocked).

Even if you use some external tool to execute Test Cases, testers can still execute a Test Run to log the results of external testing so that testing history and traceability are maintained in Polarion across the full lifecycle.

TIP

It is possible that another user could modify the Test Case or Test Steps while you have paused test execution. If this happens, the Resume Test Case dialog (if Test Case was revised) or Retest Test Case dialog (if Test Steps were revised) appears.

  • Resume Test Case warns you that you are no longer testing what was planned. For example, the Test Case was planned from the HEAD revision and was subsequently updated while testing was in progress, or the revision in which the Test Case was planned changed (e.g. it was frozen to a different revision in a Test Run where Test Cases are selected via the From LiveDoc on Execute option). In both scenarios, after clicking the Resume Test Case button in the dialog, the Test Steps remain the same and you continue working in the revision in which you started.

  • Retest Test Case means that the Test Case (including the Test Steps) has been updated and retesting needs to be done in the new version. If you click the Retest Test Case button, the Test Case is reloaded and execution is automatically started.

To execute a manual Test Run:

  1. Open the Test Run's page by clicking Test Runs in Navigation and selecting the Test Run in the table of Test Runs. You can use the visual query builder on the Test Runs page to filter a large number of Test Runs. For example, you might query for only Manual type tests based on a specific Test Run Template.

  2. The Test Run's details appear below the list.

  3. Click on the green Execute Test button.

  4. A table listing the Test Cases currently associated with the Test Run appears.

    (The Outline Number column appears if the document referred to by the Test Run has Outline Numbering turned on.)

    Buttons

    • refreshes the table and/or the selected Test Case. (The top one refreshes both.)

      When a Test Case is modified (executed, restarted, or paused), the columns and the Test Run statuses ("Waiting", "Executed", etc.) are refreshed automatically.

    • Next skips items that can't be executed and jumps directly to the next executable Test Case.

    • cycles through all items, including those that can't be executed (e.g. Headers).

    • launches a table of your queried results where you can add or remove Test Cases, if the Test Run type allows it.

    • opens the Execution Settings, where you can define Auto Advance and Auto Start options for the next Test Case. (See Auto Advance to and Auto Start the next Test Case below.)

  5. (Optional) Click the icon, then Tile Panes Vertically to display the table and the selected Test Case side by side.

    (Or Tile Panes Horizontally to return to the default view, with the table above the Test Case.)

  6. Use the query builder above the table to search for individual Test Cases or sort by fields like Severity.

    (You cannot set a default query, but you can save queries for quick reuse.)

    You can also quickly narrow down what Test Cases appear by clicking on a Status or Test Case result.

    (Click on a selected status or result a second time to remove the filter.)

  7. If a Test Run item that cannot be executed (a Heading or another Work Item that is not an executable Test Case) is selected in the table, an informational message appears below the table.

  8. The table also supports iterations.

    • Iterations are shown as multiple rows in the tree view.

    • Only iterations matching the query/filter are shown.

    • Child items are shown under the last (displayed) iteration of the parent, if the parent has any.

    • If no iteration of a parent matches the current query/filter, but some children do, the parent is shown as a row representing the Test Case, but is not bound to any iteration or Test Record. (It does not show any status, etc.)

    Note:

    Adding iterations directly into the execution view is not possible. They can only be added prior to execution using the Select Test Cases command.

  9. Right-click on a column to customize what columns are visible or hidden.

    Set Polarion to Remember your Table Settings

    If you add or remove columns by right-clicking and checking or unchecking them, or sort columns by clicking on them, your settings will be lost if the browser is refreshed.

    (For a persistent setup, click More, make your changes in the Customize Table dialog and click Apply.)

  10. Click More in the column context menu to launch the Customize Table dialog. There you can completely customize what columns are visible, how they are ordered and sorted, and even whether or not to display icons. (In the columns that support them.)

    • When you add a custom field ID, use the following syntax: testCase.fieldID

      (Just the fieldID is fine for the Work Item table, but not for the Test Run table; see the sketch after this list.)

    • When configuring sorting, the Test Record fields must precede any Test Case fields in significance.

    • The Execute Test button widget supports a Sort Test Cases by parameter (for Test Runs that are not defined by a LiveDoc), but it has no effect in the execution view. (It is used in the Classic Test Execution View.) Instead, administrators should define the desired default configuration in the Customize Table dialog. (The default sorting configuration is ascending by Test Case ID.)

    • Administrators can also set repository wide Test Record table settings by editing the testrecord-table-settings.xml in the repository.

      (repo/.polarion/testing/configuration/testrecord-table-settings.xml)
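    For illustration, the column ID and sorting rules above might be applied as follows. This is a sketch only: riskLevel is an assumed custom field, and result is assumed to be the ID of a Test Record field; the actual IDs depend on your configuration.

      testCase.riskLevel   (a Test Case custom field, referenced from the Test Run table)
      riskLevel            (the same field when used in the Work Item table)

    When configuring sorting with these IDs, a Test Record field such as the assumed result would be placed before testCase.riskLevel, per the significance rule above.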

  11. (Optional) Send a custom link to what you are testing to another user.

    1. Click on a Test Case in the table.

    2. Hover over the Test Case's title below the table and click the icon when it appears.

    3. Click Copy Link to this Test Record.

    4. The browser link is copied to your clipboard.

    Browser Link Properties

    • URL parameters also support iterations. (For example, in record=projectID/testCaseId$2, the $2 is the iteration number. A sketch of a full parameter appears after these notes.)

    • The link can also be opened in a Baseline, or combined with a Test Run revision to show the corresponding Test Record for a Test Case retested at a later date.

    • The target Test Record appears highlighted in the table, even when the filter settings are set to hide it.

    • If the target Test Record cannot be found in the Test Run, a warning message appears.

    • The record's parameter remains in the URL until the selection is changed by the user.

    • Click on the selected Test Case or Heading's title when it appears below the table to open it in a new browser tab.
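    As an illustration, a copied link's record parameter might look like the following. The IDs are hypothetical and the exact shape of the surrounding Test Run page URL depends on your server, so treat this only as a sketch:

      ...testrun?id=SmokeTest_42&record=MyProject/MP-1033$2

    Here MP-1033 is the Test Case ID and $2 selects its second iteration.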

  12. Click Next to jump to the next available Test Case in the table.

  13. Click Start to execute the selected Test Case.

    • If the selected Test Case was already started but Paused, Start will be replaced with Resume.

    • If the Test Case has been run but needs to be run again, a Retest button appears.

    • If the button is disabled, hover over it for a tooltip that explains why.

  14. The Test Case's Test Steps appear below and the timer begins or resumes.

    • Click Save as Paused to pause and save the Test Step without a verdict.

    • You can enter and format rich text using the Document Editor toolbar and even add images directly into the test step result field.

    • Click Add Attachment to attach a reference file.

    • Hover over an already attached file and click to remove it.

    Limited Rich Text Editing for Offline Test Execution

    Offline test execution does not support the following rich text editor features:

    • Images inside the text.

    • Hyperlinks within the text.

    • Tables within cells.

    • Bullet lists.

    These limitations also apply to the Excel Round-trip feature.

  15. Click a Test Step Verdict (Passed, Failed, or Blocked) and continue on to the next test step.

  16. (Optional) Click Recent to select from a list of previously entered Test Step or Test Case verdict comments.

    The Recent menu displays comments from any Test Run (by any user) that contains the selected Test Step or Verdict.

    (Comments from a previous iteration in the same Test Run are also included.)

    Note

    The selectable comments support rich text formatting and offer an image preview, but only insert a text placeholder for the images.

  17. Below the final step you'll see a timer that displays the total Duration of the entire Test Case.

    A timer is also visible at the top of the Test Case, just below the table.

  18. Click on a verdict button and click Save as [Result].

    (The [Result] will change according to the verdict you selected.)

  19. If the project was created using any of the following templates, a customizable verdict banner appears below the Test Case's title:

    (Templates: Agile, E-Library, V-Model, Drive Pilot, V-Model QA, Drive Pilot QA.)

    TIP

    Administrators can also configure what appears in the section above the Test Steps.

    See Administration > Testing > Test Execution Form for details.

  20. With the default configuration, any Test Case Saved as Failed will automatically create a Defect Work Item.

    Click on the defect in the table to launch it in a new browser window.

  21. You can manually proceed to the next Test Case by clicking Next under the test execution table.

Test Execution and Baselines

You can view Test Runs executed in a previous Baseline but you cannot execute them.

However, the Waiting button can be used to see the last Test Cases that were waiting at that point in the history and open the Test Run's execution view.

(Only for LiveReport based Test Runs created with the execution view implemented in Polarion 18.)

Empty Test Step Values

Empty Test Step values are omitted to save space and increase readability.

Auto Advance to and Auto Start the next Test Case

You can edit Test Run Execution Settings so that Polarion automatically jumps to, and even starts the next Test Case.

To edit the Execution Settings:

  1. Click the icon at the top of the active Test Case and select Execution Settings.

  2. The Execution Settings dialog appears.

  3. Check or uncheck the automatic execution options of your choice and click Save.

  4. Your selection is saved and tied to your personal user preferences, so the settings are the same for all of your Test Runs.

Executing Test Cases Without Steps

If your project is not configured for Test Steps, or if a Test Case does not contain multiple steps in a Test Steps table, then the panel in the lower section of a Test Case contains only the Test Case Verdict field and buttons to mark the overall Test Case verdict (Passed, Failed, or Blocked).

You can still enter and format a comment using the Document Editor toolbar, add images directly into the test step result field, and click Add Attachment to attach files.

Figure 20.8. Test Case Without Steps


Log only verdict for Test Case as a whole

If the Test Case does not have individual Test Steps:

  1. Click the Start Test Case button.

  2. Perform all the necessary testing activity to determine the result of the Test Case.

Execute or Monitor Multiple Test Runs Simultaneously

Multiple Test Runs can be run simultaneously, even when they contain the same Test Cases.

  1. Launch two browser tabs with different Polarion Test Runs.

  2. The Test Run's name is displayed at the top of the table and the browser URL will include the selected Test Run.

Signing Executed Test Cases

Projects may be configured to require testers to electronically sign executed Test Cases. If your project has been so configured, then when you mark the Test Case Verdict, a dialog displays requesting you to sign by entering your username and password. See also: Administrator's Guide: Signatures for Test Case Execution.

Reuse of Defect or Issue Items

By default, any unresolved Defect (or equivalent type Work Item) from any Iteration in the current Test Run, or from any Iteration in the previous Test Run (or the previous Test Run in the same group), is reused for failures of the same Test Case if the Test Case failed in the most recent execution.

In Test Runs that use Test Cases from different projects, this works differently. A Defect item can be reused only if it is linked to a test record in a Test Run in the same project as the current Test Run. This also applies to the Failed in previous and Failed in group reuse strategies. For example, suppose "Project A" uses the always defect reuse strategy. You execute a Test Case in a Test Run in this project, and it fails. The previous Test Run in which the same Test Case was executed is in "Project B"; that previous execution failed, creating a Defect item that is not yet resolved. That Defect, in "Project B", is not reused. Rather, a new one is created in the project specified in the Testing configuration of "Project A".

Show an Executed Test for a Finished Test Run

  1. Select a completed Test Run from the table of available Test Runs.

  2. Click the icon, then Test Run Records, or scroll down to the bottom of the page and click Browse All Test Run Records.

  3. The Test Run's Test Record list appears.

  4. Click on a Test Case to view its details.

    (Test Cases cannot be retested in a finished Test Run.)

Classic Test Execution View

You can switch back to the classic execution view method described below by setting the following system property:

testManagement.useOldExecutionView=true 
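A minimal sketch of where this setting typically lives, assuming a standard installation in which system properties are read from the polarion.properties configuration file (the exact file location varies by platform, and a server restart may be required for the change to take effect):

    # polarion.properties - re-enable the classic Test Execution view
    testManagement.useOldExecutionView=true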

Classic test execution view will become obsolete

To give current testers some time to transition to the new Test Execution View workflow, we will still support the Classic Test Execution View for a few more releases. Recommendation: Get to know the improved workflow outlined in the Executing a Test Run Using Polarion section.

To execute a manual Test Run with the classic Test Execution View:

  1. Open the Test Run's page by clicking Test Runs in Navigation and selecting the Test Run in the table of Test Runs. If there are many Test Runs, you can use the visual query builder on the Test Runs page to build a query to filter the table. For example, you might query for only Manual type tests based on a specific Test Run Template.

  2. Click the Execute Test button. The Work Items table opens in a new browser tab, with the Test Cases selected by the Test Run configuration listed in the table.

    Figure 20.9. Launching a manual test


    Launch manual test execution on the Test Run page

  3. Select the first Test Case in the table. Its detail loads in the lower half of the page.

  4. Scroll down to the Execute Test section. If your project is configured for Test Steps support, this section contains the Test Steps from the Test Steps table in the Description field, and you will be able to log the results of each individual test step.

  5. When you are ready to begin executing the test, click the Start Test Case button.

    While executing the Test Steps for a Test Case, you may find it necessary to pause the test execution and resume testing later. After you start executing Test Steps, the Pause Test Case button appears in the Test Steps section. When you pause the test execution, the results of completed Test Steps are stored and the pause is recorded in the Test Run history. To resume testing, click the Resume Test Case button.

TIP

It is possible that another user could modify the Test Case or Test Steps while you have paused test execution. If this happens, the Resume Test Case dialog (if Test Case was revised) or Retest Test Case dialog (if Test Steps were revised) appears.

  • Resume Test Case warns you that you are no longer testing what was planned. For example, the Test Case was planned from the HEAD revision and was subsequently updated while testing was in progress, or the revision in which the Test Case was planned changed (e.g. it was frozen to a different revision in a Test Run where Test Cases are selected via the From LiveDoc on Execute option). In both scenarios, after clicking the Resume Test Case button in the dialog, the Test Steps remain the same and you continue working in the revision in which you started.

  • Retest Test Case means that the Test Case (including the Test Steps) has been updated and retesting needs to be done in the new version. If you click the Retest Test Case button, the Test Case is reloaded and execution is automatically started.

Executing Test Steps

If the Execute Test panel has multiple Test Steps:

  1. Click the Start Test Case button.

  2. Execute the first test step, note the result of the step in the Actual Result field, and click the button that indicates the result of the step: Passed, Failed, or Blocked. Note that you can select from a list of previous results for the current step by clicking Recent.

    You can optionally attach a file (a screenshot image, for example) to the test step. An Add Attachment button appears when you click on the Actual Result field. For more information, see the User Reference topic: Test Step Attachments.

    Figure 20.10. Test Case with Test Steps


    Log results of individual Test Steps and overall Test Case result

    Caution

    If the Test Case's Description field contains a table of Test Steps, it is possible to merge cells in the table, but this is not recommended.

    (It may render the Test Steps unusable.)

  3. Repeat the above for all Test Steps listed in the section.

  4. If there are multiple Iterations of the Test Steps, continue testing until you have completed all steps in all Iterations, or until something fails.

  5. Optionally enter a comment in the Test Case Verdict field and click on the button marking the overall Test Case result: Passed, Failed, or Blocked. If your project has been set up to require it, you will be asked to log your electronic signature before proceeding further. A record of the signed test execution is then visible in test records, displayed in Test Cases, Test Runs, testing report pages, etc.

    Figure 20.11. Test Case Execution Record


    Test Case execution records show who signed

    If there are untested Test Cases in the table, and the Open next queued Test when finished option is selected, the next Test Case in the Test Run is automatically selected and ready for testing to begin on that Test Case. As long as the option is checked, this will happen until all Test Cases in the Test Run have been executed. In Test Cases with Iterations in the steps, the process will proceed through the Iterations before moving on to the next Test Case. The statistics in the Test Run Planning sidebar update to reflect the progress of the testing session.

    Note that it is not required to mark results for any of the Test Steps. You can mark all, some, or none; you need only log the result of the Test Case: Passed, Failed, or Blocked. However, the Actual Result of every step is still written to the test record even if the step has no verdict.

Note

The Navigation scope can change depending on the setting in the Test Run's Project Span property. If multiple projects are specified in that setting, the scope switches to Repository. If just one project is specified, the scope switches to that project. If Project Span is empty, the scope remains in the current project.

To return to the Test Run after selecting Test Cases, regardless of current Navigation scope, click on the Test Run name in the Test Run Planning sidebar. If not already open, the project containing the Test Run opens and the Test Run page loads in the browser.

Executing Test Cases Without Steps

If your project is not configured for Test Steps, or if a Test Case does not contain multiple steps in a Test Steps table, then the panel in the Execute Test section of a Test Case contains only the Test Case Verdict field and buttons to mark the overall Test Case verdict (Passed, Failed, or Blocked). You can also add attachments to the Test Case Verdict using the Add Attachment link in the panel.

Figure 20.12. Test Case Without Steps


Log only verdict for Test Case as a whole

If the Execute Test panel does not have individual Test Steps:

  1. Click the Start Test Case button.

  2. Perform all the necessary testing activity to determine the result of the Test Case.

  3. Optionally enter a comment in the Test Case Verdict field and click on the button marking the overall testing result: Passed, Failed, or Blocked. If there are untested Test Cases in the table, and the Skip to the next Test Case after execution option is selected, the next Test Case in the Test Run is automatically selected and ready for testing to begin on that Test Case. As long as the option is checked, this will happen until all Test Cases in the Test Run have been executed.

Executing an Unplanned Test Case

It is possible to execute a Test Case that has not been selected in a Test Run (i.e. has not been planned for execution in a Test Run). When the Test Case item is open and selected in the Work Items table, you can select a Test Run via the Current Test Run menu in the Test Steps panel of the Execute Tests section of the Test Case. When you click the Start Test Case button, if the Test Case has not been planned in the selected Test Run, a confirmation dialog appears informing you that the Test Case is not planned in the Test Run, and asking you to confirm that you want to execute the Test Case. You can click the Execute button to proceed, or the Cancel button to cancel the execution.

Execute or Monitor Multiple Test Runs Simultaneously

Multiple Test Runs can be run simultaneously, even when they contain the same Test Cases.

  1. Launch two browser tabs with different Polarion Test Runs.

    (The Test Run Planning Sidebar and browser URL will display the selected Test Run.)

  2. The URL parameter (containing the name of the originally selected Test Run) determines which Test Run is selected in the Current Test Run drop-down box when one of the Test Cases is opened or refreshed.

Signing Executed Test Cases

Projects may be configured to require testers to electronically sign executed Test Cases. If your project has been so configured, then when you mark the Test Case Verdict, a dialog displays requesting you to sign by entering your username and password. See also: Administrator's Guide: Signatures for Test Case Execution.

Reuse of Defect or Issue Items

By default, any unresolved Defect (or equivalent type Work Item) from any Iteration in the current Test Run, or from any Iteration in the previous Test Run (or the previous Test Run in the same group), is reused for failures of the same Test Case if the Test Case failed in the most recent execution.

In Test Runs that use Test Cases from different projects, this works differently. A Defect item can be reused only if it is linked to a test record in a Test Run in the same project as the current Test Run. This also applies to the Failed in previous and Failed in group reuse strategies. For example, suppose "Project A" uses the always defect reuse strategy. You execute a Test Case in a Test Run in this project, and it fails. The previous Test Run in which the same Test Case was executed is in "Project B"; that previous execution failed, creating a Defect item that is not yet resolved. That Defect, in "Project B", is not reused. Rather, a new one is created in the project specified in the Testing configuration of "Project A".

Show an Executed Test for a Finished Test Run

  1. Select a finished Test Run from the list of available Test Runs.

  2. Click on the Passed, Failed or Executed button in the Test Run Status section.

    The Test Run's results appear in a list.

  3. Click Show in table.

  4. The Test Run's details appear.

TIP

If you know the target Test Run you can also enter it directly as a search parameter.

(The results are read-only. Test Cases cannot be retested in a finished Test Run.)

Figure 20.13. View Executed Tests for a Finished Test Run


Executing a Test Run Offline Using Excel

You have the option to export Test Runs to Microsoft Excel, manually execute test cases and associated Test Steps offline/externally, and then import the results into Polarion via the Excel file previously exported, automatically creating test records in the system for all exported Test Case items. This approach can be useful if you use external testers who do not have access to your Polarion system.

You can also use the same method to manually record the results of externally-run automated tests in an exported Excel file, importing the results back to Polarion. However, if you use automated testing tools, you may prefer to set up automated import of automated test results into Polarion. See Integrating with External Testing Tools for information on this.

Before You Begin

On export, all Test Cases selected for the test run are exported, including any Test Cases from other projects selected via the Project Span field of the Test Run or the underlying Test Run Template. On import, Test Records are created for all the previously exported Test Cases. If Test Case selection spans across different projects, the following limitations currently apply:

  • The Excel export template must be stored in the project that contains the Test Run, or in the Repository scope.

  • If the Test Run selects Test Cases from multiple projects (selected via the Project Span field of the Test Run or the underlying Test Run Template) then all the projects containing Test Cases and the project containing the Test Run must have the same Test Steps columns defined, and the same ID for the Test Steps field of all Test Case type Work Items selected. (This assumes that the Test Steps field of Test Cases is exported to Excel.)

These limitations may be removed in a future Polarion release.

Exporting Test Cases to Excel

Begin the external testing process by exporting the Test Run to be executed externally to a Microsoft Excel workbook file.

  1. Open the project containing the Test Run you want to execute externally.

  2. In Navigation, select Test Runs. On the Test Runs page, select the Test Run you want to manually execute (or for which you want to manually record results of external automated tests).

    All relevant Test Cases should already be selected. If not, select them as described in Selecting (Planning) Test Cases.

  3. In the Test Run detail, click the Export Tests to Excel link and save the exported Excel file to your local file system. The default file name is the name of the Test Run. You can optionally save with a different name without affecting subsequent re-import of test results.

You can send the exported file to someone else who will actually perform the manual tests (or manually record the results of externally run automated tests). The tester can execute each of the Test Cases of the Test Run in whatever way is normally done, performing the steps by hand or running some tool, and logging the result of each Test Case in the Excel file. When manually executing tests, as each Test Case is executed, the tester logs the result of each test step in the Step Verdict column of the Excel sheet for each Test Case, and records the overall result of each Test Case in the Test Verdict column. Testers can optionally add or change the content of the Comment column. (These columns, and those for logging results, are the only ones external users can edit. All others are protected.)

Importing Test Results from Excel

After all tests for the Test Run have been performed and their results entered and saved in the Excel file, the tester returns the file to the Polarion user who exported the Test Run (or any user having the same permissions), who then imports the results file back to Polarion so that the results can be recorded and tracked.

  • If you make any changes to the Excel file before importing, be sure all changes are saved before initiating the import.

  • Open the same project from which the Test Run was exported, if you do not already have it open.

  • On the Test Runs page of Navigation, select the Test Run from which the Test Cases were exported to Excel.

  • In the lower pane of the Test Run page, click the Import Results from Excel link.

  • In the Import Results from Excel dialog, click the Browse button and select the previously exported Excel file on your local file system.

    If you check the Retest previously executed Test Cases option, test records in the Excel file will overwrite any test records that may already exist in Polarion. If unchecked, any existing test records in Polarion are preserved and those in the Excel file are ignored.

  • Click the Preview button. If changes are detected, a preview appears. Click the Import button to complete the import process. If no changes are detected, the dialog informs you and you can click Cancel.

    TIP

    If your project is configured to require an electronic signature for Test Case execution, you are prompted to provide your e-signature. A signature must be provided for the process to finish. Nothing is imported if you cancel the import without providing the requested signature.

When the process is finished, the dialog informs you of the result - successful import, or import failure. Assuming success, it displays statistics about the number of Test Cases executed and the number that passed. In every case, there is a link to the import log file which displays the process log in a new browser instance.

A successful import creates a new Test Run record. You can access records via the (Actions) menu button in the header of the Test Run detail pane on the Test Runs page.

Note

Adding new Test Cases to an Excel workbook exported for external test execution is not supported. If you want to use Excel to define new Test Cases, use the Import from Excel feature to define them in Excel and import them into a Polarion project.

Test Export Templates

Polarion provides a default template for exporting Test Cases to Microsoft Excel for external testing. It should be sufficient for most situations. However, administrators can download the default template file which can then be used as the basis for one or more custom export templates. One customization that may be desirable when exporting Test Runs that select Test Cases from multiple projects is to show the source Project of Test Cases in a dedicated column.

An administrator can upload customized template files, which users can subsequently select as templates for exporting Test Runs to Excel. For information, see Administrator's Guide: Configuring Testing: Configuring Test Export Templates.

Signing a Test Run

Polarion can support your formal testing review and sign-off process for Test Runs, using secure electronic signatures compliant with U.S. 21 CFR Part 11. A Polarion administrator can map your review and sign-off process to the Test Run Workflow configuration. Each step in the process can optionally be set up to require an electronic signature before a Test Run can move to the next step in the workflow.

To be able to sign a Test Run, you must have permission to READ and to MODIFY Test Runs. Signing a Test Run takes place when you invoke any status change that the project workflow configuration flags as requiring a signature. For example, setting the status to Passed might be configured this way.

To sign a Test Run:

  1. Open the Test Runs topic (Navigation > Test Runs) and select the desired Test Run.

  2. On the toolbar of the detail pane, click Properties.

  3. Go to the Status field and select the desired transition action. The actions in the list can vary from project to project depending on the project's workflow configuration. For example, one project might have an action "Perform action Mark as Passed" while another project might have an action "Mark Executed Without Failures".

  4. On the pane toolbar, click Save. If the selected action is configured to require a signature, the Enter User Name and Password dialog appears with the Username field focused.

  5. Enter your Polarion user name and press Enter or click Next. The Password field appears.

  6. Enter your password and press Enter, or click Sign.

Figure 20.14. Electronic Signature Dialog

TIPS

  • You can show Test Run signature information in Pages using the Test Run Signatures widget.

  • In the rare case where logging a signature conflicts with concurrent changes to the Test Run made by another user, Polarion displays a dialog with information and options for proceeding. You can review the other user's changes before saving yours.

Adding Comments to a Test Run

The Properties page of a Test Run contains a section for comments. When signing a Test Run, or making other changes, you can add a comment. Test Run comments work the same way as comments in Work Items: other users can reply to a comment, multiple comment threads are supported, and comments can be marked as Resolved.

On the Test Runs page, users can search for a Test Run by its comments, either by typing a query containing the fields comments.title, comments.text, comments.author.id, or comments.resolved, or via full-text search.

Example: status:failed AND comments.title:scheduled AND comments.author.id:rproject
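
For instance, to find failed Test Runs that still have unresolved comment threads, you could combine the comment fields listed above. (The value format accepted by comments.resolved may vary by Polarion version, so treat this as an illustrative sketch.)

Example: status:failed AND comments.resolved:false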

Commenting Test Runs

Test Runs support comments and comment threads. If you have the required user permissions (assigned by an administrator to your user role in your project), you can:

  • Add new comments to a Test Run (requires permission to comment Test Runs).

  • Reply to existing comments (requires permission to comment Test Runs).

  • Resolve comments or comment threads (requires permission to resolve comments in Test Runs).

Note that permission to read Test Runs is also required for all of the above.

To access Test Run comments:

  1. In Navigation, select Test Runs.

  2. On the Test Runs page, select the Test Run you wish to view.

  3. In the Test Run detail, click the Properties button, and then the (Scroll to Comments) icon.

Deleting Old Test Runs Manually

At some point you may decide you don't need to preserve all the Test Runs that have been executed. This is especially likely if you have integrated automated test execution with Polarion's build feature, which creates new Test Runs with every build. You can delete one or more Test Runs from the Test Runs topic. You must have permission for this action; otherwise, a message appears and the delete operation is canceled.

To delete one or more Test Run(s):

  1. Select the Test Runs topic in Navigation.

  2. In the table of existing Test Runs, select the Test Run(s) you want to delete. The page provides an instance of the Visual Query Builder that you can use to visually build a query to filter the list. This instance of the tool is limited to a set of query elements applicable to Test Runs, and it does not save queries for later reuse. (A visual query built with this tool can be converted to text for use in reports and similar contexts; see the example query after these steps.)

    To select multiple Test Runs for deletion, check the box on the row of each Test Run to be deleted. To select all Test Runs appearing in the list, check the box in the header of the Status column.

  3. If you are deleting a single Test Run, go to the lower pane, click (Actions) and choose Delete.

    If you selected multiple Test Runs to delete, a button labeled Delete [N] selected Test Runs appears in the lower pane of the Test Runs page (where [N] is the number of Test Runs selected in the upper pane). Click the button to delete all the selected Test Runs.
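
If the table is long, a text query can narrow it down before you select the Test Runs to delete. For example, assuming your Test Runs use the standard type and created fields, a query along these lines should return automated Test Runs created in a given period (treat the field values and date format as illustrative and adjust them to your project's configuration):

Example: type:automated AND created:[20230101 TO 20231231]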

Automated Cleanup of Old Test Runs

If you have many Test Runs as a result of automated testing, it may not be practical to delete old Test Runs manually. In this case, Polarion provides an automated job that will delete Test Runs you don't need to keep. For information, see the Administrator's Guide topic: Configuring Cleanup of Old Test Runs.

Preserving Test Runs in History

Every Test Run has a "Keep in History" field, accessible in the Test Run's Properties. When it is checked, the Test Run is preserved in Baselines. Before starting automated cleanup of Test Runs, mark the Test Runs that should be preserved in history for future reference. For example, you might keep all Test Runs for verification tests that were executed at some project milestone. Project managers should know when the automated cleanup job is scheduled to run, and should mark any new Test Runs to be preserved in history before the next scheduled Test Run cleanup.

Marking is simple when there are only a few Test Runs and only one or two to be marked: you can browse your project's Test Runs and mark each one individually in its Properties. If there are many Test Runs in the system, and/or many to be flagged for preservation in history, use bulk marking of Test Runs with the Keep in History flag instead.

To bulk mark multiple Test Runs:

  1. In Navigation, open the Test Runs topic.

  2. On the Test Runs page, in the table of Test Runs, check the box on the row of every listed Test Run that you want marked with the "Keep in History" flag.

    If there are many Test Runs, you can run a query to filter the table (see the example query after these steps). If your query returns only Test Runs that should be marked, you can mark all Test Runs in the table by checking the box in the table header.

  3. Once you have selected the Test Runs in the table, click the Keep in History button in the selection panel in the lower portion of the page to set the "Keep in History" flag on all selected Test Runs. (The selection panel appears when more than one Test Run is selected in the table.)
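
As an illustration, a query like the following might be used in step 2 to list only successfully executed automated Test Runs so that all of them can be marked at once. (The type and status values here are examples; use the values defined in your project's configuration.)

Example: type:automated AND status:passed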

Reviewing Test Execution Records

Polarion maintains a record of every execution of a Test Case during any Test Run that includes the Test Case. You can review the most recent records for any Test Case in the Test Records section of the Work Item Editor when a Test Case is selected. You can view past test execution records, up to the limit set in the system configuration, by selecting the period in the drop-down list provided. The Test Records section shows the results of the reported test executions, including Iterations of Test Steps, and provides links to each of the Test Runs. The current content of the Test Records section is included if the Test Case is sent to Print. If the record table contains additional details, its content is expandable.

Each Test Record is linked to the revision of the Test Case that was executed during the testing session that created the record.

Figure 20.15. Test Record

Viewing Test Run History

To view the history of a Test Run:

  1. With the relevant project open, click Test Runs in Navigation, and in the table of Test Runs, select a Test Run by clicking on its row. The detail of the Test Run loads in the lower part of the page.

  2. On the toolbar of the Test Run detail, click the (History) icon. A table of the historical revisions of the Test Run replaces the Test Run detail information.

You can access the detail of any revision by clicking its revision number. To exit the history, click the (History) icon again.