The command line tool

With TestArchitect's command line tool, tests can be launched from a command shell. When incorporated into batch files, the tool greatly extends the flexibility of test execution, especially from the standpoint of scheduling.

You are probably already familiar with the typical means by which automated tests are invoked, a method we call online execution. You create a test, or set of tests, open the Execute Test dialog box to configure any options, then run the tests from there.

The command line tool allows you to instead execute tests from your host operating system's command line interface. By itself, this capability may not be very useful. But TestArchitect also exploits this feature by enabling you to generate batch files that run the command line tool, configured to execute your tests. These batch files may in turn be launched automatically from other programs, for example as part of a nightly build process.
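For example, a generated batch file might be launched nightly by the Windows Task Scheduler. The following one-line sketch assumes a batch file at D:\batch\run_ui_tests.bat and a 2:00 AM start time; the task name, path, and time are illustrative placeholders, not values produced by TestArchitect:

rem Sketch only: register an existing TestArchitect batch file as a daily 2:00 AM task.
schtasks /create /tn "TA Nightly UI Tests" /tr "D:\batch\run_ui_tests.bat" /sc daily /st 02:00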

Using the command line tool

The TestArchitect command line tool is a Java program. When executed, the TACommandLine command must be followed by a set of arguments, some of which are required and others optional.

Note that, in most contexts, you will want to use a batch file to invoke the command line tool. Moreover, you will typically use the Execute Test dialog box of TestArchitect Client to generate the batch file for you automatically, along with all of its arguments. Hence the information in this topic is mostly for reference, such as when you need to read an existing batch file. There may also be times, however, when you want to edit a batch file by hand.

TACommandLine

A typical TACommandLine invocation to execute a single TestArchitect test module, as it appears in a generated batch file, might look like the following:
:TAExecute0
@echo off
title TestArchitect - Command Line Tool
ta execute -ls "lgvn14254:14101" -rep "SampleRepository" -prj "Car Rental" -u "administrator" -p "048484B545D" ^
-t "/Action-based Testing Basics/User Interface Tests/UI Elements" -rs "lgvn14740.logigear.com:53400" ^
-c "localhost:53600" -kwd "spanish" -lv "Browser:Internet Explorer" -rev "08/09/2017 14:40:32.694+0700" ^
-ss "case sensitive=bis=no;language=uds=English" -cc "Passed;Failed;Warning/Error" -cl "3" ^
-r "UI Elements" -x "D:\results" -html "D:\results" -subfld "1" -subhtml "1" -htmlscrn "1" ^
-tares "D:\results" -subtares "1" -taresscrn "1" -xsl "C:\Program Files\LogiGear\TestArchitect\templates\xsl" ^
-up "/Car Rental/Results/{today}" -upc "Failed,Passed,Passed with Warnings/Errors,Passed with known bug" ^
-tscript "{INSTALL_DIR}/BinClient/taplayback.exe" -tpath "{INSTALL_DIR}/binclient/taplayback.exe"
exit

Synopsis

ta execute [-ver] [-?] -ls <machine name:port> -rep <repository name> -prj <project name> 
-u <username> -p <password> -t <test path> [-tcs <test case>] 
[-rls <machine name:port>] [-rs <machine name:port>] [-c <machine name:port>]
[-d <device id>] [-dc <json file>] [-co] [-kwd <keywords variation>] [-lv <system variation>] 
[-udf <Field Name1> <Value1> <Field Name2> <Value2>...] [-rev <timestamp>]
[-ss <startup settings>] [-cc <filter conditions>] [-cl <value>]
[-r <result name>] [-cmt <comment>] [-x <xUnitresult>] [-xml <xmlresult>] 
[-tares <TAresult>] [-subtares <0|1>] [-taresscrn <0|1>]
[-html <htmlresult>] [-subfld <0|1>] [-xsl <path>] [-subhtml <0|1>] [-htmlscrn <0|1|2>] 
[-up <upload to repository>] [-upc <upload conditions>] 
[-upe <upload external tool>] [-w <wiql query>] [-upec <upload conditions>] [-upet <upload file type>] 
[-tscript <toolscript>] [-tcmd <toolcmd>] [-tpath <toolpath>] [-dl <delay time>]
execute
Executes automated tests from the command-line interface
-ver
(Optional) Displays the version of the TA command line tool
-?
(Optional) Displays this help message
-ls <machine name:port>
Machine name (IP address) and port number of the license server, or the trial key mode
Options:
  • Default port number: 14101
  • Port number range: 14101-14199
Example:
  • -ls "localhost:14101"
  • Trial key mode: -ls "trialkey:abc12-xyz12-dxed123"
-rep <repository name>
Name or ID of the repository hosting the project
This ID is one that TestArchitect generates for repositories. It can be viewed in the Repository Properties dialog box of the repository, available from the context menu in the TestArchitect explorer tree.
Example: -rep "Car Rental"
-prj <project name>
Name or ID of the project
This ID is generated by TestArchitect. It can be viewed in the Project Properties dialog box of the project, available from the context menu in the TestArchitect explorer tree.
Example: -prj "Car Rental"
-u <username>
User name to log in to the repository
-p <password>
Password (encrypted) associated with the user name
To obtain the encrypted password, generate a batch file from the Execute Test dialog box; the encrypted value appears as the -p argument in the generated file.
-t <test path>
Full path or ID of the test suite or the test module to be executed.
Example:
  • Test module: -t "/Action-based Testing Basics/Action Based Testing"
  • Test suite: -t "Test Suites/Functional Tests"
-tcs <test case>
(Optional) The sections of the test module to be executed, that is, the initial section, the final section, and/or specified test cases (by name or ID).


Test cases are delimited by a semicolon (;).
Example: -tcs "initial;tc01;final"
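To make the required arguments concrete, the following is a minimal sketch of an invocation that runs only the initial section, test case tc01, and the final section of a test module. The server, repository, project, and test-path values reuse the style of the earlier generated example, and the password placeholder must be replaced with an encrypted value taken from a generated batch file:

ta execute -ls "localhost:14101" -rep "SampleRepository" -prj "Car Rental" ^
-u "administrator" -p "<encrypted password>" ^
-t "/Action-based Testing Basics/User Interface Tests/UI Elements" -tcs "initial;tc01;final"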
-rls <machine name:port>
(Optional) Machine name (IP address) and port number of a redundant license server
Options:
  • Default port number: 14101
  • Port number range: 14101-14199
Example: -rls "localhost:14101"
-rs <machine name:port>
(Optional) Machine name (IP address) and port number of the repository server
Options:
  • Default port number: 53400
  • Port number range: 53400-53499
Example: -rs "localhost:53400"
-c <machine name:port>
(Optional) Machine name (IP address) and port number of the TestArchitect controller that is to run the test
Specified in the Controllers/Devices panel of the Execute Test dialog box (Select Controllers and Devices button).

Options:
  • Default port number: 53600
  • Port number range: 53600-53699
Example: -c "localhost:53600"
-d <device id>
(Optional) The ID(s) of the device(s) on which the test is to be executed (semicolon-delimited).
Specified in the Controllers/Devices panel of the Execute Test dialog box (Select Controllers and Devices button).


-dc <JSON file>
(Optional) The JSON file containing the desired capabilities of the target cloud devices.
Specified in the Controllers/Devices panel of the Execute Test dialog box (Select Controllers and Devices button). (Learn more.)


-kwd <keywords variation>
(Optional) Keyword variation(s) to apply to the test execution (semicolon-delimited).
Specified within the Keyword box in the Execute Test dialog box. (Learn more.)


Example: -kwd "English;China"
-lv <system variation>
(Optional) Linked variation(s) to apply to the test execution (semicolon-delimited).
Specified within the AUT Version box in the Execute Test dialog box. (Learn more.)


Example: -lv "Browser:Firefox; OS:Win8"
-udf <Field Name1> <Value1> <Field Name2> <Value2>...
(Optional) Field-value pairs for any user-defined fields assigned to the Result item type, as well as the Build Number built-in field. Tab-delimited.
The Build Number value is specified within the Settings panel of the Execute Test dialog box.


Example: -udf "build number=1.1;run machine=lgvn111"
-rev <timestamp>
(Optional) The specific historical snapshot, based on a given revision timestamp, of the project items invoked by the test run.
Specified within the Time Traveling box in the Execute Test dialog box. (Learn more.)


Example: -rev "03-21-2015 15:00:48.456+0700"
-ss "<Setting Name>=<Setting Type>=<Setting Value>"
(Optional) Defines the list of user-defined settings or reconfigured built-in settings. (Separate multiple settings with semicolons.)
Specified within the Startup Settings tab in the Execute Test dialog box.


Options of Setting Type:
  • bis: built-in setting
  • uds: user-defined setting
Example: -ss "object wait=bis=1;page wait=bis=234"
-cc <filter conditions>
(Optional) Specifies which types of test outcome events are to have associated screenshots captured and logged during the automated test.
Specified within the Screenshot Recording panel of the Execute Test dialog box. (Learn more.)


Options: (separate multiple conditions with semicolons)
  • Passed: Passed events
  • Failed: Failed events
  • WE: Passed events with Warnings/Errors
Example: -cc "Passed;Failed;WE"
-cl <value>
(Optional) Specifies the number of UI-interacting actions preceding each event specified in -cc for which screenshots are to be retained and logged to the results. (Default: screenshots of all UI-interacting actions are captured and logged.)
Specified within the Screenshot Recording panel of the Execute Test dialog box. (Learn more.)


Example: -cl "3"
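The two screenshot options are typically combined. As a sketch only (the condition values here are illustrative), the following fragment, appended to the required arguments, retains screenshots of the three UI-interacting actions preceding each Failed or Warnings/Errors event:

-cc "Failed;WE" -cl "3"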
-r <result name>
(Optional) Name for the test result.
Specified within the Result Name box of the Execute Test dialog box.


Example: -r "UI Elements"
-cmt <comment>
(Optional) Comment related to the test.
Specified within the Comment box of the Execute Test dialog box.


-x <xUnitresult>
(Optional) Location to which to store the test result report in xUnit form.
Specified within the Export Result(s) to xUnit panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Example: -x "D:\results"
-xml <xmlresult>
(Optional) Location to which to store the test result report in XML form.
Specified within the Export Result(s) to XML Detail panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Example: -xml "D:\results"
-tares <TAresult>
(Optional) Location to which to store the test result report in .TARESULT form.
Specified within the Export Result(s) to TARESULT panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more)


Example: -tares "D:\results"
-subtares <0|1>
(Optional) Whether the master result and its subresults are all exported to .TARESULT files.
Specified within the Export Result(s) to TARESULT panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Options:
  • 0: (Default) Do not export the master result and its subresults
  • 1: Export the master result and its subresults
-taresscrn <0|1>
(Optional) Keeps captured screenshots when exporting test results to .TARESULT files.
In the Execute Test dialog box, this parameter corresponds to the Export Result(s) to TARESULT panel of the Advanced Settings tab. (Learn more.)


Options:
  • 0: (Default) Do not export captured screenshots.
  • 1: Export screenshots.
-html <htmlresult>
(Optional) Location and filename to which to store the test result report in HTML form.
Specified within the Export Result(s) to HTML panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Example: -html "D:\results"
-xsl <path>
(Optional) Location of the customized XSLT template applied to HTML results.
Specified within the Apply customized XSLT template panel. (Learn more.)


Example: -xsl "C:\Program Files (x86)\LogiGear\TestArchitect\templates\xsl"
-subfld <0|1>
(Optional) Whether a flat directory structure or a hierarchical folder structure is used to store HTML results.


Specified within the Export Result(s) to HTML panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)
Options:
  • 0: Flat structure, that is, TestArchitect does not create subdirectories. There is only a single top-level directory which contains all HTML results.
  • 1: (Default) Folder structure, that is, TestArchitect creates a hierarchical tree structure, or subdirectories to store HTML results.
Example: -subfld "1"
-subhtml <0|1>
(Optional) Whether the master result and its subresults are all exported to HTML files.
Specified within the Export Result(s) to HTML panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Options:
  • 0: (Default) Do not export the master result and its subresults
  • 1: Export the master result and its subresults
Example: -subhtml "1"
-htmlscrn <0|1|2>
(Optional) Keeps captured screenshots when exporting test results to HTML.
In the Execute Test dialog box, this parameter corresponds to the Export Result(s) to HTML panel of the Advanced Settings tab. (Learn more.)


Options:
  • 0: (Default) Do not export captured screenshots.
  • 1: Screenshots are exported as thumbnail images.
  • 2: Screenshots are exported in their original, full size resolution.
Example: -htmlscrn "1"
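The HTML export options are usually specified together. The fragment below is a sketch only, with an illustrative output path: it exports the master result and all subresults to D:\results in a folder structure, with screenshots at their original resolution:

-html "D:\results" -subfld "1" -subhtml "1" -htmlscrn "2"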
-up <upload to repository>
(Optional) Path to the repository result folder to which the test result is to be stored.
Specified within the Automatically add result(s) to repository panel under the Advanced Settings tab of the Execute Test dialog box. (Learn more.)


Example: -up "/Car Rental/Results"
-upc <upload conditions>
(Optional) Specifies which types of test results are to be automatically stored to the repository after execution completes.


For detailed information on how to have TestArchitect store test results automatically based on pre-defined conditions, refer to Adding test results automatically.
Options: (separate multiple conditions with commas; terminate the string with a semicolon)
  • Passed: Passed test results
  • Failed: Failed test results
  • WE: Passed test results with Warnings/Errors
  • KB: Passed test results with known bug
Example: -upc "Passed,Failed,WE,KB"
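For instance, to add only failed results to the repository automatically, a sketch of the two upload options used together (reusing the result folder path from the earlier generated example) might read:

-up "/Car Rental/Results/{today}" -upc "Failed"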
-upe <upload external tool>
(Optional) Consists of a destination path plus other configuration options for the currently configured external test tool, such as Team Foundation Server-Microsoft Test Manager (TFS-MTM) or HP Quality Center.
Note:
  • Please contact LogiGear Support if you need details of how to configure this parameter by hand.
  • When running a batch file that uploads to a given external tool, you must ensure that the host repository is currently configured for that same tool.


-w <wiql query>
(Optional) Specifies the Work Item Query Language (WIQL) query used to query for test points on TFS.
This parameter only takes effect with TFS-MTM integration. (Learn more.)


Example: -w "SELECT * FROM TestPoint WHERE PlanID=3160 AND RecursiveSuiteID=3189 AND ConfigurationId=20"
-upec <upload conditions>
(Optional) Specifies which associated Team Foundation Server (TFS) test cases are to receive links to the attached test result. For each test case, the determination is based on its TFS outcome.
This parameter only takes effect with TFS-MTM integration. (Learn more.)


Options: (separate multiple conditions with commas)
  • Passed: Attach the TA test result file if the test case's TFS outcome is Passed.
  • Inconclusive: Attach the TA test result file if the test case's TFS outcome is Inconclusive.
  • Failed: Attach the TA test result file if the test case's TFS outcome is Failed.
Example: -upec "Failed,Inconclusive,Passed"
-upet <upload file type>
(Optional) The format of the test results uploaded to the external test tool, when the automated tests are run through TestArchitect.
This parameter only takes effect with TFS-MTM integration. (Learn more.)


Options:
  • html: HTML format
  • zip: ZIP format
  • <max html file size>: TestArchitect uploads the result as an HTML file if the file's size does not exceed the specified limit. If the limit is exceeded, it is compressed and uploaded as a ZIP file.
Example: -upet "HTML"
-tscript <toolscript>
(Optional) Path of the script for the test playback tool.
Specified in the Automation Tools dialog box.


-tcmd <toolcmd>
(Optional) Execution command line to run test automation with a customized harness program.
Specified in the Automation Tools dialog box.


-tpath <toolpath>
(Optional) Path of the executable application used to run the test.
Specified in the Automation Tools dialog box.


-dl <Delay Time>
(Optional) Delay time between actions.
-compileonly
(Optional) Changes the generated batch file so that it only compiles the tests, rather than running them, when the batch file executes.
Note: -compileonly cannot be specified from the Execute Test dialog box. You may find it useful to add this flag to an existing batch file, to switch the command from execution to compilation.
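As a sketch of such a hand edit (reusing the server, repository, and test-path values from the earlier generated example, with the other optional arguments omitted), appending the flag to the ta execute command switches it from execution to compilation:

ta execute -ls "lgvn14254:14101" -rep "SampleRepository" -prj "Car Rental" -u "administrator" -p "048484B545D" ^
-t "/Action-based Testing Basics/User Interface Tests/UI Elements" -compileonly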