1 Introduction
1.1 General
Functional Testing is the process whereby a piece of software (in our case a perfSONAR service) is tested in such a manner as to verify that it fulfils a set of predefined functionalities for that particular piece of software.
For the purpose of functional testing each tested service is considered to be a black box and the tester does not interfere with the code or internal structure of the service. The tester takes on the role of a client, meaning that she/he feeds the service with requests and receives responses as a client normally would.
The tester is responsible for determining whether the service behaves according to existing specifications, and in order to do that she/he must feed the service with a set of requests that will help her/him reveal any problems and inconsistencies. The tester has to process the received responses and determine whether they are consistent with the available documentation and whether the information inside the response is correct. Extending the definition of Functional Testing a little, the tester must also make sure that the service is backwards compatible with previous versions, as required by the Release Management team, and also determine whether the service is interoperable with other services, as will likewise be defined by the Release Management team.
1.2 Importance of Functional Testing
Functional Testing of perfSONAR web services is crucial for the project. It assures that quality software is supplied to the end users and that the software actually performs as described and defined in the documentation.
Functional testing is also useful for detecting flaws and vulnerabilities of perfSONAR services that could be exploited by malicious users, thus helping to improve the overall security of the services. It can also confirm that perfSONAR services are interoperable and backwards compatible. Such problems can be detected during the testing process and fixed before they reach the user. This builds confidence among users that the delivered software will work without problems, allowing them to focus on setting up and configuring the service.
1.3 Functional Testing Process
Functional Testing begins as soon as the first Release Candidates (RCs) of the services are ready from the developers. This happens right after the code freeze period, during which no changes are allowed in the code of each service. The developers then produce an RC, which includes the service code, documentation and the installation scripts. The Release Management team is responsible for making sure that the code, documentation and installation scripts meet the project standards. With this in mind, the Release Management team passes the RCs to the Testing Team for Functional Testing.
Ideally the testers should have developed their testing scripts by the time the first RCs are ready. When an RC is available for testing, the tester assigned to that particular RC should install the service and apply her/his testing scripts to it. The results of the testing procedure should be reported (a template document will be available) directly to the developer of the service and to the Release Management team. If any problems have been detected, they should also be reported through Bugzilla. The Release Management team will then decide whether a new RC fixing these problems should be built. If a new RC is to be built, the tester should apply her/his tests to the new RC and again report the findings. This process is repeated until the Release Management team decides that there is no need for a new RC and that the service has reached a satisfactory quality level.
2 Building Test Cases
2.1 General
Ideally a tester would submit to a service the whole range of possible requests. But this is considered infeasible, mainly for three reasons:
· It is very difficult to think of all possible requests.
· It is time consuming.
· It requires a lot of effort.
With this in mind, it is necessary to define a subset of all possible requests which will fulfil our testing purposes. This subset can be determined with the use of two major instruments:
· Documentation.
· Gained Testing experience.
Documentation is now provided with every perfSONAR service and is mandatory for all developers. The experience gained by testers can also be a useful tool for building test cases.
2.2 Using Documentation
Documentation is a very important source in building test cases. The two most helpful documents are the Interface Specification Document and the Change Log.
2.2.1 Interface Specification Document
The Interface Specification document contains information about the supported interfaces (requests) of the service and their functionality, the request and response descriptions of those interfaces in Relax-NG format, and also examples of requests and responses for the supported interfaces.
Examples inside the Interface Specification document are usually the first source of test cases for a tester, since they give a quick introduction to the service functionalities and also contain responses back from the service, giving a first idea of how the service works and what is expected to be returned.
Inside the Interface Specification document one can also find a description of the functionality of every interface supported by the service. In this description one can find what that particular interface can do, what the mandatory elements in a request are, and what the user can expect from the service. It is clear that the most important test cases can be derived from this description, covering each interface’s functionality, mandatory elements and attributes, and also the expected response.
Also, in an appendix, the Interface Specification document contains the Relax-NG schema descriptions of the requests and responses of the supported interfaces. Based on these descriptions, test cases can be built that test the robustness of the services and the correlation between the Relax-NG description and the real service implementation. These descriptions, because of the lack of a proper protocol specification, are currently considered to be the “protocol” of perfSONAR services and can be used to fill in the gaps left by the interface descriptions. They are also very useful for validating responses from the service in conjunction with our testing tools. A good Relax-NG syntax tutorial can be found here: http://relaxng.org/compact-tutorial-20030326.html .
2.2.2 Change Log
The Change Log, also contained in every service release, is a very important guide for building test cases. The Change Log provides information about the changes that have happened over time regarding the service functionality, the service interfaces and the service setup. With this in mind, one should remember that the tester must also make sure that the services she/he is testing are backwards compatible, as defined by the Release Management Team. The tester, acting as a client, must make sure that the service supports older versions of the schema or of the interface functionality, as the Release Management team has defined, thus assuring that clients following older specifications can still communicate sufficiently with the service.
A set of test cases assuring just that can be built based on the Change Log document and the Release Management backwards compatibility requirements. Examples of such backwards compatibility issues could involve namespaces, event types, result codes etc.
2.3 Using Gained Experience
The testing experience gained so far can be a valuable asset for building test cases for services. An experienced tester usually knows the general structure of a perfSONAR service request and can create test cases based on that. Such test cases can be:
· Testing the type attribute of the request message
· Metadata and data elements existence in the request
· The connection between metadata elements and data elements through the metadataIdRef attribute
· The ability of the service to respond to malformed or invalid requests with a result code rather than a soap fault
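As an illustration of the third item above, the metadata-to-data linkage can also be checked on the response side with a soapUI Groovy script assertion (soapUI and Groovy are introduced in Section 4 and Appendix A). The following is only a sketch: the NMWG namespace URI and the XPath expressions are assumptions based on the usual message layout and may need adjusting for a particular service.
// Sketch of a Groovy script assertion: every data element must reference an existing metadata element
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( messageExchange.getResponseContent() )
holder.declareNamespace( "nmwg", "http://ggf.org/ns/nmwg/base/2.0/" )
// Collect all metadata ids and all metadataIdRef attributes of the data elements
def metadataIds = holder.getNodeValues( "//nmwg:message/nmwg:metadata/@id" ) as List
def dataRefs = holder.getNodeValues( "//nmwg:message/nmwg:data/@metadataIdRef" ) as List
// Fail the assertion if any data element points to a non-existing metadata element
dataRefs.each { ref -> assert metadataIds.contains( ref ) }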
2.4 Grouping Test cases
Test cases can be separated into three different types, based on the kind of requests they use:
· Well formed requests with valid data
This group of tests uses requests that are formed as the service expects them and contain valid data. In this case the service responds normally, returning the requested information. The purpose of these cases is to determine whether the service functions as specified under normal conditions, to test more advanced features such as chaining, and finally to check backwards compatibility.
· Malformed requests
This group of tests involves sending the service several requests that are missing important parts of the request message, such as link and interface elements, the event type, or the type attribute. In most cases the service is expected to respond with an error or warning result code, but in some cases it might respond normally based on a default configuration. These tests are conducted in order to determine the robustness of the service and to make sure that the service follows the specifications as documented in the Interface Specification document.
· Well formed requests with invalid and near-valid data
This group of tests focuses on feeding the service with well formed requests that can be parsed normally by the service, but that contain invalid or near-valid data. The wrong data will force the service to respond with an error or warning result code. These tests aim at testing the ability of the service to operate within the boundaries defined by the documentation.
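For the last two groups, a Groovy script assertion in soapUI (see Section 4 and Appendix A) can check that the service answered with a result code instead of the normal payload. The sketch below assumes the common perfSONAR convention that result codes are returned as eventType values starting with "error." or "warning."; the exact values should be taken from the Interface Specification document of the service under test.
// Sketch: the response to a malformed or invalid request should carry an error or warning result code
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( messageExchange.getResponseContent() )
holder.declareNamespace( "nmwg", "http://ggf.org/ns/nmwg/base/2.0/" )
def eventTypes = holder.getNodeValues( "//nmwg:message/nmwg:metadata/nmwg:eventType" ) as List
assert eventTypes.any { it.trim().startsWith( "error." ) || it.trim().startsWith( "warning." ) }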
3 Validation and Verification
Although a big part of the testing methodology is determining the test cases, another big part is validating the responses produced by the service. It is important to establish that the responses from the service have the correct form as described in the documentation, because that is the form the potential users of the service will expect. It is also important to check that the data returned by the service are the expected, correct data, whether they were retrieved from the metadata configuration file or from some database. Based on this we can differentiate the processing of the responses:
· Checking that the response is not a soap fault
Each service needs to be robust enough. If the service for some reason is not able to properly process a request, then it must respond with a result code describing the problem and not with a soap fault (failing gracefully).
· Validating the responses
The responses need to be validated in order to confirm that they have the right form.
· Verifying returned “metadata”
Services often return information about links, interfaces or keys. This information needs to be verified by the tester in order to confirm that the service works as it is supposed to.
· Verifying returned data
Services return data that are either stored inside a database or file, or are the result of a requested measurement. These data need to be verified and cross-checked against the data inside the databases and against measurements made directly with the measurement tools.
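The first and last checks in the list above can be sketched together in a single soapUI Groovy script assertion (see Section 4 and Appendix A). The expected value, the namespace declarations and the XPath expressions below are illustrative assumptions only and must be adapted to the service under test.
// Sketch: reject SOAP faults and cross-check one returned datum against an expected value
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( messageExchange.getResponseContent() )
holder.declareNamespace( "soap", "http://schemas.xmlsoap.org/soap/envelope/" )
holder.declareNamespace( "nmwg", "http://ggf.org/ns/nmwg/base/2.0/" )
// The service should fail gracefully, so the response must not be a soap fault
assert holder.getDomNodes( "//soap:Fault" ).length == 0
// Compare one returned value with a value obtained out of band
// (e.g. from the metadata configuration file or a direct run of the measurement tool)
def expected = "62500000" // hypothetical expected value
def returned = holder.getNodeValue( "//nmwg:message/nmwg:data/nmwg:datum[1]/@value" )
assert returned == expected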
4 Other Issues
4.1 Testing Tools
Up until recently testers wrote their tests using any language they wished such as Perl or Java. For interoperability and portability reasons the Release Management team has decided to confine testers to the use of only one testing tool, soapUI.
An introduction to soapUI can be found in Appendix A, and a lot of information on using soapUI and its scripting language, Groovy, can be found on soapUI’s home page http://www.soapui.org . soapUI uses a graphical interface for creating test cases and has a set of assertions for validating and verifying responses, along with the possibility of using a Groovy script for more elaborate tests. A tutorial on soapUI can be found here: http://wiki.perfsonar.net/jra1-wiki/images/c/c0/SoapUITutorial_new.ppt . A newcomer to soapUI should go through these resources before starting to build tests in soapUI.
NOTE: When using soapUI to build test cases and test steps, try to be as descriptive as possible when choosing names for the test cases and test steps.
4.2 Using soapUI
As mentioned before soapUI uses a variety of tools for validation and assertion building.
· Validation assertions
A very useful tool of soapUI is the schema validation assertion. Unfortunately the soapUI validator does not support Relax-NG schemas, making it hard and sometimes impossible for testers to use it to validate responses. Fortunately a workaround has been found by using a suitable validator within a Groovy script. A guide on how to use it can be found in Appendix B.
· Not soap fault assertions
A very useful tool for determining if the service is failing gracefully; the use of this assertion is highly recommended.
· XPath assertions
This kind of assertion allows the user to parse the response using standard XPath expressions and functions. The user can check the assertion instantly on the response. The use of this tool is recommended because, apart from retrieving data, it can also detect structural changes to the response message as well as namespace changes.
· Groovy script assertions
For more elaborate programming tasks a Groovy script can do the trick. Groovy is derived directly from Java, so all Java classes for accessing databases and files can be used without a problem; a short sketch of such a database cross-check is given after this list.
· Simple contains assertions
It simply checks if a specified string is contained in the response. The use of this kind of assertion is not recommended because it cannot detect changes in schema or namespace that are important to end users.
· Schema validation assertion
soapUI has schema validation functionality which unfortunately does not support Relax-NG syntax. In the past we had to convert our rnc files to xsd so they could be used in soapUI via a WSDL description. This was very time and resource consuming, since the transition to xsd was not flawless. To fix this problem we are currently using a Java validator that supports Relax-NG through a Groovy script (Appendix B).
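As mentioned in the Groovy script assertions item above, Java classes can be used to cross-check returned values against the service’s backing database. The following sketch uses groovy.sql.Sql with purely hypothetical connection details, table and column names; the JDBC driver jar has to be placed in soapUI’s ext directory, just like the validator libraries described in Appendix B.
// Sketch: cross-check a returned datum value against the database behind the service
import groovy.sql.Sql
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( messageExchange.getResponseContent() )
holder.declareNamespace( "nmwg", "http://ggf.org/ns/nmwg/base/2.0/" )
def returned = holder.getNodeValue( "//nmwg:message/nmwg:data/nmwg:datum[1]/@value" )
// Hypothetical JDBC URL, credentials, driver, table and column names
def sql = Sql.newInstance( "jdbc:mysql://localhost/perfsonar", "user", "password", "com.mysql.jdbc.Driver" )
try {
    def row = sql.firstRow( "SELECT value FROM measurements ORDER BY timestamp DESC" )
    assert row != null && returned == row.value.toString()
} finally {
    sql.close()
}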
soapUI can also be run in command line mode and it can produce HTML reports with the help of Ant. These features and modifications can be found on the soapUI project web site.
5 Appendix A
This is a guide that was originally written by Jochen Reinward and included in his soapUI tests for the CL MP. I’ve modified some bits here and there to bring it up to date.
Introduction
This document gives an introduction on using soapUI for functional testing of web services. It was designed especially for NWMG/perfSONAR services.
The sections "Starting soapUI" and "Running tests" are a quick intro showing how to use soapUI with an already created project for doing preconfigured functional testing. The following paragraphs describe how to configure functional tests with soapUI.
The only files required for creating a soapUI project from scratch are: The RELAX NG Compact Syntax (RNC) schema files, the Web Services Description Language (WSDL) files and some example XML requests.
If you already have a configured soapUI project as a soapUI project XML file, you normally don't need the examples anymore, since they are included in the project file. NOT included in the project file are the schema and WSDL files. You need XML Schema (XSD) files for soapUI, not the RNC schema files.
You need some tools to accomplish the following steps:
· soapUI: soapUI is, of course, always needed.
· trang: trang is used for converting RNC schemas to XML Schema (XSD).
· Java: Both soapUI and trang need Java.
Starting soapUI
Installing and starting soapUI should be quite simple:
· Download from http://www.soapui.org/ (e.g. version 1.6)
· Extract and execute soapui-1.6/bin/soapui.sh .
There is also a soapui.bat...
You should change to the working directory (the directory containing the project file(s) and the schema files) before starting soapUI and start soapUI from this directory.
Running tests
After starting soapUI, you can import (File -> Import Project) a project file and start exploring soapUI. Or you (re)create a project on your own by skipping this section and reading the following sections.
It is important that soapUI is able to find the WSDL files, since they are NOT included in the soapUI project file. Best practice here is to put the WSDL files in the same directory as the project file and load them in soapUI using the URL file:./<WSDL-FILE> . This way you can copy/extract this directory wherever you want and the WSDL files will still be found.
The same is valid for the schema files (XSD) that are referenced from the WSDL files. Additionally it is useful to use a subdirectory for the schema files, since XSD schemas often consist of more than one file. A reference in a WSDL file might look like this: <include schemaLocation="subdir/schema-mainfile.xsd"/>
Instead of using the UI (user interface) you can use the CLI (command line interface) for just starting the tests:
> <SOAPUI-PATH>/bin/testrunner.sh <PROJECT-FILE>
For more information on using soapUI take a look at the user guide: http://www.soapui.org/userguide/.
For the CLI switches see: http://www.soapui.org/userguide/commandline/functional.html .
Also information about "JUnit Integration" and "JUnit Reporting" can be found there.
Creating WSDL files
A typical perfSONAR service (Release 2.0) has separate RNC files for requests and responses. Both files can NOT be loaded from one WSDL, because both have root elements with the same name. This will most likely change in newer versions of perfSONAR.
At the moment this fact makes it necessary to create separate WSDL files for requests and responses. As a starting point we assume we have:
· service-req.rnc - The RNC schema for a request
· service-res.rnc - The RNC schema for a response
Since soapUI cannot handle RNC schemas, we have to convert them to XML Schema (XSD) using trang:
> mkdir service-req service-res
> trang service-req.rnc service-req/service-req.xsd
> trang service-res.rnc service-res/service-res.xsd
Note: We created subdirectories as recommended above.
Now you have to create an appropriate WSDL file for the service including these schemas. This is not covered here at the moment! Look for an appropriate example to start...
Important Note: It is now only necessary to create WSDL and XSD files for the request. Schema validation for the responses can be handled more easily by the use of a Java validator; more on this in Appendix B.
Setting up a soapUI project
First add a new project:
File -> New WSDL Project
Choose an appropriate project name (best is the service name, e.g. BWCTL MP) and save the project file in the working directory where the schema files are located (see above). Now you can add a "service" by adding the WSDL file (service-req.wsdl):
<Context menu of project> -> Add WSDL from URL
In the box use file:./service-req.wsdl as URL (see above). This avoids absolute paths and a lot of related problems! For the same reason please don't use "Add WSDL from File"!
Note: This may not work in a Windows environment and the use of absolute paths may be unavoidable.
You are now asked if you want to add an example request for the interface. Just do it, and also "Create optional elements in schema" if you like. Now you can see what soapUI can do for you!
Important: The RNC schemas are far too simple at the moment. The better and more accurate the schema is, the better the automatically created request is filled in. Keep in mind: the created requests can never be absolutely perfect!
From within the popped up request window you can send the request to the service shown in the drop down menu in the upper tool bar by pressing the green button on the left of the bar. The template request soapUI created is, as said above, most likely not a correct request for the service!
Best is to find or create some example request for the service. Just copy and paste (use Ctrl-V in soapUI) its contents (without the XML header <?xml...>!!) between <soapenv:Body> and </soapenv:Body> in the request window. Now it should be possible to submit the request and the response will be displayed in the right window.
IMPORTANT: Check the service URL you are contacting in the pull down menu in the top tool bar.
Creating tests
First create a test suite:
<Context menu service> -> New TestSuite
Then a new test case within the test suite:
<Context menu of test suite> -> New TestCase
You can (obviously) organise your tests in suites and cases. Now you can double click on "Test Steps" to see the editor for the steps. But for adding a real test you have to go back to a formerly created request. Right-click on it and choose "Add to TestCase" from the context menu. Now choose the test case (or create a new one). Give the test a name and also add the Schema Assertion and the SOAP Fault Assertion. The test will now be added as a step of the test case and appear in the editor. A new window similar to the request window shown earlier should also appear.
You can now push the green button and the request will be submitted and the tests will be applied.
Further testing
You can change the tests and add new ones without sending the request again. The tests will be applied to the old response until you explicitly request a new one. You can even edit the response in the right window and run the tests on this modified response.
Not very much testing so far? Right click in the area with the green circles and choose "Add Assertion" from the context menu. Choose "XPath Match" from the drop down menu and press OK. Now you can enter an XPath expression in the upper text input field. But first you should press "Declare" to automatically import namespace definitions from the response for your convenience. After the "declare" statements you can add your own code. With "Select from current" the XPath expression will be evaluated and the result displayed in the lower window. With "Test" a boolean check is performed, just as the testing procedure will do for the actual test. You want to know what XPath is? You can find a good introduction here: http://en.wikipedia.org/wiki/XPath .
A typical test for a perfSONAR service may look like:
/soap:Envelope/soap:Body/nmwg:message/nmwg:data/nmwg:datum[1]/@numBytes >= 0
This example tests whether the first result of a BWCTL run from the BWCTL MP is not negative. But what about the other results in the response? The following expression counts all negative results. Of course there should be none for a successful test:
count(/soap:Envelope/soap:Body/nmwg:message/nmwg:data/nmwg:datum[@numBytes < 0]) = 0
Or, since you can even use XPath 2.0, which has a lot of additional features:
every $numBytes in
/soap:Envelope/soap:Body/nmwg:message/nmwg:data/nmwg:datum/@numBytes
satisfies ($numBytes >= 0)
6 Appendix B
This is a small guide for using a Java validator within a Groovy script for the purpose of applying schema validation to responses in soapUI.
Actions
1. To make life easier please install soapUI 1.7.5 or later. Don’t worry; tests that were developed in older versions will still work. You can get it from here: http://www.soapui.org/ .
2. Download the zip file including the java validator libraries from here: http://weblogs.java.net/blog/kohsuke/archive/20060210/rng-validation.zip/rng-validation.zip
3. Unzip the file anywhere you like.
4. Copy the jar files from the lib directory to the ext folder of the directory where you have installed soapUI 1.7.5.
5. Then you must transform the rnc files into rng files using trang; trang response.rnc response.rng will do the trick.
6. Now, in the project you are working on, create a new script assertion and copy the following code into the script window:
// Imports needed for Relax-NG schema validation via JAXP
import java.io.File
import javax.xml.XMLConstants
import javax.xml.validation.SchemaFactory
import javax.xml.transform.dom.DOMSource

// Get the response of the current test step as an XML holder
def groovyUtils = new com.eviware.soapui.support.GroovyUtils( context )
def holder = groovyUtils.getXmlHolder( messageExchange.getResponseContent() )

// Create a schema factory for Relax-NG (provided by the validator libraries
// copied into soapUI's ext directory) and load the RNG schema file
def factory = SchemaFactory.newInstance( XMLConstants.RELAXNG_NS_URI )
def schema = factory.newSchema( new File( "C:/Documents and Settings/IBM/My Documents/JavaRRDMAsoapUI/SetupDataResponse.rng" ) )

// Validate the nmwg:message element of the response against the schema;
// an exception is thrown if validation fails
def validator = schema.newValidator()
def node = holder.getDomNode( "//nmwg:message" )
validator.validate( new DOMSource( node ) )
You only need to change the path to the rng file. If the schema validation fails the validator will throw an Exception mentioning what went wrong.