Monday, 21 January 2013

Methodology of Test Effort Estimation


Test estimation in the software testing industry is similar to the time management we do in our day-to-day lives. In order to understand what test effort estimation is, let us first understand the term estimation.

Estimation is the intelligent anticipation of the amount of work that must be done and the resources (human, financial, equipment and time) needed to perform the work at a future date, in a defined environment, using specific methods.

Most of you who think you have never done test estimation before have actually done it, though you may not have realized it.

Let's consider an example. You want to reach your office at 9:00 AM, so you estimate the time it would take you to get there, say 45 minutes. You also take 15 minutes for your morning walk, 30 minutes to get ready and 20 minutes for breakfast. So, to reach the office on time, you would need to wake up no later than 7:10 AM.

Now let us understand test effort estimation.

Test estimation is the process of estimating the testing cost, testing effort, testing size and testing schedule for a particular software testing project, under a specific environment, with the help of specified methods, testing tools and test techniques.

Software Testing estimation is important as it is directly linked with the project cost & deadline. Estimation is based on:

# Documents/Knowledge available: Requirement Specification Document, Domain Knowledge, Tool Understanding

# Assumptions: Requirement Doc is complete, builds will be stable etc.

# Calculated Risks: Manpower cost, lack of system understanding, backup resources available or not etc.

# Past Experience: Past experience or historical data

There are different standard and non-standard methods of doing test estimation. Many managers/leads are not comfortable doing estimation as it is a time-consuming activity, so they follow a non-standard way of estimation based on their past experience. But if they are asked to work on a new technology or domain, it becomes difficult for them to do test estimation.

Methods of software test estimation:

1) Function Point Analysis / Test Point Analysis:
FPA is an ISO-recognized method that measures the functional size of an information system or application. The size reflects the amount of functionality from the functional, or user, point of view. It is independent of the technology used to implement the system and depends entirely on the SRS (Software Requirement Specification).

Formula to calculate FP:

FP = UFP * VAF

Where UFP is the sum of the complexities of the basic functions: internal logical files (ILF), external interface files (EIF), external inputs (EI), external outputs (EO) and external enquiries (EQ), and VAF is the value adjustment factor.

Number of test cases = FP * 1.2 (Capers Jones formula)

Test effort = Number of test cases * (percentage of development effort / 100)

Drawback: Detailed SRS is required.
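
As a rough illustration, here is a minimal Python sketch of the calculation. The complexity counts, weights, VAF value and the 40% development-effort ratio are all assumptions made up for the example, not values from a real project.

# Function Point Analysis sketch -- counts and weights below are illustrative assumptions.
ufp = (
    7 * 4 +   # ILF: 4 internal logical files, assumed complexity weight 7
    5 * 3 +   # EIF: 3 external interface files, assumed weight 5
    4 * 10 +  # EI : 10 external inputs, assumed weight 4
    5 * 8 +   # EO : 8 external outputs, assumed weight 5
    4 * 6     # EQ : 6 external enquiries, assumed weight 4
)

vaf = 1.05                    # value adjustment factor (assumed)
fp = ufp * vaf                # FP = UFP * VAF

test_cases = fp * 1.2         # Capers Jones: number of test cases = FP * 1.2

dev_effort_pct = 40           # assumption: testing takes 40% of development effort
test_effort = test_cases * (dev_effort_pct / 100)

print(f"UFP={ufp}, FP={fp:.1f}, test cases={test_cases:.0f}, test effort={test_effort:.1f}")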

2) Work Breakdown Structure (WBS):
Break down each testing task (mapping the task to each deliverable as per the requirement document) into the smallest chunks or sub-tasks, and then estimate each sub-task.
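
A minimal Python sketch of the roll-up, assuming hypothetical modules, sub-tasks and hour figures:

# WBS sketch -- task names and hours are hypothetical, only to show how
# sub-task estimates roll up into the total test effort.
wbs = {
    "Login module": {
        "Write test cases": 8,
        "Prepare test data": 4,
        "Execute functional tests": 12,
        "Log and retest defects": 6,
    },
    "Payment module": {
        "Write test cases": 10,
        "Execute functional tests": 16,
        "Log and retest defects": 8,
    },
}

for task, subtasks in wbs.items():
    print(f"{task}: {sum(subtasks.values())} hours")

total = sum(sum(subtasks.values()) for subtasks in wbs.values())
print(f"Total estimated test effort: {total} hours")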

3) Point Estimation Technique:
It resembles the WBS estimation method: break down every testing task into sub-tasks, and then make the following three estimates for each of them:

# Best Case/ Positive scenarios: Where everything goes right, P

# Worst Case/Negative scenarios: Where everything goes wrong, N

# Average Case/Exceptional scenarios: Where a few things go right and a few deviate from the actual plan, A

Test Effort Estimate = (P + 4*A + N) / 6

(As in the standard three-point/PERT formula, the average case is weighted most heavily and the whole sum is divided by 6.)
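
A minimal Python sketch of this three-point calculation, using hypothetical sub-tasks and hour figures:

# Three-point estimate per sub-task: (best + 4 * average + worst) / 6
# Sub-task names and hour values are hypothetical.
def three_point(best, average, worst):
    return (best + 4 * average + worst) / 6

subtasks = [
    ("Smoke test the build",      2,  3,  6),   # (best, average, worst) in hours
    ("Execute regression suite", 10, 14, 24),
    ("Retest fixed defects",      4,  6, 10),
]

for name, b, a, w in subtasks:
    print(f"{name}: {three_point(b, a, w):.1f} hours")

total = sum(three_point(b, a, w) for _, b, a, w in subtasks)
print(f"Total test effort estimate: {total:.1f} hours")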

4) Delphi Method:
It is similar to the WBS estimation method: tasks and sub-tasks are allocated to team members or to some experts, who then estimate how much time they will take to complete each task. Finally, their estimates are consolidated to arrive at the final estimate for each task.
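
A minimal sketch of the consolidation step in Python; the expert names, tasks and hour figures are hypothetical, and a simple average stands in for the repeated discussion rounds of a full Delphi exercise:

# Delphi-style consolidation sketch -- names and estimates are hypothetical;
# each expert's per-task estimate is averaged into the final figure.
from statistics import mean

estimates = {
    "Prepare test environment":  {"Expert A": 6,  "Expert B": 8,  "Expert C": 7},
    "Execute integration tests": {"Expert A": 20, "Expert B": 24, "Expert C": 18},
}

for task, by_expert in estimates.items():
    print(f"{task}: {mean(by_expert.values()):.1f} hours")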

5) Use Case Points:
A use case is a document which describes the behavior of, and interaction with, the system in reaction to a specific query or action of an actor. Here, an actor means either an end user or a stakeholder. An interaction is initiated by the primary actor with some specific objectives in mind, and the system responds while safeguarding the interests of all concerned actors. Depending on the requests made and the various conditions surrounding those requests, different system behaviors/flows open up. Test cases are based on the use cases.
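
The post does not spell out the Use Case Points calculation; as a hedged illustration, the commonly cited form is UCP = (UUCW + UAW) * TCF * EF, with test effort then obtained by multiplying UCP by an hours-per-point conversion factor. A minimal Python sketch with assumed counts and factors:

# Use Case Points sketch -- all counts, weights and factors below are assumptions
# used only to illustrate the commonly cited calculation.
uucw = 3 * 5 + 2 * 10 + 1 * 15   # unadjusted use case weight: 3 simple, 2 average, 1 complex use case
uaw  = 2 * 1 + 1 * 2 + 1 * 3     # unadjusted actor weight: 2 simple, 1 average, 1 complex actor

tcf = 0.95                       # technical complexity factor (assumed)
ef  = 0.90                       # environmental factor (assumed)

ucp = (uucw + uaw) * tcf * ef
hours_per_ucp = 20               # assumed conversion factor
print(f"UCP = {ucp:.1f}, estimated effort = {ucp * hours_per_ucp:.0f} hours")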

Conclusion: An estimate is not a closed-ended document, but a live document. You should keep your estimation document updated as and when you see actuals deviating from the plan, and it should be reviewed continuously over the course of the project.

Tuesday, 8 January 2013

SAP Test Acceleration and Optimization (SAP TAO)

SAP Test Acceleration and Optimization (SAP TAO):

Purpose Of SAPTAO:

The highly secure and mission-critical nature of enterprise resource planning (ERP) data requires many SAP customers to test their ERP applications and business processes regularly. Many SAP customers use expensive customized manual procedures to test processes and applications. Manual testing is time-consuming, and frequently requires a team of experienced quality assurance (QA) professionals. Subject matter experts also often need to spend a lot of time communicating the process data flow to testers. 
SAP Test Acceleration and Optimization streamlines the creation and maintenance of ERP business process testing. 

SAP Test Acceleration and Optimization helps QA specialists to break down a test into components which are:

Assembled into test cases in a simple interface, using drag and drop 
Parameterized for flexible reuse, such as reusing a test that has updated data 
Maintained easily and inexpensively, even when screens, flows, or service packs change 

SAP Test Acceleration and Optimization is designed for SAP-GUI-based applications. SAP Test Acceleration and Optimization users should be experienced quality managers who are familiar with the SAP Quality Center application by HP, SAP GUI ERP applications, and business process testing.

Automatic testing with SAP Test Acceleration and Optimization maximizes: 

Testing deployment 
SAP Test Acceleration and Optimization, with SAP Quality Center, dramatically reduces the amount of time required to build and execute test scripts. 

Reuse 
SAP Test Acceleration and Optimization eliminates the need to create new tests whenever a component changes. If one component in a group of tests changes, replace that component and re-consolidate the tests.

Maintenance 
SAP Test Acceleration and Optimization records component parameters. It provides a Microsoft Excel spreadsheet to save parameters for reuse and maintenance. SAP Test Acceleration and Optimization helps you to determine the need for repairs, and helps you to repair your components.

Robustness 
The SAP Test Acceleration and Optimization inspection process ensures that SAP Test Acceleration and Optimization tests are more robust during changes. Inspection examines the parameters in a component, not just the screen object behavior.

The SAP Test Acceleration and Optimization client application runs on Windows. It performs six key functions:
1. PFA (Process Flow Analyzer)
2. Inspection/UI Scanner
3. Import/Export
4. Change Analysis
5. Repository
6. Consolidator

Configuration

License Validation:
* Once Installation is complete, Double Click on the SAPTAO Icon on the Desktop. SAPTAO UI Appears. 
* Click on the "Configuration" Link on the top right of the screen and select the "License" Tab
* Select the Solution Manager system from the System List 
* Click on "Test SAP Connection" (the text box on the right, "SAP TAO License", turns green with the text "Enabled")
Connection Settings:

* Click on the Connect Module on the Left hand Side Panel of SAPTAO UI.
* Select the SAP Managed system from the list and provide valid credentials
* Click on "Test SAP Connection" Button and then Click on "Save" Button
* The ICON displaying the selected current Backend system turns Green
* Provide Valid QC Connection details along with Domain and project details and click on "Save" Button
* The ICON displaying the connection state of QC turns Green
Configuration Settings

Click on the Configuration Link and set the following in the respective tabs as described below.

Inspection /UI Scanner

The Inspection tab page in the SAP Test Acceleration and Optimization client selects multiple SAP GUI screens and transactions for testing, in an easy-to-use interface. It then determines whether these screens are valid, and sends them to SAP Quality Center as screen components. 

Set up Inspection

Upload components to SAP Quality Center:

Uploads the screen components to SAP Quality Center. If this option is not selected, the parameters "Overwrite components" and "Delimiter for Component/Unique Identifier" on the Import/Export tab page are used to name new components.

Component path

The location of components created during inspection 
Duplicate components are overwritten without confirmation. 

UI Scanner:
A plug-in module for HP QuickTest Professional that enables you to collect information from one screen at a time and send the screen objects to HP Quality Center as a component. The UI Scanner scans all objects on the screen, including dynamically generated objects. The UI Scanner requires an active QuickTest Professional installation on your local workstation.

Import/export 
Import/export exchanges components between the SAP Test Acceleration and Optimization client and SAP Quality Center. The import/export module does the following. 

Export components from the inspector in the SAP Test Acceleration and Optimization client to SAP Quality Center 
Import components from SAP Quality Center to the SAP Test Acceleration and Optimization client 
Export components from the local memory to SAP Quality Center 
Export a component from the UI scanner and send it to SAP Quality Center, in the background 

PFA also uses import/export while creating the test components

Procedure 
1. Choose Import/Export tab page. 
2. Select the required options. 

Process flow analyzer:
The process flow analyzer (PFA) records all user interactions, and the sequence of screens, in a business process, and stores them in the SAP Test Acceleration and Optimization repository. It automates inspection, retrieval of dynamic SAP GUI properties at runtime, and creation of components. 

Procedure 
On the PFA tab page, make the following settings:
* The PFA will create a second spreadsheet to store the recorded outputs.
* Do Not Use Screen Components: determines whether screen components are to be inserted in the test. If selected, only default components will be used. Do not select this option unless required.

Steps for process flow analysis:
1. Choose to add a transaction.
2. Select a transaction, e.g. VA01. The name for the analysis is already populated, with a time stamp concatenated to it; you may change it.
3. Choose "Start" to begin the process flow analysis.
4. Execute the process flow as a business user.
5. After executing the transaction completely, stop the PFA by clicking the "Stop the PFA" button in the PFA controller window.
6. A report is generated which captures the activities performed by the business user.

Change analysis:
Change analysis analyzes the impact of changes due to upgrades, SAP patches, or custom development, on a test, component, or consolidated component. The impact is found by comparing the results of the technical bill of materials (TBOM), transport requests and SAP patches.

Set Folder for BPCA 
In this section, you specify: 
The folder created in SAP Quality Center to store the test set created by BPCA 
Whether to use BPCA result ID as the test set name 
The change analyzer functionality helps in repairing tests that are impacted by a software change. SAP Test Acceleration and Optimization relies on the SAP Solution Manager Business Process Change Analyzer (BPCA) result. The BPCA result ID can be searched for on the basis of a solution or project.

Steps for Change Analysis:- 
1. Enter your result ID and choose "Change Impact Analysis".

Repository 
The SAP Test Acceleration and Optimization repository is part of the SAP Solution Manager system, and stores data required to create, optimize and maintain components and tests. The repository contains the following:
User interactions and the sequence of screens in a business process 
Information specific to SAP Test Acceleration and Optimization, that cannot be retrieved by other tools 
Results or states during process flow analysis, before the component is created 

The SAP Test Acceleration and Optimization repository tools comprise the following. 

Component Explorer 
The user can see the list of all the components they have created for a specified QC domain and QC project.

PFA Explorer 
The user can search for the PFAs created so far using search strings, and can specify the property to search on. The properties that can be searched on are:
• User 
• Analysis Name 
• Transaction 
• System 
• Client 
• Date Time 
• Language 

Once the search is done, the user can click on one of the PFAs to see more technical details.

Consolidation
Consolidation creates a single component from the objects and data in an SAP Quality Center test. The component contains all the code and screen elements in a test. It executes much faster than the individual components and helps you to maintain business processes. Consolidation also takes dependent tests into account.

Friday, 4 January 2013

GUI and Usability Test Scenarios


GUI and Usability Test Scenarios


1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and grid info should be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on these fields
8. Upon clicking any input text field, the mouse pointer should change to a text cursor
9. User should not be able to type in drop down select lists
10. Information filled by users should remain intact when there is error message on page submit. User should be able to submit the form again by correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size limit
18. All buttons on page should be accessible by keyboard shortcuts and user should be able to perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links (a small automation sketch for scenarios 19 and 20 follows this list)
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or delete operation
23. An hourglass should be displayed when the application is busy
24. Page text should be left justified
25. User should be able to select only one radio option and any combination for check boxes.

General Web Testing Scenarios

General Test Scenarios


1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using CSS style other than error messages style (e.g. using green color)
5. Tool tips text should be meaningful
6. Dropdown fields should have first entry as blank or text like 'Select'
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than the specified max limit should not be accepted or stored in the database (a boundary-value sketch follows this list)
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user's first name should be labeled properly as 'First Name'
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application behavior after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit page twice by pressing submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with first and last position blank should be handled correctly
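
Scenarios 13 and 25-27 above are natural candidates for boundary-value test data. A minimal Python sketch that only generates the candidate input values; the field names and limits are assumptions, and the values would still have to be fed to the form under test:

# Boundary-value test data sketch -- field definitions are hypothetical.
fields = {
    "quantity": {"min": 1,   "max": 100,     "decimals": False},
    "amount":   {"min": 0.0, "max": 9999.99, "decimals": True},
}

for name, spec in fields.items():
    lo, hi = spec["min"], spec["max"]
    values = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]   # around both boundaries
    values += [-5, "abc", ""]                           # negative, characters, blank
    if spec["decimals"]:
        values += [round(lo + 0.01, 2), round(hi - 0.01, 2)]
    print(name, values)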

Cloud Testing: Issues and Challenges

Over the past few years, cloud has evolved as a buzzword across the IT industry. Testing in the cloud has greatly reduced the cost involved, especially for mobile applications. Cloud manifests itself in three forms: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). However, Software-as-a-Service and Infrastructure-as-a-Service have recently emerged as the most important software testing services.

Jerry Gao, Xiaoying Bai and Wei-Tek Tsai, in a paper titled "Cloud Testing – Issues, Challenges, Needs and Practice" published in Software Engineering: An International Journal, highlight the major issues and challenges in cloud testing.

1. On-Demand Test Environment Construction
The question that often arises is how to build a test environment for on-demand cloud testing services in a systematic or automated way. According to Gao et al., current cloud technology does not offer supporting solutions that help cloud engineers build a cost-effective cloud test environment.

2. Scalability and Performance Testing
A survey by Gao and others found that many published papers discuss performance testing and solutions; however, they only "focus on scalability evaluation metrics and frameworks for parallel and distributed systems." The current metrics, frameworks and solutions do not support features such as dynamic scalability and cost models.

3. On-Demand Testing Issues and Challenges
Software testing services in TaaS (Testing-as-a-Service) should be controlled and managed with on-demand testing requests in mind. This raises many issues and challenges, such as identifying a test process for TaaS that supports on-demand automated testing, or identifying approaches that help engineers cope with the breakdown of test cases or test scripts.

4. Regression Testing Issues and Challenges
Software changes and bug fixes bring in regression testing issues and challenges, which on-demand cloud testing services should address.