Test Manager TM helps in organizing and controlling the testing process by tracking artifacts such as test cases, defects, and execution results in one common location for multiple clients/projects, and it is accessible in a distributed environment.
Centralized Repository:
Test Manager TM maintains all the artifacts in a centralized repository, which gives a single view of the test cases and their execution summary, whereas with traditional methods this is difficult to maintain as the team grows.
We can maintain requirements and track requirement coverage by mapping test cases to requirements.
This helps in identifying and tracking uncovered or missed requirements, so we can add new test cases or update existing ones to cover them as part of testing.
We can also track review comments for the test cases during the review process.
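The traceability idea above can be sketched in a few lines of Python (the requirement and test-case IDs below are hypothetical, not taken from TM):

```python
# Minimal sketch of requirement-coverage tracking (identifiers are hypothetical).
# Each test case is mapped to the requirements it verifies; any requirement
# with no mapped test case is reported as uncovered.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

test_case_mapping = {
    "TC-101": {"REQ-1", "REQ-2"},
    "TC-102": {"REQ-2"},
    "TC-103": {"REQ-3"},
}

covered = set().union(*test_case_mapping.values())
uncovered = sorted(requirements - covered)

print("Coverage: %d/%d requirements" % (len(covered & requirements), len(requirements)))
print("Uncovered:", uncovered)  # REQ-4 has no mapped test case
```

Any uncovered requirement found this way becomes a candidate for a new or updated test case.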
Consolidated Summary & Report:
Test Manager TM helps in maintaining test cases per release/sprint and provides summary and coverage reports for the test cases.
We can easily track daily/monthly test execution and defect summaries.
It allows the user to categorize test cases by priority, component, and feature, and to select test cases for different suites such as sanity and regression.
We can assign test cases to a tester, and test results are tracked with defect IDs for failed cases.
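A minimal sketch of the categorisation and suite-selection idea, using an illustrative in-memory representation (TM's actual data model is not shown here):

```python
# Sketch of tagging test cases and selecting suites (field names are illustrative).

test_cases = [
    {"id": "TC-1", "priority": "P1", "component": "login",  "suites": {"sanity", "regression"}},
    {"id": "TC-2", "priority": "P2", "component": "search", "suites": {"regression"}},
    {"id": "TC-3", "priority": "P1", "component": "search", "suites": {"sanity"}},
]

def select_suite(cases, suite):
    """Return the IDs of all test cases tagged for the given suite."""
    return [tc["id"] for tc in cases if suite in tc["suites"]]

print(select_suite(test_cases, "sanity"))      # ['TC-1', 'TC-3']
print(select_suite(test_cases, "regression"))  # ['TC-1', 'TC-2']
```

The same filtering works for priority or component, which is what makes a single tagged repository more flexible than per-suite spreadsheets.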
The current market demands that development and testing teams work in distributed environments, often on the same features. Additionally, growing software complexity leads to an increasing number of test cases, while release cycles are shrinking to deliver products under the popular Agile methodology.
Test Manager TM is a cost-effective solution that fits both agile and traditional projects:
- Centralized repository for all test cases and test results
- Test cases are organized in a hierarchical structure and traceable
- Feature coverage can be prepared by linking requirements and test cases
- Multiple projects can be maintained
- Users have defined roles
- Reporting and test metrics
- Multiple reports and charts are supported (module/iteration/sprint-wise reports, multi-build execution reports)
- Requirements-based testing
Why Performance Testing?
The purpose of performance testing is to ensure the system meets its SLA, which is part of the non-functional requirements. A non-functional requirement describes how a system should work, as opposed to what it should do.
Performance testing is used to identify bottlenecks and tune the system to meet the SLA, and it also validates the scalability, resource usage, and reliability of the system. Performance tests are typically run in a regression manner, re-executed weekly or daily as minor changes are applied.
How to do Performance Testing?
- Identify the in-scope and out-of-scope items for the test
- Identify the type and mode of communication for the performance test, i.e. types such as initial and delta load, and modes such as services (RMI, REST, web services), DB dump, etc.
- Identify the following non-functional requirements:
  - Transaction volume for initial and delta load over a period of time (e.g. 200K/week)
  - Expected transactions per second (TPS) and number of concurrent users
  - Acceptable response time for inquiry services
  - List of frequently used scenarios and business use cases
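As a rough worked example of turning such a volume NFR into a TPS target (the 200K/week figure comes from the list above; the processing window and the 3x peak factor are assumptions):

```python
# Back-of-the-envelope TPS from a weekly volume NFR.
# Assumption: the 200K weekly load arrives during a 5-day, 8-hour processing window.

weekly_volume = 200_000          # transactions/week (from the NFR above)
window_seconds = 5 * 8 * 3600    # assumed processing window per week

average_tps = weekly_volume / window_seconds
peak_tps = average_tps * 3       # rule-of-thumb headroom: size for ~3x average

print(f"average TPS = {average_tps:.2f}, peak TPS target = {peak_tps:.2f}")
```

Adjust the window and the peak factor to your own traffic profile; the point is that the TPS target should be derived from, and traceable to, the volume NFR.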
- The Test Plan document has been reviewed
- Test Scenarios and cases are created and reviewed
- Test scripts and/or data have been created
- Required infrastructure is in place
- Functionally certified code should be deployed for the test
Data Set Selection:
- Data volume for the initial and delta loads should be selected based on the requirements (e.g. 1M records per source as the initial load and 200K-300K records as the delta load on a daily/weekly basis)
- We can also categorize the data loads by business use case, most frequently used scenarios, or feature
  - Scenario 1: party matching / a party related to multiple other parties (e.g. P1 has 1000+ relationships of type employee with other parties)
  - Scenario 2: a load with 70% updates and 30% added parties; we can also try more combinations to identify bottlenecks in various code paths
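The 70%/30% mix in Scenario 2 can be sketched as a simple load-mix generator (the operation/record representation is hypothetical):

```python
# Sketch of building a 70% update / 30% add load mix (record shape is hypothetical).
import random

random.seed(42)  # fixed seed so the mix is reproducible across test runs

def build_load_mix(total, update_ratio=0.7):
    """Return a shuffled list of ('update'|'add', record_id) operations."""
    n_updates = int(total * update_ratio)
    ops = [("update", i) for i in range(n_updates)]
    ops += [("add", i) for i in range(n_updates, total)]
    random.shuffle(ops)  # interleave adds and updates like real traffic
    return ops

mix = build_load_mix(1000)
print(sum(1 for op, _ in mix if op == "update"))  # 700
```

Varying `update_ratio` gives the "more combinations" mentioned above, so different code paths get exercised under load.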
- Prepare the data set
- Build Verification
- Sufficient warm-up
- This phase needs to be long enough to allow the workload to reach a stable steady state (a nearly flat throughput curve)
- The best way to determine a suitable warm-up period is to do a set of long runs and look for a repeatable inflection point where throughput has risen to a flat or nearly flat line
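One way to automate the inflection-point check described above is to scan throughput samples for a nearly flat trailing window (the window size and tolerance are assumptions to tune per system):

```python
# Sketch of detecting the warm-up inflection point from throughput samples.
# A window is treated as "steady" when its relative spread falls below a tolerance.

def warmup_end(samples, window=5, tolerance=0.05):
    """Index of the first sample where a trailing window of throughput is nearly flat."""
    for i in range(window, len(samples) + 1):
        win = samples[i - window:i]
        if (max(win) - min(win)) / max(win) <= tolerance:
            return i - window  # warm-up considered complete at the window start
    return None  # never reached steady state; lengthen the run

# Rising throughput that flattens out (numbers are illustrative, e.g. req/s).
throughput = [120, 250, 390, 480, 495, 500, 498, 502, 499, 501]
print(warmup_end(throughput))  # 3: steady from the fourth sample onwards
```

Samples before the detected index are then excluded from the measured results, so warm-up noise does not skew the metrics.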
- Execute actual load scenarios with typical load conditions. Start resource monitors
- Monitor and capture performance test results
- Analyse the test results and prepare test metrics
- Execute the test again as required
Tuning & Optimal Performance of the System:
- Baseline Test or Creating Benchmark for the System
- We can execute the test for more iterations, tuning different parameter values, to arrive at the optimal performance of the system
- Parameters include the number of threads, file size, SQL optimization, system (CPU and memory) utilisation, etc.
- After arriving at the optimal performance, we do a run with the same parameters to verify that the system's performance is stable
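The tuning loop above can be sketched as a parameter sweep; `run_load_test` below is a toy stand-in for a real test execution, not a real measurement:

```python
# Sketch of a parameter sweep to find the best-performing configuration.
import itertools

def run_load_test(threads, batch_size):
    # Hypothetical throughput model, NOT a real measurement: throughput grows
    # with threads, peaks at 8, then degrades as contention dominates.
    return threads * 100 - max(0, threads - 8) * 150 + batch_size * 0.1

candidates = itertools.product([4, 8, 12, 16], [100, 500])
best = max(candidates, key=lambda p: run_load_test(*p))
print("best (threads, batch_size):", best)  # (8, 500) under this toy model
```

In practice each candidate run should follow the warm-up and monitoring steps above, and the winning configuration is then re-run to confirm stability.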
Determining the Submitter/Thread value:
- Determine the submitter count if it is not given: if there are N physical CPUs available to the system, the recommended number of submitters is 2N to 3N (for example, with four physical CPUs, use 8 to 12 submitters)
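The 2N-3N rule can be expressed directly in code; note that `os.cpu_count()` returns logical CPUs, so pass the physical count explicitly when you know it:

```python
# Sketch of the 2N-3N submitter rule using the CPU count visible to the process.
import os

def recommended_submitters(physical_cpus=None):
    # Fall back to logical CPU count (may exceed physical CPUs), then to 1.
    n = physical_cpus or os.cpu_count() or 1
    return (2 * n, 3 * n)  # (lower bound, upper bound)

print(recommended_submitters(4))  # (8, 12), matching the example above
```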
Other Non-Functional Testing:
Load Testing: Load testing is a type of non-functional testing that checks system behaviour under incrementally increasing load until the system reaches its threshold. The test is conducted by increasing the number of users and iterations; it helps identify the maximum number of requests, or the load, the system can handle in a given time. This is assessed by monitoring CPU and memory utilisation, bandwidth, and response time.
Stress Testing: This is a kind of negative testing where we send more concurrent users and processes than the system can handle.
Spike Testing: This is a subset of stress testing where we send sudden peak loads to the system for a period of time and monitor the system's behaviour.
Volume Testing: This testing is used to verify the efficiency of the system by sending a huge/bulk volume of data to the system and monitoring it.
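The incremental-load idea described under load testing can be sketched with a thread pool that ramps up concurrent workers (`hit_system` is a placeholder for a real request round-trip):

```python
# Sketch of an incremental (step) load: ramp up concurrent workers and watch
# where measured throughput stops improving.
from concurrent.futures import ThreadPoolExecutor
import time

def hit_system():
    time.sleep(0.001)  # placeholder for a real request to the system under test

def throughput_at(workers, requests=200):
    """Fire `requests` calls through `workers` threads and return requests/second."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: hit_system(), range(requests)))
    return requests / (time.perf_counter() - start)

for workers in (1, 2, 4, 8):
    print(f"{workers:2d} workers -> {throughput_at(workers):8.1f} req/s")
```

In a real run the request stub is replaced by the actual workload, and CPU, memory, and response-time monitors run alongside each step, as described above.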