Keeping – Software testing lean

Top 7 challenges – Application Development

Which of the following technical challenges are you experiencing in developing applications today?

[Image: survey results – top technical challenges in application development]

 

Failure & Historical view

Reasons for Project Failure?

  • Incomplete requirements
  • Didn’t involve users
  • Changing requirements
  • Unrealistic expectations
  • Poor planning
  • Insufficient resources/schedule
  • Lack of managerial support
  • Didn’t need it any longer

What went wrong historically?

The level of quality being deployed into production needs to increase. The technical team has identified four contributing factors impacting application quality.

  • Testable Requirements: Inadequate analysis to ensure application quality, stability, efficiency, and speed – e.g. a lack of quality-relevant information in the functional and interface specs for downstream systems, missing use cases, and frequent requirement changes even while the application is in UAT. This results in i) inadequate test coverage, ii) defect slippage (SIT → UAT → Prod), and iii) increased cost of defect fixes.
  • Toll-Gating Mechanism: Inadequate rigour around entry and exit criteria in each test phase, such as i) limited control over entry criteria for SIT and UAT, ii) parallel test execution of SIT and UAT resulting in poor application quality, and iii) poor-quality SDM deliverables.
  • Communication: Lack of a proper, defined communication process, resulting in i) delayed start of test design activities, ii) improper test coverage due to uncommunicated last-minute changes, and iii) no time for test planning (which impacts test effort).
  • Test Environment: Lack of test environment stability, test data, and a fit-for-purpose set-up. This results in i) inadequate use of a controlled test environment (integrated with downstream systems), ii) incomplete test execution against the defined coverage, iii) late detection of defects in the application life cycle, and iv) a higher likelihood of issues leaking into production.

Differential Approach

What needs to be done differently?

  • Test Requirements: Re-engineer requirements specifications to infuse model-based testing and lower the chance of defect leakage into UAT and production

E.g. – Incorporate a ‘Walkthrough, Review & Approval’ process for all requirements specifications
E.g. – Control changing requirements across the life cycle
E.g. – Infuse a ‘shift left’ mechanism by crafting appropriate use-case models

  • Process: Re-engineer and redefine the current end-to-end test process and discipline.

E.g. – SIT test design should commence well in advance of test execution; a proper timeline must be defined for test design, and scenarios must be crafted from the functional specs, interface specs, and use cases
E.g. – Remove unrealistic expectations and poor planning; uplift the test methodology and test approach

  • Gating: Precision surrounding the “ETVOX – Entry, Test, Validation, Output, Exit” criteria spanning Unit >> SIT >> UAT.

E.g. – Review & approval of the process (and ownership) surrounding the gating mechanism and test exit criteria.

  • Assets:  Review existing assets, test data & test environment.

E.g. – Controlled test environment for both SIT & UAT

 

Process Improvement

[Images: process-improvement flow diagrams]

 

ETVOX

Entry, Test, Validation, Output, Exit – Criteria

Entry:

  • Overall project plan & JIRA project created
  • Test resources available
  • Requirements walkthrough given to the test team by the BA
  • Signed-off business requirements (BRDs, FS, use cases, interface documents) from the BA
  • Test scope & cases for SIT reviewed by the BA
  • Development plan in place (to track outstanding functionality)
  • Unit test results or code-coverage dashboard available
  • Test environment ready & test data available (SIT code-drop release notes available)
  • Test design (cases/scenarios) complete, with a walkthrough provided to the BA team
  • Sanity test completed
  • Any exceptions to the above entry criteria require stakeholder buy-in (via email)
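The entry criteria above amount to a checklist that must be fully satisfied, or explicitly waived with stakeholder buy-in, before SIT starts. As a minimal sketch of how such a gate could be automated: the criterion names and the waiver structure below are illustrative assumptions, not part of any specific tool.

```python
# Sketch of an automated SIT entry gate: every criterion must be met,
# or explicitly waived with recorded stakeholder buy-in (per the list above).
# Criterion names and the waiver mapping are illustrative assumptions.

ENTRY_CRITERIA = [
    "project_plan_and_jira_created",
    "test_resources_available",
    "requirements_walkthrough_done",
    "requirements_signed_off",
    "test_scope_reviewed_by_ba",
    "development_plan_in_place",
    "unit_test_results_available",
    "test_environment_ready",
    "test_design_complete",
    "sanity_test_completed",
]

def entry_gate(status: dict, waivers: dict):
    """Return (gate_open, blocking_items).

    status  maps criterion -> bool (met or not)
    waivers maps criterion -> approver (stakeholder buy-in on record)
    """
    blocking = [c for c in ENTRY_CRITERIA
                if not status.get(c, False) and c not in waivers]
    return (not blocking, blocking)

# Example: everything met except the sanity test, which has a waiver on record.
status = {c: True for c in ENTRY_CRITERIA}
status["sanity_test_completed"] = False
ok, blocking = entry_gate(status, {"sanity_test_completed": "pm@example.com"})
print(ok, blocking)  # True []
```

Recording the waiver (who approved, for which criterion) preserves the audit trail the last bullet asks for.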

Exit:

  • Completion of the agreed scope of work for SIT
  • Completion of test execution (100% as defined)
  • A pass rate of at least 90% achieved in SIT
  • Zero open Critical/High defects at the end of SIT
  • No more than x% Medium and x% Low open defects at the end of SIT
  • Fix description for each issue recorded in either JIRA or the release notes before closure
  • Any outstanding risks & issues well documented
  • Any exceptions to the above exit criteria require stakeholder buy-in (via email)
  • SIT retrospective at the end of testing
  • Key learnings documented at the end of testing
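The quantitative exit criteria (100% execution, a pass rate of at least 90%, zero open Critical/High defects, capped Medium/Low defects) lend themselves to a simple automated check. A minimal sketch, assuming the execution and defect counts have already been pulled from JIRA or the test report; the Medium/Low caps are placeholder values, since the list above leaves them as "x%".

```python
# Sketch of the SIT exit gate from the criteria above. The Medium/Low
# caps are placeholders: the source leaves the actual values as "x%".

def exit_gate(executed, total, passed,
              open_defects,              # dict: severity -> open count
              total_defects,             # dict: severity -> total raised
              medium_cap=0.10, low_cap=0.20):  # assumed placeholder caps
    failures = []
    if executed < total:
        failures.append("test execution not 100% complete")
    if executed and passed / executed < 0.90:
        failures.append("pass rate below 90%")
    if open_defects.get("critical", 0) or open_defects.get("high", 0):
        failures.append("open Critical/High defects remain")
    for sev, cap in (("medium", medium_cap), ("low", low_cap)):
        raised = total_defects.get(sev, 0)
        if raised and open_defects.get(sev, 0) / raised > cap:
            failures.append(f"too many open {sev} defects")
    return failures  # an empty list means the gate is open

print(exit_gate(200, 200, 185,
                {"critical": 0, "high": 0, "medium": 1, "low": 2},
                {"critical": 3, "high": 5, "medium": 12, "low": 20}))  # []
```

Any non-empty result is either a blocker to fix or an exception to escalate for stakeholder buy-in, per the criteria above.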

Output:

  1. Test Plan, Traceability Matrix, Test Cases/scenarios
  2. Test Results & Test Report
  3. Issues & Risk Logs
  4. Defects encountered list (OPEN, CLOSED)
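The traceability matrix (output 1 above) maps each requirement to the test cases that cover it, which makes coverage gaps visible before exit. A minimal sketch; the requirement and test-case IDs are invented for illustration.

```python
# Minimal requirements-to-test-case traceability matrix (output 1 above).
# All IDs are invented for illustration.
matrix = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                      # uncovered requirement: a coverage gap
}

uncovered = [req for req, cases in matrix.items() if not cases]
coverage = 1 - len(uncovered) / len(matrix)
print(f"coverage: {coverage:.0%}, uncovered: {uncovered}")
```

In practice the mapping would be maintained in JIRA or a test-management tool rather than by hand, but the principle is the same: every requirement should trace to at least one test case.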

 

Roadmap & Implementation

The roadmap – what can be achieved – needs to be defined based on your project's needs.

 

 
