Testing & Implementation Handbook

Testing and System Implementation
Siti Muharramah
State Transition Testing

Uses a model of the system consisting of:
- the states that exist within the program
- the transitions between those states
- the events that cause those transitions
- the actions that result from those transitions

The model is generally represented as a state transition diagram.

Test cases are designed to check the validity of transitions between states, and also to test transitions that are not included and not specified.

Example: a state transition diagram for a device's time display.

The elements of that state transition diagram are:
- a state: displaying time (S1)
- a transition: between S1 and S3
- an event that causes the transition: for example, a reset during state S1 causes a transition to S3
- an action that results from the transition: the display-time action
Test cases for valid transitions

These test cases are designed to check the valid transitions; each one specifies:
- the starting state
- the input
- the expected output
- the expected final state
Test cases for invalid transitions
- Comprehensive testing will also try to test the transitions that are not valid.
- A state transition model that explicitly shows the invalid transitions is the state table (status table), as in the sketch below.
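To make this concrete, here is a minimal sketch in C of a state table with explicit invalid transitions, loosely based on the time-display example; the state and event names are illustrative assumptions, not taken from the slides' device.

#include <assert.h>
#include <stdio.h>

typedef enum { S1_DISPLAY_TIME, S2_SET_TIME, S3_RESET } State;  /* states */
typedef enum { EV_RESET, EV_DONE, EV_SET } Event;               /* events */

#define INVALID (-1)  /* marks a transition the specification does not allow */

/* State table: rows are current states, columns are events. */
static const int table[3][3] = {
    /*            EV_RESET   EV_DONE          EV_SET       */
    /* S1 */    { S3_RESET,  INVALID,         S2_SET_TIME },
    /* S2 */    { INVALID,   S1_DISPLAY_TIME, INVALID     },
    /* S3 */    { INVALID,   S1_DISPLAY_TIME, INVALID     },
};

static int transition(State s, Event e) { return table[s][e]; }

int main(void) {
    /* Valid-transition test case: start in S1, input reset, expect S3. */
    assert(transition(S1_DISPLAY_TIME, EV_RESET) == S3_RESET);
    /* Invalid-transition test case: "done" during S1 is unspecified. */
    assert(transition(S1_DISPLAY_TIME, EV_DONE) == INVALID);
    puts("state transition tests passed");
    return 0;
}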
Use Case
- a sequence of actions performed by the system
- together, these actions produce a result needed by a user of the system
- defines the flow of a process through the system, based on its typical usage (as previously carried out, often manually)
- helps find integration errors, because it brings together different interactions or functions of the system
Use Case

Each use case has:
- preconditions that must be met for the use case to work successfully
- postconditions, which define where the use case ends
- a flow of events, which defines the user's actions and the system's responses to those actions
Use Cases and Test Cases

Use cases and test cases work well together in two ways:
- if the system's use cases are complete, accurate and clear, test cases can be derived from them directly
- if the use cases are not in good condition, deriving test cases will help to debug the use cases
Preparing for Software Testing

Software testing requires preparation before the testing is carried out. The testing process must be performed systematically, not haphazardly, because the delivered software must be free of errors in order to reduce the risk of losses suffered by its users. A software product must benefit its users when it is used.

Preparing for Software Testing: build a checklist of what will be tested
- the requirements list
- the design list
- the specification list
- the manuals, if they already exist (usually needed for testing by users)
Preparing for Software Testing: creating test cases
- a test case is the basic element to be tested; each is an independent item in the list
- test case groups: collections of several test cases; each entry will carry a test-result status

Preparing for Software Testing: creating test modules
- a test scenario consists of several test case groups
- it is associated with the functionality of a module
- it refers to the requirements document and the program design/specification

Further steps: creating the test package, creating the test product, and creating the test cases themselves.
Login Form Scenario
- example scenario: test 1
- example scenario: test 2
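As an illustration only (the real scenarios are not reproduced in this transcript), a login-form scenario might be automated like this in C; login() and its credentials are hypothetical stand-ins.

#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical unit under test: returns 1 on success, 0 on failure. */
static int login(const char *user, const char *pass) {
    return strcmp(user, "admin") == 0 && strcmp(pass, "secret") == 0;
}

int main(void) {
    assert(login("admin", "secret") == 1);  /* scenario test 1: valid credentials accepted */
    assert(login("admin", "wrong") == 0);   /* scenario test 2: wrong password rejected    */
    puts("login scenario tests passed");
    return 0;
}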
Importance of Testing in SDLC & Various Kinds of Testing
Software Development Lifecycle

All software development can be characterized as a problem-solving loop in which four distinct stages are encountered:
- Status quo: represents the current state of affairs.
- Problem definition: identifies the specific problem to be solved.
- Technical development: solves the problem through the application of some technology.
- Solution integration: delivers the results (e.g., documents, programs, data, new business function, new product) to those who requested the solution in the first place.
Waterfall Model

[Figure: System/information engineering feeding the sequence Analysis -> Design -> Code -> Test.]

The Prototyping Model

[Figure: an iterative cycle: listen to the customer; build/revise the mock-up; the customer test-drives the mock-up.]
The RAD Model

[Figure: three parallel teams (Team #1, Team #2, Team #3), each cycling through business modeling, data modeling, process modeling, application modeling, and test and turnover.]
Boehm’s Spiral Model
V-Model

[Figure: the left arm descends through Requirements Specification (SRS), System Design, Detailed Design and Coding; the right arm ascends through Unit Test (producing tested modules), Integration Test (integrated software), System Integration Test (tested software) and System Test / Acceptance Test, each test level verified against the corresponding left-arm document (code, module designs, system design, SRS, user manual).]
Importance of Software Testing in SDLC
- it helps to verify whether all the software requirements have been implemented correctly
- it identifies defects and ensures they are addressed before the software is deployed: a defect found after deployment, which must then be fixed, costs much more to correct than one fixed at an earlier stage of development
- effective testing demonstrates that the software functions appear to be working according to specification and that the behavioral and performance requirements appear to have been met
- whenever a system is developed as separate components, testing helps to verify the proper integration/interaction of each component with the rest of the system
- data collected as testing is conducted provides a good indication of software reliability, and some indication of software quality as a whole
Different Types of Testing
- dynamic v/s static testing
- development v/s independent testing
- black v/s white box testing
- behavioral v/s structural testing
- automated v/s manual testing
- sanity, acceptance and smoke testing
- regression testing
- exploratory and monkey testing
- debugging v/s bebugging
Dynamic v/s static

Static testing: testing something that is not running, by examining and reviewing it.

Dynamic testing: what you would normally think of as testing: running and using the software.

Development v/s independent testing

Development testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake, in contrast to independent testing. In most cases test execution initially occurs with the developer testing group that designed and implemented the test, but it is good practice for the developers to create their tests in such a way as to make them available to independent testing groups for execution.

Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers. You can consider this distinction a superset which includes Independent Verification & Validation. In most cases test execution initially occurs with the independent testing group that designed and implemented the test, but the independent testers should create their tests so as to make them available to the developer testing groups for execution.
Black v/s white box testing
The purpose of a black-box test is to verify the unit's specified function and observable behavior without knowledge of how the unit implements the function and behavior. Black-box tests focus and rely upon the unit's input and output.
A white-box test approach should be taken to verify a unit's internal structure. Theoretically, you should test every possible path through the code, but that is possible only in very simple units. At the very least you should exercise every decision-to-decision path (DD-path) at least once because you are then executing all statements at least once. A decision is typically an if-statement, and a DD-path is a path between two decisions.
Behavioral v/s structural testing

Behavioral testing: another name commonly given to black-box testing, since you test the behavior of the software in use, without knowing how its internal logic is implemented.

Structural testing: another name commonly used for white-box testing, in which you can see and use the underlying structure of the code to design and run your tests.

Automated v/s manual

Automated testing: software testing assisted with software tools that require no operator input, analysis, or evaluation.

Manual testing: the part of software testing that requires human input, analysis, or evaluation.

Sanity, Acceptance and Smoke testing

Sanity testing: cursory testing, performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Acceptance testing: the final test action before deploying the software. The goal of acceptance testing is to verify that the software is ready and can be used by your end users to perform those functions and tasks for which the software was built.

Smoke testing: non-exhaustive software testing which ascertains that the most crucial functions of a program work, but does not bother with finer details.

Regression testing

The selective retesting of a software system that has been modified, to ensure that any bugs have been fixed, that no previously working functions have failed as a result of the modifications, and that newly added features have not created problems with previous versions of the software.

Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.

Exploratory and monkey testing

Exploratory testing involves simultaneously learning, planning, running tests, and reporting / troubleshooting results.

Monkey testing: another name for "ad hoc testing". It comes from the joke that if you put 100 monkeys in a room with 100 typewriters, randomly punching keys, sooner or later they will type out a Shakespearean sonnet; so every time one of your ad hoc testers finds a new bug, you can toss him a banana. Monkey testing is used to simulate how your customers will use your software in real time.

Debugging v/s bebugging

Debugging: the process of finding and removing the causes of failures in software. The role is performed by a programmer.

Bebugging: the process of intentionally adding known faults to those already in a computer program, for the purpose of monitoring the rate of detection and removal and estimating the number of faults remaining in the program.
Black Box & White Box Testing Techniques
Black-Box Testing
- the program is viewed as a black box which accepts some inputs and produces some outputs
- test cases are derived solely from the specifications, without knowledge of the internal structure of the program

Functional Test-Case Design Techniques
- equivalence class partitioning
- boundary value analysis
- cause-effect graphing
- error guessing
Equivalence Class Partitioning
- partition the program input domain into equivalence classes: classes of data which, according to the specifications, are treated identically by the program
- the basis of this technique is that a test of a representative value of each class is equivalent to a test of any other value of the same class
- identify valid as well as invalid equivalence classes
- for each equivalence class, generate a test case to exercise an input representative of that class
Example

Input condition: 0 <= x <= max
- valid equivalence class: 0 <= x <= max
- invalid equivalence classes: x < 0 and x > max

This yields 3 test cases, sketched below.
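A minimal sketch of those three test cases in C, assuming an illustrative accepts() routine and max = 100:

#include <assert.h>
#include <stdio.h>

#define MAX 100
/* Illustrative unit under test: accepts x only inside the valid range. */
static int accepts(int x) { return x >= 0 && x <= MAX; }

int main(void) {
    assert(accepts(50) == 1);       /* valid class: 0 <= x <= max */
    assert(accepts(-5) == 0);       /* invalid class: x < 0       */
    assert(accepts(MAX + 5) == 0);  /* invalid class: x > max     */
    puts("equivalence class tests passed");
    return 0;
}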
Guidelines for Identifying Equivalence Classes

Input condition: a range of values (e.g. 1 - 200)
- valid classes: one (a value within the range)
- invalid classes: two (one outside each end of the range)

Input condition: a number of values, N
- valid classes: one
- invalid classes: two (none, and more than N)

Input condition: a set of input values, each handled differently by the program (e.g. A, B, C)
- valid classes: one for each value in the set
- invalid classes: one (e.g. any value not in the valid input set)

Input condition: a "must be" condition (e.g. the ID name must begin with a letter)
- valid classes: one (e.g. it is a letter)
- invalid classes: one (e.g. it is not a letter)

If you know that elements in an equivalence class are not handled identically by the program, split the equivalence class into smaller equivalence classes.
Identifying Test Cases for Equivalence Classes
- assign a unique number to each equivalence class
- until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible
- cover each invalid equivalence class with a separate test case
Boundary Value Analysis

Design test cases that exercise values lying at the boundaries of an equivalence class, and values just beyond them.

Example: input condition 0 <= x <= max
- test the values 0 and max (valid inputs)
- test the values -1 and max+1 (invalid inputs)
A sketch follows.
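The same illustrative accepts() routine can sketch the four boundary tests; the values sit exactly on and just beyond each boundary:

#include <assert.h>
#include <stdio.h>

#define MAX 100
static int accepts(int x) { return x >= 0 && x <= MAX; }  /* unit under test */

int main(void) {
    assert(accepts(0) == 1);        /* lower boundary, valid */
    assert(accepts(MAX) == 1);      /* upper boundary, valid */
    assert(accepts(-1) == 0);       /* just below, invalid   */
    assert(accepts(MAX + 1) == 0);  /* just above, invalid   */
    puts("boundary value tests passed");
    return 0;
}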
Cause-Effect Graphing

A technique that aids in selecting test cases for combinations of input conditions in a systematic way.

Cause-Effect Graphing Technique
1. Identify the causes (input conditions) and effects (output conditions) of the program under test.
2. For each effect, identify the causes that can produce that effect, and draw a cause-effect graph.
3. Generate a test case for each combination of input conditions that makes some effect true.
Example

Consider a program with the following input and output conditions:

Input conditions:
- c1: command is credit
- c2: command is debit
- c3: A/C is valid
- c4: transaction amount is valid

Output conditions:
- e1: print "invalid command"
- e2: print "invalid A/C"
- e3: print "debit amount not valid"
- e4: debit A/C
- e5: credit A/C
Example: Cause-Effect Graph

[Figure: a cause-effect graph linking the causes C1-C4 to the effects E1-E5 through "and", "or" and "not" nodes.]
Example

The decision table shows the combinations of input conditions that make each effect true (summarized from the cause-effect graph); write test cases to exercise each rule in the decision table. Reading the rows as rules over C1..C4 (where "-" means don't care):

Rule 1: C1=0, C2=0         -> E1
Rule 2: C1=1, C3=0         -> E2
Rule 3: C2=1, C3=1, C4=0   -> E3
Rule 4: C2=1, C3=1, C4=1   -> E4
Rule 5: C1=1, C3=1, C4=1   -> E5

A sketch with one test case per rule follows.
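A hedged sketch of the rules in C (the effect-to-rule mapping is reconstructed from the table, so treat the logic as an assumption), with one test case per rule:

#include <assert.h>
#include <stdio.h>

enum { E1_INVALID_CMD = 1, E2_INVALID_AC, E3_AMOUNT_NOT_VALID, E4_DEBIT_AC, E5_CREDIT_AC };

/* Inputs are the causes c1..c4 as booleans (1 = true). */
static int process(int credit, int debit, int ac_valid, int amount_valid) {
    if (!credit && !debit)      return E1_INVALID_CMD;      /* rule 1 */
    if (!ac_valid)              return E2_INVALID_AC;       /* rule 2 */
    if (debit && !amount_valid) return E3_AMOUNT_NOT_VALID; /* rule 3 */
    if (debit)                  return E4_DEBIT_AC;         /* rule 4 */
    return E5_CREDIT_AC;                                    /* rule 5 */
}

int main(void) {
    assert(process(0, 0, 1, 1) == E1_INVALID_CMD);
    assert(process(1, 0, 0, 1) == E2_INVALID_AC);
    assert(process(0, 1, 1, 0) == E3_AMOUNT_NOT_VALID);
    assert(process(0, 1, 1, 1) == E4_DEBIT_AC);
    assert(process(1, 0, 1, 1) == E5_CREDIT_AC);
    puts("one test case per decision-table rule passed");
    return 0;
}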
Error Guessing
From intuition and experience, enumerate a list of possible errors or error prone situations and then write test cases to expose those errors.
White Box Testing

White-box testing is concerned with the degree to which test cases exercise or cover the logic (source code) of the program.

White-box test case design techniques:
- statement coverage
- decision coverage
- condition coverage
- decision-condition coverage
- multiple condition coverage
- basis path testing
- loop testing
- data flow testing

White Box Test-Case Design

Statement coverage: write enough test cases to execute every statement at least once.
TER (Test Effectiveness Ratio): TER1 = statements exercised / total statements.
Example

void eval(int A, int B, int X) {
    if (A > 1 && B == 0)
        X = X / A;
    if (A == 2 || X > 1)
        X = X + 1;
}

Statement coverage test cases:
1) A = 2, B = 0, X = 3 (X can be assigned any value)
White Box Test-Case Design

Decision coverage: write test cases to exercise the true and false outcomes of every decision. TER2 = branches exercised / total branches.

Condition coverage: write test cases such that each condition in a decision takes on all possible outcomes at least once; this may not always satisfy decision coverage.
Example

void eval(int A, int B, int X) {
    if (A > 1 && B == 0)
        X = X / A;
    if (A == 2 || X > 1)
        X = X + 1;
}

[Flow graph: entry edge a; the first decision's true branch is c, its false branch b; the second decision's true branch is e, its false branch d.]

Decision coverage test cases:
1) A = 3, B = 0, X = 3 (path acd)
2) A = 2, B = 1, X = 1 (path abe)
Example

Condition coverage test cases must cover the condition outcomes:
A > 1, A <= 1, B = 0, B != 0
A = 2, A != 2, X > 1, X <= 1

Test cases:
1) A = 1, B = 0, X = 3 (path abe)
2) A = 2, B = 1, X = 1 (path abe)

These cover every condition outcome but do not satisfy decision coverage: the first decision is false in both cases.
White Box Test-Case Design

Decision-condition coverage: write test cases such that each condition in a decision takes on all possible outcomes at least once, and each decision takes on all possible outcomes at least once.

Multiple condition coverage: write test cases to exercise all possible combinations of true and false outcomes of the conditions within a decision.
Example

Decision-condition coverage test cases must cover the condition outcomes:
A > 1, A <= 1, B = 0, B != 0
A = 2, A != 2, X > 1, X <= 1
and also both outcomes of each decision:
(A > 1 and B = 0): true and false
(A = 2 or X > 1): true and false

Test cases:
1) A = 2, B = 0, X = 4 (path ace)
2) A = 1, B = 1, X = 1 (path abd)
Example
Multiple Condition coverage must cover conditions1) A >1, B =0 5) A=2, X>12) A >1, B !=0 6) A=2, X <=13) A<=1, B=0 7) A!=2, X > 14) A <=1, B!=0 8) A !=2, X<=1
Test cases:1) A = 2, B = 0, X = 4 (covers 1,5)2) A = 2, B = 1, X = 1 (covers 2,6)3) A = 1, B = 0, X = 2 (covers 3,7)4) A = 1, B = 1, X = 1 (covers 4,8)
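A small driver sketch that runs the slide's eval() (rewritten in C and returning the final X so the outcome can be checked) under the four cases:

#include <assert.h>
#include <stdio.h>

static int eval(int A, int B, int X) {
    if (A > 1 && B == 0) X = X / A;
    if (A == 2 || X > 1) X = X + 1;
    return X;
}

int main(void) {
    assert(eval(2, 0, 4) == 3);  /* covers 1, 5: X = 4/2 = 2, then +1 */
    assert(eval(2, 1, 1) == 2);  /* covers 2, 6: X unchanged, then +1 */
    assert(eval(1, 0, 2) == 3);  /* covers 3, 7: X unchanged, then +1 */
    assert(eval(1, 1, 1) == 1);  /* covers 4, 8: both decisions false */
    puts("multiple condition coverage cases passed");
    return 0;
}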
Basis Path Testing
1. Draw the control flow graph of the program from the detailed design or the code.
2. Compute the cyclomatic complexity V(G) of the flow graph using any of the formulas:
   V(G) = #edges - #nodes + 2
   V(G) = #regions in the flow graph
   V(G) = #predicates + 1
Example

[Control flow graph with 13 nodes, 17 edges, 5 predicate nodes and 6 regions R1-R6.]

V(G) = 6 regions
V(G) = #edges - #nodes + 2 = 17 - 13 + 2 = 6
V(G) = 5 predicate nodes + 1 = 6

So there are 6 linearly independent paths.
Basis Path Testing (contd)
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that force execution of each path in the basis set.

The value of the cyclomatic complexity provides an upper bound on the number of tests that must be designed to guarantee coverage of all program statements.
Loop Testing

Aims to expose bugs in loops. The fundamental loop test criteria are:
1) bypass the loop altogether
2) one pass through the loop
3) two passes through the loop before exiting
4) a typical number of passes through the loop, unless covered by some other test
A sketch follows.
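A minimal sketch of the four criteria against a simple loop; sum_first() is an illustrative unit under test:

#include <assert.h>
#include <stdio.h>

static int sum_first(int n) {  /* sum of 1..n via a loop */
    int sum = 0;
    for (int i = 1; i <= n; i++) sum += i;
    return sum;
}

int main(void) {
    assert(sum_first(0) == 0);    /* 1) bypass the loop altogether */
    assert(sum_first(1) == 1);    /* 2) one pass through the loop  */
    assert(sum_first(2) == 3);    /* 3) two passes before exiting  */
    assert(sum_first(10) == 55);  /* 4) a typical number of passes */
    puts("loop tests passed");
    return 0;
}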
Data Flow Testing

Select test paths through a program based on the definition-use (DU) chains of its variables, and write test cases to cover every DU chain at least once.
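An illustrative fragment with the DU chains of one variable annotated; a data-flow test suite would choose inputs so that every chain is exercised:

#include <stdio.h>

static int f(int a) {
    int x = a;          /* definition 1 of x */
    if (a > 0)
        x = x + 1;      /* use of definition 1, then definition 2 */
    printf("%d\n", x);  /* uses definition 2 if a > 0, else definition 1 */
    return x;
}

int main(void) {
    f(1);   /* covers the chains through definition 2 */
    f(-1);  /* covers the definition 1 -> printf chain */
    return 0;
}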
Testing in the Lifecycle

Software Testing Foundations modules: 1 Principles, 2 Lifecycle, 3 Static testing, 4 Dynamic test techniques, 5 Management, 6 Tools. This part covers module 2, Lifecycle.

Contents:
- models for testing, economics of testing
- high level test planning
- component testing
- integration testing in the small
- system testing (non-functional and functional)
- integration testing in the large
- acceptance testing
- maintenance testing
V-Model: test levels

[Figure: the right arm of the V holds the test levels Component Testing, Integration Testing in the Small, System Testing, Integration Testing in the Large and Acceptance Testing; each is based on the corresponding left-arm baseline: Code, Design Specification, System Specification, Project Specification and Business Requirements.]

V-Model: late test design

[Figure: the same V-Model, but all tests are designed only just before each test level runs: "We don't have time to design tests early."]

V-Model: early test design

[Figure: the same V-Model, with tests designed as each specification is written and run later, at the matching test level.]

Early test design:
- test design finds faults
- faults found early are cheaper to fix
- the most significant faults are found first
- faults are prevented, not built in
- no additional effort is needed; test design is simply re-scheduled
- requirement changes are triggered early, by the test design itself

Early test design helps to build quality in and stops fault multiplication.
Experience report: Phase 1

Plan: 2 months development, 2 months test.
Actual: fraught, with lots of development overtime; the system "had to go in" but didn't work.
Quality: 150 faults found in test; 50 faults in the first month of use; users not happy.

Experience report: Phase 2

Plan: 2 months development, 6 weeks test; acceptance test took a full week (versus half a day in Phase 1).
Actual: smooth, not much for development to do; delivered on time.
Quality: 50 faults found in test; 0 faults in the first month of use; happy users!

Source: Simon Barlow & Alan Veitch, Scottish Widows, Feb 96
VV&T

Verification: the process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase [BS 7925-1].

Validation: determination of the correctness of the products of software development with respect to the user needs and requirements [BS 7925-1].

Testing: the process of exercising software to verify that it satisfies specified requirements and to detect faults.
Verification, Validation and Testing
How would you test this spec?
A computer program plays chess with one user. It displays the board and the pieces on the screen. Moves are made by dragging pieces.
"Testing is expensive"

Compared to what? What is the cost of NOT testing, or of faults missed that should have been found in test?
- the cost to fix faults escalates the later the fault is found
- poor quality software costs more to use:
  - users take more time to understand what to do
  - users make more mistakes in using it
  - morale suffers, so productivity drops

Do you know what it costs your organisation?
What do software faults cost?

Have you ever accidentally destroyed a PC? Knocked it off your desk? Poured coffee into the hard disc drive? Dropped it out of a 2nd-storey window? How would you feel? How much would it cost?
Hypothetical Cost - 1

(Loaded salary cost: £50/hr)

User:
- detect (0.5 hr): £25
- report (0.5 hr): £25

Developer:
- receive & process (1 hr): £50
- assign & background (4 hrs): £200
- debug (0.5 hr): £25
- test fault fix (0.5 hr): £25
- regression test (8 hrs): £400

Running totals so far: developer £700, user £50.

Hypothetical Cost - 2

Developer, continued:
- update documentation, CM (2 hrs): £100
- update code library (1 hr): £50
- inform users (1 hr): £50
- admin (10% = 2 hrs): £100

Developer total (20 hrs): £1000; user total: £50.

Hypothetical Cost - 3

Suppose the fault affects only 5 users. The user-side costs grow:
- work takes twice as long for 1 week: £4000
- fix data (1 day): £350
- pay for the fix (3 days maintenance): £750
- regression test & sign-off (2 days): £700
- update documentation / inform (1 day): £350
- double-checking (+12% for 5 weeks): £5000
Cost of fixing faults

[Chart: the relative cost of fixing a fault rises roughly tenfold per stage: 1 at requirements, 10 at design, 100 in test, 1000 in use.]
How expensive for you?

Do your own calculation:
- calculate the cost of testing: people's time, machines, tools
- calculate the cost to fix faults found in testing
- calculate the cost to fix faults missed by testing

Estimate if no data is available: your figures will be the best your company has! (10 minutes; a toy sketch of the comparison follows.)
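A toy sketch of the comparison; every figure here is an assumption to be replaced with your own data:

#include <stdio.h>

int main(void) {
    double cost_of_testing = 20 * 8 * 50.0;  /* e.g. 20 person-days at GBP 50/hr */
    double faults_missed = 10;               /* faults testing would have caught */
    double cost_per_missed_fault = 1000.0;   /* cost to fix each one in use      */
    printf("testing: GBP %.0f, not testing: GBP %.0f\n",
           cost_of_testing, faults_missed * cost_per_missed_fault);
    return 0;
}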
Contents (Lifecycle, continued):
- models for testing, economics of testing
- high level test planning
- component testing
- integration testing in the small
- system testing (non-functional and functional)
- integration testing in the large
- acceptance testing
- maintenance testing
Before planning for a set of tests:
- set the organisational test strategy
- identify the people to be involved (sponsors, testers, QA, development, support, et al.)
- examine the requirements or functional specifications (the test basis)
- set up the test organisation and infrastructure
- define test deliverables and the reporting structure

See: Structured Testing, an introduction to TMap®, Pol & van Veenendaal, 199
High level test planning

What is the purpose of a high level test plan? Who does it communicate to? Why is it a good idea to have one?

What information should be in a high level test plan? What is your standard for the contents of a test plan? Have you ever forgotten something important? What is not included in a test plan?
Test Plan 1

1 Test plan identifier
2 Introduction
  - software items and features to be tested
  - references to project authorisation, project plan, QA plan, CM plan, relevant policies & standards
3 Test items
  - test items including version/revision level
  - how transmitted (net, disc, CD, etc.)
  - references to software documentation

Source: ANSI/IEEE Std 829-1998, Test Documentation

Test Plan 2

4 Features to be tested
  - identify test design specification / techniques
5 Features not to be tested
  - reasons for exclusion

Test Plan 3

6 Approach
  - activities, techniques and tools
  - detailed enough to estimate
  - specify the degree of comprehensiveness (e.g. coverage) and other completion criteria (e.g. faults)
  - identify constraints (environment, staff, deadlines)
7 Item pass/fail criteria
8 Suspension and resumption criteria
  - for all or parts of the testing activities
  - which activities must be repeated on resumption

Test Plan 4

9 Test deliverables
  - test plan
  - test design specification
  - test case specification
  - test procedure specification
  - test item transmittal reports
  - test logs
  - test incident reports
  - test summary reports

Test Plan 5

10 Testing tasks
  - including inter-task dependencies & special skills
11 Environment
  - physical, hardware, software, tools
  - mode of usage, security, office space
12 Responsibilities
  - to manage, design, prepare, execute, witness, check, resolve issues, provide the environment, provide the software to test

Test Plan 6

13 Staffing and training needs
14 Schedule
  - test milestones in the project schedule
  - item transmittal milestones
  - additional test milestones (environment ready)
  - what resources are needed when
15 Risks and contingencies
  - a contingency plan for each identified risk
16 Approvals
  - names and when approved
Component testing
- the lowest level, tested in isolation
- the most thorough look at detail: error handling, interfaces
- usually done by the programmer
- also known as unit, module or program testing
Component test strategy 1
- specify the test design techniques and the rationale (from Section 3 of the standard*)
- specify the criteria for test completion and the rationale (from Section 4 of the standard)
- document the degree of independence of test design: the component author, another person, a person from a different section, a person from a different organisation, or non-human

*Source: BS 7925-2, Software Component Testing Standard

Component test strategy 2
- component integration and environment: isolation, top-down, bottom-up, or a mixture; hardware and software
- document the test process and activities, including the inputs and outputs of each activity
- affected activities are repeated after any fault fixes or changes
- the project component test plan records dependencies between component tests
Component Test Document Hierarchy

[Figure: the Component Test Strategy and the Project Component Test Plan sit above the per-component documents: Component Test Plan, Component Test Specification and Component Test Report.]

Source: BS 7925-2, Software Component Testing Standard, Annex A
Component test process

[Figure: BEGIN -> Component Test Planning -> Component Test Specification -> Component Test Execution -> Component Test Recording -> Checking for Component Test Completion -> END, looping back when the completion criteria are not met.]
Component test planning covers:
- how the test strategy and the project test plan apply to the component under test
- any exceptions to the strategy
- all software the component will interact with (e.g. stubs and drivers)

Component test specification:
- test cases are designed using the test case design techniques specified in the test plan (Section 3)
- each test case specifies: the objective, the initial state of the component, the input, and the expected outcome
- test cases should be repeatable
A sketch of such a test-case record follows.
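An illustrative C structure holding the four elements of a repeatable test case; the field names and types are assumptions, not taken from the standard:

#include <stdio.h>

typedef struct {
    const char *objective;   /* what the test case demonstrates        */
    int initial_state;       /* initial state of the component         */
    int input;               /* input supplied to the component        */
    int expected_outcome;    /* outcome predicted by the specification */
} ComponentTestCase;

int main(void) {
    ComponentTestCase tc = { "reset returns the display to time mode", 1, 0, 3 };
    printf("objective: %s\n", tc.objective);
    return 0;
}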
Component test execution:
- each test case is executed
- the standard does not specify whether execution is manual or uses a test execution tool

Component test recording:
- record the identities and versions of the component and the test specification
- record the actual outcome and compare it to the expected outcome
- log any discrepancies
- repeat test activities to establish removal of the discrepancy (a fault in the test, or verification of the fix)
- record the coverage levels achieved for the test completion criteria specified in the test plan
- the records must be sufficient to show that the test activities were carried out

Checking for component test completion:
- check the test records against the specified test completion criteria
- if the criteria are not met, repeat the test activities
- it may be necessary to return to test specification to design test cases that meet the completion criteria (e.g. white box)
Test design techniques

"Black box":
- equivalence partitioning
- boundary value analysis
- state transition testing
- cause-effect graphing
- syntax testing
- random testing
- (plus how to specify other techniques)

"White box" (each can also serve as a coverage measurement technique):
- statement testing
- branch / decision testing
- data flow testing
- branch condition testing
- branch condition combination testing
- modified condition decision testing
- LCSAJ testing
Integration testing in the small
- more than one (tested) component
- communication between components
- what the set can perform that is not possible individually
- non-functional aspects if possible
- integration strategy: big-bang vs incremental (top-down, bottom-up, functional)
- done by designers, analysts, or independent testers
Big-Bang Integration

In theory: if we have already tested the components, why not just combine them all at once? Wouldn't this save time? (This is based on the false assumption of no faults.)

In practice: it takes longer to locate and fix faults, and re-testing after fixes is more extensive; the end result is that it takes more time.
Incremental Integration

Baseline 0: one tested component; baseline 1: two components; baseline 2: three components; and so on.

Advantages:
- easier fault location and fixing
- easier recovery from disasters / problems
- interfaces should have been tested in component tests, but each addition to the tested baseline exercises them again
Top-Down Integration

[Figure: a component tree with a at the top; b and c below it; d, e, f, g next; h, i, j, k, l, m below them; and n, o at the lowest level.]

Baselines: baseline 0: component a; baseline 1: a + b; baseline 2: a + b + c; baseline 3: a + b + c + d; etc.

Calls to lower-level components that are not yet integrated need stubs, which simulate the missing components.
Stubs

A stub replaces a called component for integration testing. Keep it simple; a stub may:
- print/display its name ("I have been called")
- reply to the calling module (a single value)
- compute a reply (a variety of values)
- prompt for a reply from the tester
- search a list of replies
- provide a timing delay
A sketch follows.
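A sketch of the simplest kind of stub in C; get_rate() is a hypothetical lower-level component that has not yet been integrated:

#include <stdio.h>

/* Stub standing in for the real rate-lookup component. */
static double get_rate(const char *currency) {
    printf("stub get_rate called for %s\n", currency);  /* announce the call */
    return 1.0;  /* single fixed reply; a fancier stub could prompt the tester */
}

int main(void) {
    /* The component under test calls get_rate() as if it were real. */
    printf("converted: %.2f\n", 100.0 * get_rate("EUR"));
    return 0;
}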
Pros & cons of the top-down approach

Advantages:
- the critical control structure is tested first and most often
- can demonstrate the system early (show working menus)

Disadvantages:
- needs stubs
- detail is left until last
- it may be difficult to "see" detailed output (but this should have been tested in component test)
- the system may look more finished than it is
Bottom-Up Integration

[Figure: the same component tree, integrated from the leaves upwards.]

Baselines: baseline 0: component n; baseline 1: n + i; baseline 2: n + i + o; baseline 3: n + i + o + d; etc.

Drivers are needed to call the baseline configuration, and stubs are still needed for some baselines.
Drivers

A driver (test harness, scaffolding) is either specially written or general purpose (commercial tools). It:
- invokes the baseline
- sends any data the baseline expects
- receives any data the baseline produces (and prints it)

Each baseline has different requirements of the test driving software. A sketch follows.
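A driver sketch in C: it invokes the baseline, feeds it the data it expects, and prints what comes back; lowest_level() is a hypothetical baseline entry point:

#include <stdio.h>

static int lowest_level(int input) { return input * 2; }  /* baseline under test */

int main(void) {  /* the driver */
    int inputs[] = { 0, 1, 42 };
    for (int i = 0; i < 3; i++)
        printf("lowest_level(%d) = %d\n", inputs[i], lowest_level(inputs[i]));
    return 0;
}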
Pros & cons of the bottom-up approach

Advantages:
- the lowest levels are tested first and most thoroughly (but they should already have been tested in unit testing)
- good for testing interfaces to the external environment (hardware, network)
- visibility of detail

Disadvantages:
- no working system until the last baseline
- needs both drivers and stubs
- major control problems are found last
Minimum Capability Integration (also called Functional)

[Figure: the same component tree, integrated top-down along one functional path at a time.]

Baselines: baseline 0: component a; baseline 1: a + b; baseline 2: a + b + d; baseline 3: a + b + d + i; etc.

Needs stubs; shouldn't need drivers (if top-down).

Pros & cons of Minimum Capability

Advantages:
- the control level is tested first and most often
- visibility of detail
- a real working partial system arrives earliest

Disadvantages:
- needs stubs
Thread Integration (also called functional)

The order of processing some event (e.g. an interrupt or a user transaction) determines the integration order: minimum capability in time.

Advantages:
- critical processing is integrated and tested first
- early warning of performance problems

Disadvantages:
- may need complex drivers and stubs
Integration Guidelines
- minimise the support software needed
- integrate each component only once
- each baseline should produce an easily verifiable result
- integrate small numbers of components at once: one at a time for critical or fault-prone components; combine simple related components

Integration Planning
- integration should be planned during the architectural design phase
- the integration order then determines the build order: components must be completed in time for their baseline
- component development and integration testing can be done in parallel, which saves time
System testing
- the last integration step
- functional: requirements-based testing against the functional requirements, and business process-based testing
- non-functional: as important as the functional requirements, often poorly specified, and it must be tested
- often done by an independent test group
Functional system testing

Functional requirement: a requirement that specifies a function that a system or system component must perform (ANSI/IEEE Std 729-1983, Software Engineering Terminology).

Functional specification: the document that describes in detail the characteristics of the product with regard to its intended capability (BS 4778 Part 2, BS 7925-1).

Requirements-based testing

Uses the specification of requirements as the basis for identifying tests:
- the table of contents of the requirements specification provides an initial inventory of test conditions
- for each section / paragraph / topic / functional area, use risk analysis to identify the most important / critical areas and decide how deeply to test each functional area

Business process-based testing
- expected user profiles: what will be used most often? what is critical to the business?
- business scenarios: typical business transactions (birth to death)
- use cases: prepared cases based on real situations
Non-functional system testing

Different types of non-functional system tests:
- usability
- security
- documentation
- storage
- volume
- configuration / installation
- reliability / qualities
- back-up / recovery
- performance, load, stress
Performance Tests
- timing tests: response and service times; database back-up times
- capacity & volume tests: maximum amount or processing rate; number of records on the system; graceful degradation
- endurance tests (24-hr operation?): robustness of the system; memory allocation

Multi-User Tests
- concurrency tests: small numbers, large benefits; detect record-locking problems
- load tests: the measurement of system behaviour under a realistic multi-user load
- stress tests: go beyond the limits of the system and learn what will happen; of particular relevance for e-commerce

Source: Sue Atkins, Magic Performance Management
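A minimal response-time measurement sketch; operation() is a placeholder for the service being timed, and a real load test would of course run many concurrent users:

#include <stdio.h>
#include <time.h>

static void operation(void) {  /* stand-in for the real request */
    for (volatile long i = 0; i < 10000000L; i++) ;
}

int main(void) {
    clock_t start = clock();
    operation();
    printf("response time: %.3f s\n", (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}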
Who should design / perform these tests?

Usability Tests
- are messages tailored to and meaningful for (real) users?
- is the interface coherent and consistent?
- is there sufficient redundancy of critical information?
- is it within the "human envelope"? (7±2 choices)
- is there feedback (wait messages)?
- are the mappings clear (how to escape)?

Security Tests
- passwords
- encryption
- hardware permission devices
- levels of access to information
- authorisation
- covert channels
- physical security

Configuration and Installation

Configuration tests:
- different hardware or software environments
- configuration of the system itself
- upgrade paths, which may conflict

Installation tests:
- distribution (CD, network, etc.) and timings
- physical aspects: electromagnetic fields, heat, humidity, motion, chemicals, power supplies
- uninstall (removing the installation)

Reliability / Qualities

Reliability:
- "the system will be reliable": how do you test that?
- "2 failures per year over ten years"
- Mean Time Between Failures (MTBF)
- reliability growth models

Other qualities: maintainability, portability, adaptability, etc.

Back-up and Recovery

Back-ups:
- computer functions
- manual procedures (where are the tapes stored?)

Recovery:
- the real test of a back-up
- unfamiliar manual procedures should be regularly rehearsed
- documentation should be detailed, clear and thorough

Documentation Testing

Documentation review:
- check for accuracy against other documents
- gain consensus about content
- documentation exists, in the right format

Documentation tests:
- is it usable? does it work?
- user manual
- maintenance documentation
Integration testing in the large

Tests the completed system working in conjunction with other systems, e.g.:
- LAN / WAN, communications middleware
- other internal systems (billing, stock, personnel, overnight batch, branch offices, other countries)
- external systems (stock exchange, news, suppliers)
- intranet, internet / www
- 3rd party packages
- electronic data interchange (EDI)

Approach

Identify risks: which areas, if missing or malfunctioning, would be most critical? Test them first.

"Divide and conquer":
- test the outside first (at the interface to your system, e.g. test a package on its own)
- test the connections one at a time first (your system and one other)
- combine incrementally: safer than "big bang" (non-incremental)

Planning considerations
- resources: identify the resources that will be needed (e.g. networks)
- co-operation: plan co-operation with other organisations (e.g. suppliers, the technical support team)
- development plan: the integration (in the large) test plan could influence the development plan (e.g. conversion software may be needed early on to exchange data formats)
User acceptance testing

The final stage of validation:
- the customer (user) should perform it or be closely involved
- the customer can perform any test they wish, usually based on their business processes
- final user sign-off

Approach: a mixture of scripted and unscripted testing; the "Model Office" concept is sometimes used.

Why customer / user involvement?

Users know:
- what really happens in business situations
- the complexity of business relationships
- how users would do their work using the system
- variants to standard tasks (e.g. country-specific)
- examples of real cases
- how to identify sensible work-arounds

Benefit: a detailed understanding of the new system.

User Acceptance testing

[Chart: 20% of the function is implemented by 80% of the code and 80% of the function by 20% of the code; system testing is distributed over the code, acceptance testing over the function.]
Contract acceptance testing

A contract to supply a software system:
- agreed at the contract definition stage
- acceptance criteria defined and agreed
- may not have been kept up to date with changes

Contract acceptance testing is against the contract and any documented, agreed changes: not what the users wish they had asked for; this system, not the wish system.

Alpha and Beta tests: similarities
- testing by (potential) customers or representatives of your market; not suitable for bespoke software
- performed when the software is stable
- use the product in a realistic way in its operational environment
- give comments back on the product: faults found, how the product meets expectations, improvement / enhancement suggestions

Alpha and Beta tests: differences
- Alpha testing: simulated or actual operational testing at an in-house site not otherwise involved with the software developers (i.e. the developers' site).
- Beta testing: operational testing at a site not otherwise involved with the software developers (i.e. the testers' site, their own location).

Acceptance testing motto

If you don't have the patience to test the system, the system will surely test your patience.
Maintenance testing

Testing to preserve quality, with a different sequence:
- development testing is executed bottom-up; maintenance testing is executed top-down, with different test data (a live profile)
- breadth tests to establish overall confidence
- depth tests to investigate changes and critical areas
- predominantly regression testing

What to test in maintenance testing

Test any new or changed code.

Impact analysis:
- what could this change have an impact on?
- how important is a fault in the impacted area?
- test what has been affected, but how much? the most important affected areas? the areas most likely to be affected? the whole system? The answer: "it depends".

Poor or missing specifications
- consider what the system should do: talk with users
- document your assumptions: ensure other people have the opportunity to review them
- improve the current situation: document what you do know and find out
- track the cost of working with poor specifications, to make the business case for better specifications

What should the system do?

Alternatives:
- the way the system works now must be right (except for the specific change): use the existing system as the baseline for regression tests
- look in user manuals or guides (if they exist)
- ask the experts: the current users

Without a specification you cannot really test, only explore. You can validate, but not verify.
Implementation Activities
- selecting and training personnel
- selecting the site and installing the hardware and software
- programming and program testing
- system testing
- system conversion

Selecting and Training Personnel

Personnel can come from two sources:
- employees already working in the company
- candidates from outside the company

The personnel involved cover:
- data input/output tasks
- operations tasks
- programming tasks
- systems analysis tasks
Selecting and Training Personnel

Employee training means learning skills or knowledge that the trainee may not yet understand or master. The personnel at this stage are the system's users, so the emphasis is on how the system works, what can be obtained from it, and how to operate it.

Employee Training

Approaches to training:
- lectures/seminars
- procedural training
- tutorial training
- simulation
- hands-on, on-the-job training
Site Selection and Hardware/Software Installation

This stage includes designing the schedule for the training and education. The schedule covers:
- who will be trained
- the training materials
- the dates
- the approach to be used
- who the instructors are
Programming and Program Testing
- Structured programming: organising and writing program code so that the code is easy to understand and to modify.
- Program testing: each module is tested on its own, followed by testing of all the modules assembled together.
Programming and Program Testing

Errors that may occur while building a program:
- language errors (grammatical/syntax errors)
- run-time errors
- logic errors
Illustrative fragments of each kind follow.
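Illustrative C fragments for the three kinds of error; the buggy lines are kept in comments so the file still compiles:

#include <stdio.h>

int main(void) {
    /* Language/syntax error, caught by the compiler:
       printf("hello")   <- missing semicolon */

    /* Run-time error, only visible during execution: */
    int divisor = 1;            /* if this were 0, x / divisor would fail */
    int x = 10 / divisor;

    /* Logic error: the program runs but gives the wrong answer: */
    int average = (4 + 6) / 3;  /* should divide by 2, not 3 */
    printf("%d %d\n", x, average);
    return 0;
}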
System Testing

Usually performed after program testing, system testing checks the cohesion between the implemented system components. Its main goal is to make sure that the elements or components of the system function as expected.
System Conversion

Types of system conversion:
- Direct conversion: the old system is replaced by the new system at once.
- Parallel conversion: the new system is operated alongside the old system for a certain period.
- Pilot conversion: used when several similar systems are to be deployed in several separate areas.
- Phased conversion: the different modules of the system are deployed one after another.

Steps in system conversion:
1. Convert the source documents.
2. Convert the files:
   - from the old computer's files to the new computer
   - from manually recorded data to computer files
3. Operate the system.