Syllabus: Unit-5
The Golden Rules, User Interface Analysis and Design, Interface Analysis, Interface Design Steps, Web App Interface Design, Design Evaluation, Elements of Software Quality Assurance, SQA Tasks, Goals & Metrics, Statistical SQA, Software Reliability, A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object-Oriented Software, Test Strategies for Web Apps, Validation Testing, System Testing, The Art of Debugging, Software Testing Fundamentals, Internal and External Views of Testing, White-Box Testing, Basis Path Testing
Text Book-I: Chapters 11, 16, 17, 18.
Topics:
5.1 The Golden Rules
5.2 User Interface Analysis and Design
5.3 Interface Analysis
5.4 Interface Design Steps
5.5 Web App Interface Design
5.6 Design Evaluation
5.7 Elements of Software Quality Assurance
5.8 SQA Tasks, Goals & Metrics
5.9 Statistical SQA
5.10 Software Reliability
5.11 A Strategic Approach to Software Testing
5.12 Strategic Issues
5.13 Test Strategies for Conventional Software
5.14 Test Strategies for Object-Oriented Software
5.15 Test Strategies for Web Apps
5.16 Validation Testing
5.17 System Testing
5.18 The Art of Debugging
5.19 Software Testing Fundamentals
5.20 Internal and External Views of Testing
5.21 White-Box Testing
5.22 Basis Path Testing
5.1 The Golden Rules
1. Place the user in control.
2. Reduce the user’s memory load.
3. Make the interface consistent.
Place the user in control
· Define interaction modes that do not force the user into unnecessary or undesired actions
· Provide for flexible interaction
· Allow user interaction to be interruptible and undoable
· Allow the interaction to be customized
· Streamline interaction as skill levels advance from basic to advanced
· Hide technical internals from the casual user
· Design for direct interaction with objects that appear on the screen
Reduce the user’s memory load
· Reduce demand on short-term memory
· Establish meaningful defaults
· Define shortcuts that are intuitive
· Base the visual layout of the interface on a real-world metaphor
· Disclose information in a progressive fashion.
Make the interface consistent
· Allow the user to put the current task into a meaningful context
· Maintain consistency across a family of applications.
· If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.
5.2 User Interface Analysis and Design
Models
· A human engineer (or the software engineer) establishes a user model
· The software engineer creates the design model
· The end user develops a mental image of the interface (the system perception)
· The implementers of the system create the implementation model (the system image)
Types of users
· Novices: no syntactic knowledge of the system and little semantic knowledge of the application
· Knowledgeable, intermittent users: reasonable semantic knowledge of the application, but relatively low recall of syntactic information
· Knowledgeable, frequent users: good syntactic and semantic knowledge; “power users” who look for shortcuts
Process
Spiral model:
· Focus on profile of users
· Define set of interface objects and actions
· Construct a prototype based on usage scenarios
· Interface Validation:
o The ability of the interface to implement every user task correctly
o Degree to which interface is easy to use
o User’s acceptance level
5.3 Interface Analysis
It consists of the following four activities:
1. User Analysis
2. Task Analysis and Modeling
3. Analysis of Display Content
4. Analysis of the Work Environment
User Analysis
* User interviews: one-to-one meetings, group discussions, etc.
* Sales input: Sales people meet with users on a regular basis and can gather information that will help the software team to categorize users and better understand their requirements.
* Marketing input: Market analysis can be invaluable in the definition of market segments and an understanding of how each segment might use the software in subtly different ways.
Ex: MS Office in School, College and DTP-Centre have different complexities in usage.
Task Analysis and Modeling
· User tasks?
· Tasks and Sub-Tasks?
· Sequence of tasks?
· Hierarchy of tasks?
· Problem Domain?
Analysis of Display Content
· Type of content like photos, videos, graphics, text, ….
· User customization
· Partitioning of large report like content
· Error and Warning messages
· Scaling of graphical data
Analysis of the Work Environment
· Physical work environment:
o Is it well lit?
o Where is the display placed, and is it within easy reach (e.g., in a cockpit)?
o Is a keyboard available?
o Is a mouse available?
· Work culture
o Time per transaction
o Accuracy
· Information sharing
o Two or more people accessing same file
5.4 Interface Design Steps
1. Applying interface design steps: interface objects and actions are identified from use cases; each object is categorized as a source, target, or application object.
2. User interface design patterns: a pattern describes a proven solution to a specific, well-bounded design problem.
3. Design issues: response time, help facilities, error handling, and menu/command labeling.
4. Response time: a response time that is too long increases user stress and leads to frustration.
5. Help facilities: provide a help command with brief (one- or two-line) suggestions.
6. Error handling: error and warning messages that are misleading or unnecessarily severe increase user frustration.
7. Menu and command labeling: generate proper, consistently labeled menus and submenus.
8. Application accessibility: accommodate users with visual, hearing, mobility, speech, and learning impairments.
9. Internationalization: follow guidelines for representing symbols, e.g., by using the Unicode standard.
5.5 Web App Interface Design
The interface should help the user answer three questions:
* Where am I? The interface should provide the indication of the “content in the hierarchy”.
* What can I do now? Functions available
* Where have I been, where am I going? Provide map-like guidance so the user can easily understand the WebApp’s content organization.
Principles and Guidelines:
1.Anticipation: anticipates the user’s next move.
2. Communication: interface should communicate status of any activity initiated by the user.
3. Consistency: The use of navigation controls, menus, icons, and aesthetics consistent
4. Controlled autonomy: facilitate user movement throughout the WebApp, but not force
5. Efficiency: the interface should optimize the user’s work efficiency, not the developer’s view of it
6. Flexibility: flexible to enable users to accomplish tasks directly / explore WebApp.
7. Focus: focused on the user task(s) at hand.
8. Fitts’s law: the time to acquire a target is a function of the distance to and size of the target (a worked example follows this list)
9. Human interface objects: a vast library of reusable human interface objects has been developed; use it
10. Latency reduction: multitasking in a way that lets the user proceed with work
11. Learnability: A WebApp interface should be designed to minimize learning time
12. Metaphors: interaction metaphor is easier to learn, appropriate for application and user.
13.Maintain work product integrity: A work product must be automatically saved
14.Readability: information presented should be readable by young and old.
15.Track state: the state of the user interaction should be tracked, stored and retrieved (like Sleep, and Resume service in Computer)
16.Visible navigation: A well-designed WebApp interface provides “the illusion that users are in the same place, with the work brought to them”.
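Illustrative note on Fitts’s law (the constants and distances below are assumed for illustration, not taken from the text): one common formulation is T = a + b × log2(1 + D/W), where D is the distance to the target, W is its width, and a, b are empirically determined constants. For example, with a = 0.1 s, b = 0.15 s, D = 480 pixels, and W = 32 pixels, T = 0.1 + 0.15 × log2(16) = 0.7 s; doubling the target width to 64 pixels reduces T to roughly 0.56 s, which is why large, nearby targets are faster to acquire.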
5.6 Design Evaluation
User Interface evaluation cycle takes the form shown below:
Fig: Design Evaluation cycle
* The prototype is evaluated by the user, who provides you with direct comments about the efficacy of the interface.
* Ex: if formal evaluation techniques are used (e.g., questionnaires, rating sheets), you can extract information from these data (e.g., 80 percent of all users did not like the mechanism)
* A number of evaluation criteria can be applied during early design reviews:
# The length and complexity of the requirements model
# The number of user tasks specified and the average number of actions per task
# Memory load on users of the system (the number of actions, tasks, and system states indicated by the design model)
# Interface style, help facilities, and error handling protocol
* To collect qualitative data, questionnaires can be distributed to users of the prototype. Questions can be:
(1) simple yes/no response,
(2) numeric response,
(3) scaled (subjective) response,
(4) Likert scales (e.g., strongly agree, somewhat agree),
(5) percentage (subjective) response, or
(6) open-ended.
5.7 Elements of Software Quality Assurance
Standards: the IEEE, ISO, and other standards organizations have produced a broad array of software engineering standards; SQA ensures that the standards adopted are followed
Reviews and audits: Technical reviews are a quality control activity performed by software engineers for software engineers
Testing: one primary goal = to find errors
Error/defect collection and analysis: SQA collects and analyzes error and defect data to better understand how errors are introduced
Change management: If it is not properly managed, change can lead to confusion, and confusion almost always leads to poor quality.
Education: a key contributor to improvement is the education of software engineers, their managers, and other stakeholders
Vendor management: Three categories of software are acquired from external software vendors.
· shrink-wrapped packages (e.g., Microsoft Office)
· a tailored shell that provides a basic skeletal structure that is custom tailored to the needs of a purchaser
· contracted software that is custom designed and constructed from specifications provided by the customer organization.
Security management: with new government regulations regarding privacy, every software organization should institute policies that protect data at all levels, establish firewall protection for WebApps, and ensure that software has not been tampered with internally
Safety: impact of hidden defects can be catastrophic
Risk management: SQA ensures that risk management activities are properly conducted and that risk-related contingency plans have been established
5.8 SQA Tasks, Goals & Metrics
SQA Tasks
Prepares an SQA plan for a project.
Participates in the development of the project’s software process description.
Reviews software engineering activities to verify compliance with the defined software process.
Audits designated software work products to verify compliance with those defined as part of the software process.
Records any noncompliance and reports to senior management.
Goals & Metrics
Requirements quality: correctness, completeness, and consistency of the requirements model
Design quality: Every element of the design model should be assessed by the software team to ensure that it exhibits high quality
Code quality: Source code and related work products (e.g., other descriptive information) must conform to local coding standards and exhibit characteristics that will facilitate maintainability.
Quality control effectiveness: apply limited resources in a way that has the highest likelihood of achieving a high-quality result
5.9 Statistical SQA
* Collect and categorize the information about software errors and defects
* An attempt is made to trace each error and defect to its underlying cause
* Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the “vital few”); a small tallying sketch follows the category list below
* Once the vital few causes have been identified, move to correct the problems that have caused the errors and defects
Generic Example
• Incomplete or erroneous specifications (IES)
• Misinterpretation of customer communication (MCC)
• Intentional deviation from specifications (IDS)
• Violation of programming standards (VPS)
• Error in data representation (EDR)
• Inconsistent component interface (ICI)
• Error in design logic (EDL)
• Incomplete or erroneous testing (IET)
• Inaccurate or incomplete documentation (IID)
• Error in programming language translation of design (PLT)
• Ambiguous or inconsistent human/computer interface (HCI)
• Miscellaneous (MIS)
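A minimal sketch (illustrative only; the defect counts below are assumed, not taken from the text) of how errors and defects tagged with the categories above can be tallied to isolate the vital few causes:

# Illustrative Pareto analysis of defect causes (all counts are assumed).
from collections import Counter

defect_log = (["IES"] * 25 + ["MCC"] * 19 + ["EDR"] * 13 + ["EDL"] * 9 +
              ["VPS"] * 6 + ["IET"] * 5 + ["IID"] * 4 + ["ICI"] * 4 +
              ["IDS"] * 3 + ["PLT"] * 3 + ["HCI"] * 2 + ["MIS"] * 2)

counts = Counter(defect_log)
total = sum(counts.values())

# Rank causes by frequency and stop once about 80 percent of all defects
# are accounted for; these are the "vital few" to correct first.
cumulative = 0
for cause, n in counts.most_common():
    cumulative += n
    print(f"{cause}: {n:3d} defects ({100 * cumulative / total:5.1f}% cumulative)")
    if cumulative / total >= 0.80:
        break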
Six Sigma for Software Engineering
The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well defined methods of customer communication.
• Measure the existing process and its output to determine current quality performance (collect defect metrics).
• Analyze defect metrics and determine the vital few causes.
Two additional steps are also suggested:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.
5.10 Software Reliability
* Measures of Reliability and Availability:
# A simple measure of reliability is mean time-between-failure (MTBF):
MTBF= MTTF + MTTR
where the acronyms MTTF and MTTR are mean-time-to-failure and mean-time-to-repair, respectively
# Software availability is the probability that a program is operating according to requirements at a given point in time and is defined as: Availability = [MTTF / (MTTF + MTTR)] × 100%
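# Illustrative calculation (the figures are assumed, not from the text): if MTTF = 68 hours and MTTR = 2 hours, then MTBF = 68 + 2 = 70 hours and Availability = [68 / (68 + 2)] × 100% ≈ 97.1%. MTBF-based measures are more useful than simple defect counts (defects/KLOC) because they relate directly to failures as the end user experiences them.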
* Software Safety:
# Software safety = software quality assurance activity = focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail
# Ex: hazards associated with a computer-based cruise control for an automobile might be that it:
(1) causes uncontrolled acceleration that cannot be stopped
(2) does not respond to depression of brake pedal (by turning off)
(3) does not engage when switch is activated
(4) slowly loses or gains speed
5.11 A Strategic Approach to Software Testing
Verification and Validation
Verification refers to the set of tasks that ensure that software correctly implements a specific function.
Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.
i.e.,
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Organizing for Software Testing
* The software developer is always responsible for testing the individual units (components) of the program, ensuring that each performs the function or exhibits the behavior
* Misconceptions:
# that the developer of software should do no testing at all
# that the software should be “tossed over the wall” to strangers who will test it mercilessly
# that testers get involved with the project only when the testing steps are about to begin
Software Testing Strategy—The Big Picture
Fig: Testing Strategy
Fig: Software Testing Steps
i.e.,
Coding -->Unit Testing --> Integration Testing --> Validation Testing --> System Testing -->…..
Criteria for Completion of Testing
One response to the question is: “You’re never done testing; the burden simply shifts from you (the software engineer) to the end user.”
OR
“You’re done testing when you run out of time or you run out of money.”
A more useful criterion is statistical:
“a series of tests derived from a statistical sample of all possible program executions by all users from a targeted population.”
5.12 Strategic Issues
* Specify product requirements in a quantifiable manner long before testing commences: a good testing strategy also assesses other quality characteristics such as portability, maintainability, and usability
* State testing objectives explicitly: testing should be stated in measurable terms
* Understand the users of the software and develop a profile for each user category: Use cases can reduce overall testing effort by focusing testing on actual use of the product
* Develop a testing plan that emphasizes “rapid cycle testing”: to control quality levels and the corresponding test strategies
* Build “robust” software that is designed to test itself: capable of diagnosing certain classes of errors, automated testing and regression testing.
* Use effective technical reviews as a filter prior to testing: reviews can reduce the amount of testing effort
* Conduct technical reviews to assess the test strategy and test cases themselves: Technical reviews can uncover inconsistencies, omissions, and outright errors, saves time.
* Develop a continuous improvement approach for the testing process: metrics collected during testing should be used as part of a statistical process control approach for software testing
5.13 Test Strategies for Conventional Software
Unit Testing:
focuses verification effort on the smallest unit of software design (the component or module)
Unit-test considerations: data flow across the component interface, selective testing of execution paths, boundary testing, and antibugging (error-handling paths)
Unit-test procedures: unit testing is normally performed with a driver and with stubs standing in for modules not yet built; a minimal sketch follows
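A minimal unit-test sketch (the classify_temperature function and its limits are hypothetical, not from the text), illustrating boundary testing of a single component in isolation; the test class acts as the driver:

import unittest

def classify_temperature(celsius):
    """Hypothetical component under test: classify a sensor reading."""
    if celsius < -40 or celsius > 125:
        raise ValueError("reading outside sensor range")
    return "alarm" if celsius >= 100 else "normal"

class ClassifyTemperatureTest(unittest.TestCase):
    def test_boundary_values(self):
        # Exercise the component at and around its boundaries.
        self.assertEqual(classify_temperature(-40), "normal")   # lower bound
        self.assertEqual(classify_temperature(99), "normal")    # just below threshold
        self.assertEqual(classify_temperature(100), "alarm")    # threshold
        self.assertEqual(classify_temperature(125), "alarm")    # upper bound

    def test_out_of_range_is_rejected(self):
        # Antibugging: invalid data should raise a clear error.
        with self.assertRaises(ValueError):
            classify_temperature(126)

if __name__ == "__main__":
    unittest.main()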
Integration Testing
Top-down integration: verifies major control or decision points early in the test process
Bottom-up integration: Low-level components are combined into clusters (sometimes called builds)
Regression testing: re-execution of a subset of tests to ensure that changes (error fixes, new paths, new I/O behavior) have not introduced unintended side effects
Smoke testing: an integration testing approach used when product software is developed on a frequent (e.g., daily) build basis.
Activities of Smoke Testing:
# Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components
# A series of tests is designed to expose errors
# The build is integrated with other builds, and the entire product is smoke tested daily.
Advantages of smoke testing: integration risk is minimized, the quality of the end product is improved, error diagnosis and correction are simplified, and progress is easier to assess.
Strategic options:
* Sandwich testing = top-down tests (for the upper levels of the program structure) + bottom-up tests (for the subordinate levels)
* Critical modules should be identified and tested as early as possible
* A critical module addresses several software requirements, has a high level of control, is complex or error prone, or has definite performance requirements
Integration test work products
* An overall plan for integration of the software and a description of specific tests is documented in a Test Specification.
Example: SafeHomeSecurity System will have the following Test Phases
@ User interaction = command input and output, display, error processing
@ Sensor processing= acquisition of sensor output, resolve sensor conditions
@ Communications functions = to communicate with central monitoring station
@ Alarm processing = tests of software actions when an alarm is encountered
At every phase the following criteria are applied: interface integrity, functional validity, information content, and performance
5.14 Test Strategies for Object-Oriented Software
* Unit Testing in the OO Context:
# You can no longer test a single operation in isolation but rather as part of a class
# conventional software testing = focus on the algorithmic detail of a module and the data that flow across the module interface
# class testing for OO software = {operations encapsulated by the class} +{state behavior of the class}
* Integration Testing in the OO Context: There are two different strategies for integration testing of OO systems.
# thread-based testing: integrates the set of classes required to respond to one input or event for the system. Each thread is integrated and tested individually
# use-based testing: begins the construction of the system by testing those classes (called independent classes) that use very few (if any) server classes. After the independent classes are tested, the next layer of classes, called dependent classes, that use the independent classes are tested.
# Cluster testing is one step in the integration testing of OO software. Here, a cluster of collaborating classes (determined by examining the CRC and object-relationship model) is exercised by designing test cases that attempt to uncover errors in the collaborations.
5.15 Test Strategies for Web Apps
The strategy for WebApp testing adopts the basic principles for all software testing and applies a strategy and tactics that are used for object-oriented systems. The following steps summarize the approach:
1. The content model is reviewed to uncover errors.
2. The interface model is reviewed to ensure that all use cases can be accommodated.
3. The design model is reviewed to uncover navigation errors.
4. The user interface is tested to uncover errors in presentation and/or navigation mechanics.
5. Each functional component is unit tested.
6. Navigation throughout the architecture is tested.
7. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration.
8. Security tests are conducted in an attempt to exploit vulnerabilities in the WebApp or within its environment.
9. Performance tests are conducted.
10. The WebApp is tested by a controlled and monitored population of end users.
The results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance. Because many WebApps evolve continuously, the testing process is an ongoing activity, conducted by support staff who use regression tests derived from the tests developed when the WebApp was first engineered.
5.16 Validation Testing
* Validation-Test Criteria:
# Software validation is achieved through a series of tests that demonstrate conformity with requirements
# After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristic conforms to specification and is accepted or
(2) a deviation from specification is uncovered and a deficiency list is created
* Configuration Review:
Ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support activities
* Alpha and Beta Testing
# The alpha test is conducted at the developer’s site by a representative group of end users.
# The beta test is conducted at one or more end-user sites. The developer generally is not present. Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
# A variation on beta testing, called customer acceptance testing, is sometimes per formed when custom software is delivered to a customer under contract
5.17 System Testing
Recovery Testing: It is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness
Security testing: It attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration
Stress testing: executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, test cases may
(1) generate ten interrupts per second, when one or two is the average rate,
(2) increase input data rates by an order of magnitude to determine how input functions will respond,
(3) require maximum memory or other resources,
(4) cause thrashing in a virtual operating system, or
(5) create excessive hunting for disk-resident data.
Performance testing: performance tests are often coupled with stress testing and usually require both hardware and software instrumentation
Deployment Testing: sometimes called configuration testing, exercises the software in each environment in which it is to operate
5.18 The Art of Debugging
The Debugging Process: Debugging is not testing but often occurs as a consequence of testing. The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected or
(2) the cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion
Few characteristics of bugs provide some clues:
1. The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled components exacerbate this situation.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by nonerrors. (example: round-off inaccuracies).
4. The symptom may be caused by human error that is not easily traced
5. The symptom may be a result of timing problems, rather than processing problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of tasks running on different processors.
Psychological considerations: the evidence on debugging is open to many interpretations; there are large variances in debugging ability among programmers
Debugging Strategies
Debugging tactics:
# Brute force category of debugging is probably the most common and least efficient method for isolating the cause of a software error.
# Backtracking is a fairly common debugging approach that can be used success fully in small programs.
# Cause elimination is manifested by induction or deduction and introduces the concept of binary partitioning (a minimal sketch follows)
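A minimal sketch of binary partitioning (the records and the process_record function are hypothetical, not from the text): the suspect data are repeatedly split in half until the single input that triggers the failure is isolated.

def find_failing_record(records, process_record):
    """Isolate the one record (assumed to exist) whose processing fails,
    by repeatedly discarding the half of the data that processes cleanly."""
    lo, hi = 0, len(records)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            for r in records[lo:mid]:      # re-run only the first half
                process_record(r)
        except Exception:
            hi = mid                       # failure lies in the first half
        else:
            lo = mid                       # otherwise it lies in the second half
    return records[lo]

# Usage (hypothetical): bad_row = find_failing_record(suspect_rows, parse_row)
# narrows thousands of inputs to one in about log2(n) partitions.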
Automated debugging: Integrated development environments (IDEs) provide a way to capture some of the language specific predetermined errors (e.g., missing end-of-statement characters, undefined variables, and so on) without requiring compilation
The people factor: A final maxim for debugging might be: “When all else fails, get help!”
Correcting the Error
The “correction” must remove the cause of the bug. Before making the correction, questions to be posed are:
# Is the cause of the bug reproduced in another part of the program?
# What “next bug” might be introduced by the fix I’m about to make?
5.19 Software Testing Fundamentals
* Testability:
Software testability is simply how easily a computer program can be tested.
# Operability: “it works better, it can be tested more efficiently”; a system with few bugs that block the execution of tests lets testing progress smoothly
# Observability: “what you see is what you test”; inputs, internal states, and outputs are visible, and source code is accessible
# Controllability: the better the software’s inputs, outputs, and states can be controlled, the more testing can be automated and optimized
# Decomposability: independent modules that can be tested independently
# Simplicity: functional simplicity (e.g., the feature set is the minimum necessary to meet requirements), structural simplicity (e.g., architecture is modularized to limit the propagation of faults), and code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
# Stability: The software recovers well from failures
# Understandability: dependencies between internal, external, and shared components are well understood. Technical documentation is instantly accessible, well organized, specific and detailed, and accurate.
* Test Characteristics.
# A good test has a high probability of finding an error.
#A good test is not redundant
# A good test should be “best of breed”
# A good test should be neither too simple nor too complex
5.20 Internal and External Views of Testing
* Engineered product (and most other things) can be tested in one of two ways:
# Knowing the specified function that the product has been designed to perform: black-box testing
# Knowing the internal workings of the product: white-box testing
5.21 White-Box Testing
* White-box testing = glass-box testing
* component-level design to derive test cases
* derive test cases that (a short branch-coverage sketch follows this list):
# guarantee that all independent paths within a module have been exercised at least once
# exercise all logical decisions on their true and false sides
# execute all loops at their boundaries and within their operational bounds
# exercise internal data structures to ensure their validity.
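A minimal white-box sketch (the average_positive function and its test values are hypothetical): the test cases below are chosen from the code’s structure so that both sides of the decision and the loop boundaries (zero, one, and many iterations) are exercised:

def average_positive(values):
    """Hypothetical unit: average of the positive numbers in a list."""
    total, count = 0, 0
    for v in values:                 # loop: exercised zero, one, and many times
        if v > 0:                    # decision: exercised on its true and false sides
            total += v
            count += 1
    return total / count if count else 0.0

# White-box test cases derived from the code's structure, not its specification alone:
assert average_positive([]) == 0.0            # loop executes zero times
assert average_positive([-5]) == 0.0          # one iteration, decision false
assert average_positive([4]) == 4.0           # one iteration, decision true
assert average_positive([2, -1, 6]) == 4.0    # many iterations, both branches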
5.22 Basis Path Testing
* Flow Graph Notation
* Independent Program Paths: for the textbook’s example flow graph (figure not reproduced here), the basis set of independent paths is:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
# Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. Complexity can be computed in one of three ways (a short computation sketch follows this list):
@ The number of regions of the flow graph corresponds to the cyclomatic complexity.
@ Cyclomatic complexity V(G) for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
@ Cyclomatic complexity V(G) for a flow graph G is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
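A minimal computation sketch: the edge list below is inferred from Paths 1-4 listed above (an illustrative reconstruction, since the flow graph figure is not reproduced here):

# Flow graph inferred from Paths 1-4 above: node -> list of successor nodes.
flow_graph = {
    1: [2, 11], 2: [3], 3: [4, 6], 4: [5], 5: [10],
    6: [7, 8], 7: [9], 8: [9], 9: [10], 10: [1], 11: [],
}

nodes = len(flow_graph)                                              # N = 11
edges = sum(len(succ) for succ in flow_graph.values())               # E = 13
predicates = sum(1 for s in flow_graph.values() if len(s) > 1)       # P = 3

print("V(G) = E - N + 2 =", edges - nodes + 2)   # 13 - 11 + 2 = 4
print("V(G) = P + 1     =", predicates + 1)      # 3 + 1 = 4

Both computations give V(G) = 4, which matches the four independent paths listed above, so at most four test cases are needed to execute the basis set.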
* Deriving Test Cases
# Using the design or code as a foundation, draw a corresponding flow graph.
# Determine the cyclomatic complexity of the resultant flow graph
# Determine a basis set of linearly independent paths
# Prepare test cases that will force execution of each path in the basis set.
* Graph Matrices
# A data structure, called a graph matrix, can be quite useful for developing a software tool that assists in basis path testing.
# A graph matrix = a square matrix whose size (i.e., number of rows = number of columns) is equal to the number of nodes on the flow graph. Each row and column corresponds to an identified node, and matrix entries correspond to connections (edges) between nodes.
# Consider the small example sketched below.
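A small graph-matrix sketch (again using the flow graph inferred from the paths above, so it is illustrative rather than taken from the text): each entry of 1 records an edge; each row with two or more entries is a predicate node; summing (connections - 1) over the rows and adding 1 yields the cyclomatic complexity:

# Graph (connection) matrix for the 11-node flow graph used above.
edges = [(1, 2), (1, 11), (2, 3), (3, 4), (3, 6), (4, 5), (5, 10),
         (6, 7), (6, 8), (7, 9), (8, 9), (9, 10), (10, 1)]

n = 11
matrix = [[0] * n for _ in range(n)]          # rows/columns indexed by node - 1
for src, dst in edges:
    matrix[src - 1][dst - 1] = 1              # link weight 1 = a connection exists

# Connections per row, minus 1 for each non-empty row, summed and incremented.
row_connections = [sum(row) for row in matrix]
complexity = sum(c - 1 for c in row_connections if c > 0) + 1
print("Cyclomatic complexity from the graph matrix:", complexity)    # prints 4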