Posts

UNIX BASICS - A beginner's guide to Unix commands

For more software testing articles visit: http://www.softwaretestinghelp.com

Main features of Unix:
- Multi-user: more than one user can use the machine at the same time.
- Multitasking: more than one program can be run at a time.
- Portability: the operating system can easily be ported to run on different hardware platforms.

Commands

ls, when invoked without any arguments, lists the files in the current working directory. A directory that is not the current working directory can be specified, and ls will list the files there. The user may also specify any list of files and directories; in this case, all files and all contents of the specified directories will be listed. Files whose names start with "." are not listed unless the -a flag is specified or the files are specified explicitly. Without options, ls displays files in a bare format. This bare format, however, makes it difficult to establish the type, permissions, and size of the files. The most common options to reveal this information are shown in the example below.
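A short illustrative session (the file names and output are invented for the example; exact listings vary by system):

    $ ls
    notes.txt  projects
    $ ls -a        # also list entries whose names start with "."
    .  ..  .profile  notes.txt  projects
    $ ls -l        # long format: permissions, owner, size, modification date
    total 8
    -rw-r--r--  1 user  staff  1024 Jan  5 10:00 notes.txt
    drwxr-xr-x  2 user  staff    64 Jan  5 10:01 projects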

Web Testing Checklist

1) Functionality Testing
2) Usability Testing
3) Interface Testing
4) Compatibility Testing
5) Performance Testing
6) Security Testing

1) Functionality Testing: Test all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookies.

Check all the links:
* Test the outgoing links from all the pages of the specific domain under test.
* Test all internal links.
* Test links jumping to the same page.
* Test links used to send email to the admin or other users from web pages.
* Test to check whether there are any orphan pages.
* Lastly, check for broken links in all the above-mentioned links (see the sketch after this checklist).

Test forms on all pages: Forms are an integral part of any web site. Forms are used to get information from users and to keep interacting with them. So what should be checked on these forms?
* First check all the validations on each field.
* Check for the default values of the fields.
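Broken-link checking in particular is easy to script. A minimal sketch using curl (it assumes a hypothetical urls.txt with one link per line; this is a spot check, not a full crawler):

    # Print the HTTP status code for each URL; 404 or 000 indicates a broken link.
    while read -r url; do
        code=$(curl -o /dev/null -s -w "%{http_code}" "$url")
        echo "$code $url"
    done < urls.txt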

Top Ten Challenges of Software Test Automation

1. Buying the wrong tool
2. Inadequate test team organization
3. Lack of management support
4. Incomplete coverage of test types by the selected tool
5. Inadequate tool training
6. Difficulty using the tool
7. Lack of a basic test process or understanding of what to test
8. Lack of configuration management processes
9. Lack of tool compatibility and interoperability
10. Lack of tool availability

Choosing the right tool

Take time to define the tool requirements in terms of technology, process, applications, people skills, and organization. During tool evaluation, prioritize which test types are the most critical to your success and judge the candidate tools on those criteria. Understand the tools and their trade-offs; you may need a multi-tool solution to achieve higher levels of test-type coverage. For example, you will need to combine a capture/playback tool with a load-test tool to cover your performance test cases. Involve potential users in the definition of tool requirements and evaluation criteria. Build an evaluation scorecard to compare each tool's performance against a common set of criteria, and rank the criteria in terms of relative importance to the organization.
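As an illustration, a hypothetical scorecard (tool names, criteria, and weights are invented for the example) might look like:

    Criterion (weight)             Tool A   Tool B
    Test-type coverage (0.4)          4        3
    Ease of use (0.2)                 3        5
    Platform compatibility (0.2)      5        3
    Cost and licensing (0.2)          3        4
    Weighted total                   3.8      3.6

Each score is a per-criterion rating (1 to 5); the weighted total is the sum of each score multiplied by its weight, which makes the ranking of criteria explicit.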

Approaches to Automation

There are three broad options in test automation: Full Manual, Partial Automation, and Full Automation. Their trade-offs compare roughly as follows:

Full Manual: reliance on manual testing; responsive and flexible; inconsistent; required as the basis for automation; low implementation cost; low skill requirements; high repetitive cost.

Partial Automation: redundancy possible, but requires duplication of effort; flexible; consistent; automates repetitive and high-return tasks.

Full Automation: reliance on automated testing; relatively inflexible; very consistent; high implementation cost; high skill requirements; economies of scale in repetition, regression, etc.

Fully manual testing has the benefit of being relatively cheap and effective, but as the quality of the product improves, the additional cost of finding further bugs becomes more expensive. Large-scale manual testing also implies large testing teams, with the related costs of space, overhead, and infrastructure. Manual testing is also far more responsive and flexible than automated testing, but is pro

Software Test Automation

Automating testing is no different from a programmer using a coding language to write programs to automate any manual process. One of the problems with testing large systems is that it can go beyond the scope of small test teams: because only a small number of testers are available, the coverage and depth of testing provided are inadequate for the task at hand. Expanding the test team beyond a certain size also becomes problematic, with an increase in work overhead. A feasible way to avoid this without introducing a loss of quality is through appropriate use of tools that can expand an individual's capacity enormously while maintaining the focus (depth) of testing on the critical elements. Consider the following factors that help determine the use of automated testing tools:
• Examine your current testing process and determine where it needs to be adjusted for using automated test tools.
• Be prepared to make changes in the current ways you perform testing.
• Involve people who will be

Types of Test Reports

The documents outlined in the IEEE Standard for Software Test Documentation cover test planning, test specification, and test reporting. Test reporting covers four document types:

1. A Test Item Transmittal Report identifies the test items being transmitted for testing from the development group to the testing group, in the event that a formal beginning of test execution is desired. Details to be included in the report: Purpose, Outline, Transmittal-Report Identifier, Transmitted Items, Location, Status, and Approvals.

2. A Test Log is used by the test team to record what occurred during test execution. Details to be included in the report: Purpose, Outline, Test-Log Identifier, Description, Activity and Event Entries, Execution Description, Procedure Results, Environmental Information, Anomalous Events, Incident-Report Identifiers.

3. A Test Incident Report describes any event that occurs during test execution that requires further investigation. Details to be included in the report
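As an illustration, a hypothetical (much simplified) test log entry recording one anomalous event might read:

    Test-Log Identifier:   TL-007
    Description:           Login validation tests, build 2.3, Windows 10/Firefox
    Activity/Event Entry:  10:15 - Executed TC-12 (invalid password rejection)
    Procedure Results:     Expected error message not displayed
    Anomalous Events:      Application accepted the invalid password
    Incident-Report ID:    IR-003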

What does the tester do when the defect is fixed?

Once the reported defect is fixed, the tester needs to re-test to confirm the fix. This is usually done by executing the possible scenarios where the bug can occur. Once retesting is completed, the fix can be confirmed and the bug can be closed. This marks the end of the bug life cycle.

How descriptive should your bug/defect report be?

You should provide enough detail when reporting a bug, keeping in mind the people who will use it: the test lead, developers, the project manager, other testers, newly assigned testers, etc. This means the report you write should be concise, direct, and clear. The following are the details your report should contain:
- Bug title
- Bug identifier (number, ID, etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment (OS, browser and its version)
- Bug type or category/severity/priority
  o Bug category: Security, Database, Functionality (Critical/General), UI
  o Bug severity: the severity with which the bug affects the application (Very High, High, Medium, Low, Very Low)
  o Bug priority: the recommended priority for a fix of this bug (P0, P1, P2, P3, P4, P5, with P0 highest and P5 lowest)
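For instance, the header of a hypothetical report following this structure (all names and values are invented) might look like:

    Bug Title:    Login page accepts an empty password
    Bug ID:       BUG-1024
    Application:  OnlineStore v2.3
    Module:       Login screen
    Environment:  Windows 10, Firefox 115
    Category:     Security   Severity: Very High   Priority: P0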

How is a defect reported?

Once the test cases are developed using the appropriate techniques, they are executed, and this is when bugs surface. It is very important that these bugs be reported as soon as possible, because the earlier you report a bug, the more time remains in the schedule to get it fixed. As a simple example, if you report wrong functionality documented in the Help file a few months before the product release, the chances that it will be fixed are very high; if you report the same bug a few hours before the release, the odds are that it won't be fixed. The bug is the same whether you report it a few months or a few hours before the release; what matters is the timing. It is not enough just to find bugs; they must also be reported and communicated clearly and efficiently, especially given the number of people who will read the defect report. Defect tracking tools (also known as bug tracking tools, issue tracking tools, or problem trackers) greatly aid the tester

What is a defect?

As discussed earlier, a defect is a variance from a desired product attribute (it can be wrong, missing, or extra data). It can be of two types: a defect from the product, or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer or the operational system.

What are the defect categories?

With the knowledge of testing gained so far, you should now be able to categorize the defects you have found. Defects can be categorized into different types based on the core issues they address. Some defects address security or database issues, while others may refer to functionality or UI issues.

Security Defects: Application security defects generally involve improper handling of data sent from the user to the application. These defects are the most severe and are given the highest priority for a fix. Examples:
- Authentication: accepting an invalid username/password
- Authorization: accessibility to pages though permission is not given

Data Quality/Database Defects: These deal with improper handling of data in the database. Examples:
- Values not deleted/inserted into the database properly
- Improper/wrong/null values inserted in place of the actual values

Critical Functionality Defects

Test Case Design Techniques

Test case design techniques are broadly grouped into two main categories, black-box and white-box techniques, plus other techniques that do not fall under either category:

    Black Box (Functional)         White Box (Structural)            Other
    Specification-derived tests    Branch testing                    Error guessing
    Equivalence partitioning       Condition testing
    Boundary value analysis        Internal boundary value testing
    State-transition testing

For example, under boundary value analysis, an input field specified to accept values from 1 to 100 would be tested at and around the boundaries: 0, 1, 100, and 101.

Test Case – Sample Structure

The manner in which a test case is depicted varies between organizations. However, many test case templates are in the form of a table, for example one with the following columns:

    Test Case ID | Test Case Description | Test Dependency/Setup | Input Data Requirements/Steps | Expected Results | Pass/Fail
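A hypothetical filled-in entry following this structure (all values invented for illustration):

    Test Case ID:                   TC-01
    Test Case Description:          Verify login with a valid username and password
    Test Dependency/Setup:          Test account "demo_user" exists; application deployed
    Input Data Requirements/Steps:  Open the login page; enter valid credentials; click Login
    Expected Results:               User is redirected to the home page
    Pass/Fail:                      Pass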

What is a Test Strategy? What are its Components?

Test Policy: a document characterizing the organization's philosophy towards software testing.

Test Strategy: a high-level document defining the test phases to be performed, and the testing within those phases, for a programme. It defines the process to be followed in each project and sets the standards for the processes, documents, activities, etc. that should be followed for each project. For example, if a product is given for testing, you should decide whether it is better to use black-box testing or white-box testing, and if you decide to use both, when you will apply each and to which parts of the software. All these details need to be specified in the Test Strategy.

Project Test Plan: a document defining the test phases to be performed, and the testing within those phases, for a particular project.

A Test Strategy should cover more than one project and should address the following issues: an approach to testing high

Types of errors with examples

·          User Interface Errors: missing/wrong functions; doesn't do what the user expects; missing information; misleading or confusing information; wrong content in Help text; inappropriate error messages. Performance issues: poor responsiveness; can't redirect output; inappropriate use of the keyboard.
·          Error Handling: inadequate protection against corrupted data, tests of user input, and version control; ignoring overflow and data comparison; error recovery issues such as aborting errors and recovery from hardware problems.
·          Boundary-Related Errors: boundaries in loop, space, time, and memory; mishandling of cases outside the boundary.
·          Calculation Errors: bad logic; bad arithmetic; outdated constants; incorrect conversion from one data representation to another; wrong formulas; incorrect approximation.
·          Initial and Later States: failure to set a data item to zero or to initialize a loop-control variable

Testing Terms

·          Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault, or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
·          Error: A mismatch between the program and its specification is an error in the program.
·          Defect: A defect is a variance from a desired product attribute (it can be wrong, missing, or extra data). It can be of two types: a defect from the product, or a variance from customer/user expectations. It is a flaw in the software system and has no impact until it affects the user/customer or the operational system. As much as 90% of all defects can be traced to process problems.
·          Failure: A defect that causes an error in operation or negatively impacts a user/customer.
·          Quality Assurance: Oriented towards preventing defects. Quality assurance ensures that all parties concerned with the project adhere to the process and procedures, standards

Testing Levels and Types

There are basically three levels of testing: Unit Testing, Integration Testing, and System Testing. Various types of testing come under these levels.

Unit Testing: to verify a single program or a section of a single program.
Integration Testing: to verify interaction between system components. Prerequisite: unit testing completed on all components that compose the system.
System Testing: to verify and validate behaviors of the entire system against the original system objectives.

Software testing is a process that identifies the correctness, completeness, and quality of software. The following is a list of various types of software testing and their definitions, in no particular order:
·          Formal Testing: performed by test engineers.
·          Informal Testing: performed by the developers.
·          Manual Testing: the part of software testing that requires human input, analysis, or evaluation.
·          Automated Testing: software testing that utilizes a variety of tools to automate the testing process.