This lesson on the fundamentals of software testing will help you learn the fundamental concepts and terminology of software testing as required by the ISTQB.
In the coming sections, we will learn about software systems and defects. First, let us begin with an overview and a few examples.
Software Systems Overview and Examples
Software systems are an integral part of life. They help run critical applications like hospital tools, daily business operations like ATMs, and consumer products like televisions or smartphones.
Any functionality issue in software can lead to severe impacts: loss of life, money, or time and, in the case of companies, loss of reputation. Defects in software systems can therefore significantly affect our day-to-day lives. Now let us look at a few real-life examples from over the years.
- In the 1980s, software defects in the code controlling the Therac-25 radiation therapy machine were directly responsible for some patient deaths.
- In 1996, the US$1 billion prototype Ariane 5 rocket of the European Space Agency was destroyed less than a minute after launch due to a bug in the onboard guidance computer program.
- In 2011, Honda was forced to recall 2.49 million cars, small SUVs, and minivans worldwide, including its popular Accord sedan, to repair a software problem that could damage the automatic transmission.
Such issues can impact the reputation of a company and lead to substantial costs for software replacement. Now, let us look at examples of software defects and how they impact life.
In an incident, fire department paramedics placed a woman on oxygen at the start of the trip and confirmed that the onboard oxygen system was operating normally.
However, the oxygen system stopped working for approximately eight minutes before the paramedics noticed it. They immediately restarted the system; however, by the time they did, the woman was dead.
After the incident, the fire department changed all the electrical equipment; yet, weeks later, the same malfunction occurred with the new equipment.
An independent investigation pointed to a software problem with the ambulance oxygen system. The fire department is now using portable oxygen until the ambulance company provides a fix. After these examples, let us find out the categories of software defects in the next section.
Categories of Software Defects
Typical software defects fall into two categories: defects that impact individuals and defects that impact society at large. Let us discuss them separately.
Examples of defects that impact individuals include defects in monthly bills. Any minor defect in the software generating these bills can lead to over- or underpayment of bills, causing losses to the billing company. Another common defect in this category is an error in salary computation.
Other examples are defects in ATM withdrawal amounts, and amount of waiting time at traffic lights, phone booths, and petrol filling stations. Wherever there is software being used, there is a chance of software defects.
While defects that impact individuals have a smaller impact, they still cause inconvenience to users and can lead to loss of brand reputation or even legal issues for the organization. Let us now look at some defects that impact society at large.
Impact on Society
For example, in railways, a bug in the automated system could lead to train collisions and a loss of life and property. Similarly, defects in airline software, nuclear reactors, or stock exchange software can have a huge impact on the public.
In the following section, we will discuss the causes of software defects.
Causes of Software Defects
Software developers make mistakes or errors during development, and these mistakes cause defects in the software. These defects can lead to software failure.
Software is prone to errors because it is designed manually by humans. Errors produce defects, or bugs, in the software. These defects can be introduced during the coding phase and, indeed, throughout the development lifecycle. Errors can occur as early as the phases in which requirements are understood, written, or designed.
Errors can also result from a mistake while porting the application into production. If the faulty system is executed, it might cause a failure.
However, not every mistake leads to a defect, nor does every defect lead to a failure. Sometimes, defects lie dormant within the software until they are triggered.
We will discuss this in the following sections.
Causes of Software Defects (contd.)
Let us look at the standard causes that introduce defects into the software.
First is poorly documented requirements. Since requirements are the starting point in software development, any defect introduced at this phase gets inbuilt into the subsequent phases.
Often, requirements are not thought through clearly and contain gaps. Even when requirements are clearly understood, the way they are defined can lead to defects.
Also, clearly defined documents, when handed to different teams, can lead to different interpretations if the teams are not trained to read requirement documents.
Often, insufficient time is provided during development to complete coding and testing. This is due to the business demands of launching the application in the market. This leads to defects being introduced and missed out during the testing phase.
Other common causes of defects include complex architecture or code, lack of domain knowledge, and technical limitations like programming language constraints.
Let us find out the consequences of software defects in the following section.
Consequences of Software Defects
It can be argued that if a mistake does not lead to a defect, or a defect does not lead to a failure, then the mistake is unimportant.
For example, due to an error in the software that controls the traffic signals at a busy crossroad, all directions show a red signal between 12:00 midnight and 12:15 AM every day. This may not lead to a failure, as the signals are set to a blinking orange light during this period.
However, if the same error sets a green signal in all directions between 12:00 noon and 12:15 PM every day, this leads to a signaling failure and may cause significant accidents.
After understanding the different aspects of software defects, let us move on to the next topic, ‘Overview of Software Testing,’ in the following section.
Overview of Software Testing
In the next few sections, you will get an overview of software testing and learn the standard terms, roles, objectives, and principles of software testing.
Let us begin with defining software testing in the following section.
Definition of Software Testing
While testing has no single standard definition, some popular ones are:
Glenford J. Myers, an American author, computer scientist, and entrepreneur, defines testing as "the process of executing a program or part of a program with the intention of finding errors."
The Institute of Electrical and Electronics Engineers (IEEE 83a) standard defines testing as "the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements."
Another definition states that "testing is the process of analyzing a system to detect the difference between existing and required conditions and to evaluate the features of the system."
To sum up, software testing is the act of “verifying if the software behavior is as expected.”
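This definition can be made concrete with a tiny sketch. The discount function and its expected values below are invented purely for illustration; the point is that a test executes the program and checks its behavior against expectations.

```python
# A tiny program under test: the function and its expected values
# are hypothetical examples, not drawn from any standard.

def apply_discount(price, percent):
    """Return the price reduced by the given percentage."""
    return price * (100 - percent) / 100

def test_apply_discount():
    # Verify that the software behavior is as expected.
    assert apply_discount(200, 10) == 180.0   # 10% off 200 is 180
    assert apply_discount(200, 0) == 200.0    # a 0% discount changes nothing

test_apply_discount()
print("all checks passed")
```

If either assertion fails, the test has done its job: it has found an error.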
After defining software testing, let us look at why it is needed in the next section.
Need for Software Testing
A study conducted by the National Institute of Standards and Technology (NIST) in 2002 reported that software bugs cost the U.S. economy $59.5 billion annually.
More than one-third of this cost could be avoided if better software testing were performed. Therefore, testing is necessary, as some errors can turn out to be expensive or dangerous.
Every product needs to be checked to ensure there are no errors. If developers check their own product, there are chances that they might miss errors due to bad assumptions or blind spots. It is advisable to get the product checked by another individual who was not involved in product development.
It is important to check the severity of the error and its consequences, as well.
For software systems, some errors are important while others are not. You need to determine the impact of a software error. For this, consider the context within which the different software systems operate.
Now that we have established the need for software testing let us list some common software testing terms in the following section.
Common Testing Terms
The terms commonly used in testing are the following:
Debugging is a part of the development activity that identifies, analyzes, and removes defects. Debugging is performed by Developers on their piece of code.
Testing is the activity of identifying defects and is performed by Testers.
Testing is done by testers in an environment similar to production. There are different software testing levels, types, and techniques. The two terms, Debugging and Testing, are often confused and used interchangeably.
However, they are not the same and are performed by separate teams to identify different kinds of defects.
A review can be performed on deliverables like documents, code, test plans, and test cases. While testing can only be done when the executable code is ready, reviews can be done on different kinds of documents and at all stages of development.
Reviews are commonly referred to as a static testing technique as they are done without executing the code. Reviews are very important for each software or product as finding a defect early will reduce its development cost and time.
In the next section, let us discuss the roles of software testing.
Role of Software Testing
Rigorous testing is necessary during software development and maintenance to
- Identify defects
- Reduce failures in the operational environment
- Increase the quality of the operational system
- Meet contractual or legal requirements
- Meet industry-specific standards, which may specify the techniques that must be used or the percentage of the software code that must be executed
In the following section, we will look at the objectives of software testing.
Objectives of Software Testing
Following are the objectives of software testing:
- Finding defects early, which reduces the probability of their occurrence in production
- Gaining confidence in the quality of the software application
- Providing information that helps GO/NO-GO decision-making when moving to the next phase
- Analyzing defects found in one phase, which can help identify root causes and prevent defects in subsequent phases
Let us now find out the objectives of different types of testing in the next section.
Objectives of Different Testing Types
Each type of testing has its specific objectives. Let us look at the different types of testing and their respective objectives.
The objective of development testing, also known as unit or component testing, is to find the maximum number of defects early in the development lifecycle. Fixing defects at an early stage saves defect leakage cost and time.
User Acceptance Testing
User acceptance testing is performed with the objective of confirming whether the system works, as expected by the end users. This is the final stage of testing before deploying the code to production.
The objective of Maintenance testing is to ensure no new defects have been introduced, especially in the case of enhancements and/or defect fixes.
The objective of Operational testing is to ensure reliability and performance. Software should be tested to check whether it works satisfactorily even with the maximum expected workload.
Let us discuss the seven principles of testing in the next section.
Seven Principles of Testing
There are seven principles of testing, which have evolved over 40 years and can be used as a general guideline for all testing.
The first principle states that testing can show that defects are present; however, it cannot prove that there are no defects. Testing reduces the probability of residual defects, that is, defects remaining in the software. Even if no defects are found, it does not mean that the system is 100% defect-free.
Exhaustive testing, also known as complete testing, is a test approach in which the test suite comprises all combinations of input values and preconditions.
The second principle states that testing all combinations of inputs and preconditions is not feasible, except in trivial cases. Instead, risks and priorities are used to focus testing efforts.
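A quick back-of-the-envelope calculation shows why exhaustive testing breaks down. The form fields and value counts below are invented for illustration only; even a handful of modest, independent inputs multiply into a space no team could ever cover.

```python
# Counting input combinations for a hypothetical five-field form.
# Each entry is the assumed number of distinct valid values.

field_values = {
    "age": 130,              # assumed valid ages 0-129
    "country": 195,
    "account_type": 4,
    "username_length": 64,
    "currency": 180,
}

combinations = 1
for count in field_values.values():
    combinations *= count

print(combinations)  # 1168128000 -- over a billion cases for five fields
```

And this counts only single-field value choices, before considering preconditions, sequences of actions, or timing.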
The third principle states that errors identified late in the development process are more costly to resolve. Hence testing activities should start as early as possible in the Software Development Life Cycle (SDLC) and focus on the defined objectives.
Defect removal costs increase considerably as you move through the software life cycle. If errors made in the early phases go undetected, the impact is more complex in the later phases of the lifecycle.
Let us suppose that one requirement has been misunderstood and designed incorrectly.
The graph below shows the cost of fixing this defect when caught at different stages.
If the defect is caught in the unit testing, low-level design, or coding phase, it needs to be corrected, and unit testing is repeated.
Alternatively, if the defect is caught in user acceptance testing, low-level design and coding need to be corrected, unit testing repeated, and additionally, user acceptance re-testing and regression testing also performed.
This indicates that the later the defect is caught in the life cycle of a project, the higher is the cost associated with fixing it. Therefore it is important to find all the defects as early as possible.
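The escalation described above can be sketched numerically. The multipliers below are illustrative, not measured data; they simply encode the idea that each later phase adds rework on top of everything before it.

```python
# Illustrative (not measured) cost multipliers for fixing one defect,
# depending on the phase in which it is caught.

relative_fix_cost = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "unit testing": 20,
    "acceptance testing": 50,
    "production": 150,
}

base_cost = 100  # assumed cost of a fix caught at the requirements stage
for phase, multiplier in relative_fix_cost.items():
    print(f"{phase:>20}: {base_cost * multiplier}")
```

Under these assumed numbers, the same misunderstood requirement that costs 100 units to fix on paper costs 15,000 units once it reaches production.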
The fourth principle of testing is based on the Pareto principle or 80–20 Rule, which states that 80% of defects are caused by 20% of causes.
Once the causes are identified, efficient test managers are able to focus testing on the sensitive areas, while still searching for errors in the remaining software modules.
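Defect clustering can be made concrete with a small sketch: sort modules by defect count and find how few of them account for 80% of all defects. The module names and counts below are invented sample data.

```python
# Finding the "hot" modules that hold roughly 80% of the defects.
# Defect counts are hypothetical sample data.

defects_per_module = {
    "payment": 120, "login": 15, "search": 90, "profile": 8,
    "checkout": 60, "reports": 5, "settings": 2,
}

total = sum(defects_per_module.values())
running, hot_modules = 0, []
for module, count in sorted(defects_per_module.items(),
                            key=lambda kv: kv[1], reverse=True):
    running += count
    hot_modules.append(module)
    if running / total >= 0.8:
        break

print(hot_modules)  # ['payment', 'search', 'checkout'] carry 80% of defects
```

In this sample, three of seven modules carry the bulk of the defects, so a test manager would concentrate effort there while still sweeping the rest.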
The fifth testing principle states that a variety of tests and techniques should be used to expose a range of defects across different areas of the product.
Using the same set of tests repeatedly on the same software product will decrease the efficiency of the tests.
The sixth principle states that different software products have varying requirements, functions, and purposes, so the same tests should not be applied across the board.
The higher the probability and impact of damage caused by failed software, the greater the investment in performing software tests.
The seventh and final testing principle states that to ensure adequate software testing procedures are performed in every situation, testers should assume that all software contains some concealed faults, as undetected errors do not mean the software is error-free.
These seven principles of testing should be guides for all test engineers to help them plan and execute their tests. These principles are also useful for the management to understand testing and develop a realistic expectation from the test process.
After an overview of software testing, let us move on to the next topic, ‘Software Testing Process,’ in the following section.
Software Testing Process
In the next few sections, we will understand the software testing process.
Let us begin with the relationship between testing and quality in the next section.
Testing and Quality
When defects detected by the testing process are fixed, the quality of the software system increases.
The interrelationship between testing and quality is illustrated below:
Testing is commonly perceived to be only about test execution. The typical activities performed to achieve test objectives are test planning, test specifications definition, test execution, test recording, and test reporting.
Let us discuss these activities separately.
Test planning is the initial stage of testing. All testing activities are planned here, including resource requirements such as people, training, software, and tools, as well as timelines, risks, and mitigations.
In the test specification stage, the test scenarios, test conditions, and test cases are derived from business requirements documents. All these artifacts are combined using traceability, which helps in determining the requirements coverage achieved during the testing effort. Traceability also helps in the maintainability of the test deliverable in case of changes to a requirement. Prioritization of test case execution is also done at this stage, considering business risks associated with each requirement as the main factor.
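The traceability idea can be sketched as a simple requirements-to-test-cases check: every business requirement should map to at least one test case, so coverage gaps surface before execution begins. The IDs below are hypothetical.

```python
# A minimal requirements traceability check. Requirement and test
# case IDs are invented for illustration.

requirements = ["REQ-001", "REQ-002", "REQ-003"]
test_cases = {
    "TC-01": ["REQ-001"],                 # each test case lists the
    "TC-02": ["REQ-001", "REQ-002"],      # requirements it covers
}

covered = {req for reqs in test_cases.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]
print(uncovered)  # ['REQ-003'] -- a requirement with no test coverage
```

The same mapping, read in the other direction, tells you which test cases must be revisited when a requirement changes, which is the maintainability benefit mentioned above.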
During test execution, the tester executes all planned test cases and verifies whether the expected results match with the actual results. This can be performed manually or automatically using appropriate tools based on the type of testing.
Test recording is the process where, as proof of execution, the tester documents all test results in a test log.
Test reports are generated either manually or automatically.
Test reports help in understanding the progress of testing made to date, or at a frequency defined in the test plan.
Test records are used to evaluate the quality of the software.
Testing for a phase can be considered complete once the exit criteria defined in the test plan are met.
Test closure report is an important deliverable before the testing activity is considered complete.
This can be at the end of the testing phase or at the end of entire project testing in different phases.
Document review is also considered as part of testing. It helps in identifying defects in the early phases of testing when the executable code is under development.
Document review can help find missing or incorrect requirements, and defects in the design and even in the architecture of the application.
In the next section, we will discuss the risks involved in testing and mitigation.
Risk Involved in Testing and Mitigation
All possible scenarios cannot be tested, just as how the end user will use the product cannot be predicted. The features impacted by the latest code changes are also not always known. Considering these facts, the decision to move a product to the next phase of the software development lifecycle is always accompanied by risk.
Let us see how these risks can be controlled.
- Risks can be mitigated with automated testing that has significant coverage.
- Acceptance level tests with commonly used features in the product can also help reduce the risk.
- Test reporting should provide sufficient information to stakeholders to make decisions for the next development step or release to customers.
- The effort used in quality assurance and testing activities needs to be tailored according to the risks and costs associated with the project.
- Given the limits on budget, time, and testing, decide how to focus the testing based on the risks.
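One common way to act on that last point is risk-based prioritization: rank test areas by probability of failure times impact, and spend the limited budget on the highest-risk areas first. The areas and scores below are invented for illustration.

```python
# Risk-based test prioritization sketch. Each area is scored as
# (assumed probability of failure, assumed impact on a 1-10 scale).

areas = {
    "payment processing": (0.7, 9),
    "user login":         (0.5, 8),
    "report export":      (0.4, 3),
    "theme settings":     (0.2, 1),
}

ranked = sorted(areas, key=lambda a: areas[a][0] * areas[a][1], reverse=True)
print(ranked)  # highest-risk areas first
```

Under these assumed scores, payment processing and login absorb most of the effort, while theme settings gets only a light pass.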
In the following section, we will find out when to stop testing.
Timeline to Stop Software Testing
In software testing, it is important to know when to stop the process. If the aim is zero software defects, the testing process may never get completed.
In the figure given below, the Y-axis depicts the value of testing, and the X-axis depicts the cost of testing.
At the onset of the testing process, the cost of testing is low; however, the value delivered is very high, as there are a large number of critical defects in the system.
With time, the cost of testing keeps increasing due to the addition of resources.
However, the value of testing drops as most critical defects have already been addressed in the previous cycles.
Value can also drop as some identified defects may not be significant enough to cause software failure.
In addition to evaluating risks associated with testing, it is also wise to consider the cost-benefit analysis to decide when to stop testing.
When the value delivered from testing becomes less than the cost incurred to run the tests, testing should be stopped. This is depicted by the red arrow in the figure in this section.
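This stopping rule can be written as a toy model: testing continues while the value delivered per cycle exceeds the cost of running that cycle. All numbers below are invented to illustrate the shape of the curves, not real project data.

```python
# Value per test cycle drops as critical defects run out;
# cost rises as more resources are added. Numbers are illustrative.

cycle_value = [500, 300, 180, 100, 60, 35]
cycle_cost = [50, 60, 75, 90, 110, 130]

stop_cycle = None
for cycle, (value, cost) in enumerate(zip(cycle_value, cycle_cost), start=1):
    if value < cost:
        stop_cycle = cycle
        break

print(f"Stop testing at cycle {stop_cycle}")  # Stop testing at cycle 5
```

With these assumed figures, cycle 5 is the first point where cost (110) exceeds value (60), so testing would stop there.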
We will look at an example to elaborate this concept in the following section.
Timeline to Stop Software Testing – Example
Three months into testing of a major release of the online railway reservation system, the Test Manager was still not confident of the quality of the release.
The holiday season was fast approaching, and the Manager was under pressure to release the product to support the load of holiday bookings. How should the Test Manager decide whether to give the go-ahead for the release of the product?
The Test Manager needs to analyze the following information for decision-making:
- Whether all the critical functionalities have been thoroughly tested;
- If there are any open critical defects;
- Whether there are more critical defects being reported in the ongoing testing;
- The risk involved if the application is released without further testing; and
- If the application is ready to handle the load of the holiday season.
If all the above criteria are met, the Test Manager should give the go-ahead to the release of the product. The business opportunity owing to the holiday season warrants that the application is released as long as there is only a minimal risk involved.
In the following section, we will discuss the fundamental test process.
Fundamental Test Process
You have previously learned about the test activities performed at a high level to meet the defined testing objectives; let us now find out how these activities are mapped to the different phases of a test lifecycle.
As you can see below, these phases are:
- Test Planning and Control
- Test Analysis and Design
- Test Implementation and Execution
- Evaluating Exit Criteria and Reporting
- Test Closure
It is important to note that while these phases are sequential, they are also iterative in nature.
For example, during test execution, there may be a need to go back to test design to introduce more test cases or test data before the test execution process is resumed.
Alternatively, during exit criteria evaluation, it can be decided to execute some more tests before the application is considered fit for release. Hence, all phases interact and might transition from one to the other based on the needs of the project.
We will discuss each phase separately in the next few sections.
Let us begin with the first phase of the test process, which is ‘Test Planning and Control,’ in the following section.
Phase 1 – Test Planning and Control
In the Test Planning and Control phase, you need to ensure the goals and objectives of the customers, stakeholders, and project are understood. Additionally, evaluate the risks of the system to be addressed by testing.
Based on this understanding and as a part of test planning, specify the objective of testing, and determine the scope and risk.
The next activity is to design the test strategy, identify the resource requirements, and schedule test analysis and design tasks.
Then, plan for test implementation, execution, and evaluation, and also determine the exit criteria for testing.
As a part of planning, you also need to plan for test controls, which will help in measuring the progress against the plan, and in taking corrective actions as and when required.
In the planning phase, we also design the test environment and identify the required infrastructure and tools. This includes testing and support tools such as spreadsheets, word processors, and project planning tools, as well as non-IT tools and equipment.
In the next section, let us look at the next phase of the test process, which is ‘Test Analysis and Design.’
Phase 2 – Test Analysis and Design
The Test Analysis and Design phase involves a review of the test basis and the identification of test conditions.
Let us take a closer look at both these activities.
Review of the test basis includes a review of product requirements, architecture, design specifications, and interfaces between the products. It also includes examining the specifications for the software being tested.
All these artifacts are called “test basis” as these are used as a basis for defining what and how you should test.
The designing of black-box tests can begin before the code is developed.
As the test basis is studied, gaps and ambiguities in the specifications are identified. Identifying these gaps and ambiguities, by attempting to anticipate incidents at every point in the system, helps prevent defects from appearing in the code.
After understanding the specifications, identify test conditions based on the analysis of test items, and their specifications and behavior.
The test conditions provide a high-level list of the areas of interest in testing. Test techniques are then used to define the test conditions.
We will continue our discussion of the activities of this phase in the following section.
Phase 2 – Test Analysis and Design (contd.)
The requirements and system are also evaluated for testability in the Test Analysis and Design phase.
The requirements may be written in a way that allows a tester to design tests, or they may not be testable at all.
For example, if the performance of the software is important, it should be specified in a testable way.
If the requirement is specified as “the software needs to respond quickly,” it is not testable, as “quick” can be interpreted in more than one way. A more testable requirement could be, “the software needs to respond in 2 seconds with 10 people logged on”.
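A requirement phrased that way can be checked directly. The sketch below is one hedged way to verify "respond in 2 seconds with 10 people logged on": fire 10 concurrent requests and assert on the slowest response. `handle_request` is a stand-in for the real system call, with a sleep simulating the work.

```python
# Checking a testable performance requirement: 10 concurrent users,
# every response within 2 seconds. handle_request is a placeholder.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    start = time.perf_counter()
    time.sleep(0.05)  # stand-in for the real request/response round trip
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    durations = list(pool.map(handle_request, range(10)))

slowest = max(durations)
assert slowest <= 2.0, f"Requirement violated: slowest response {slowest:.2f}s"
print(f"slowest of 10 concurrent responses: {slowest:.2f}s")
```

Because the requirement names both the threshold (2 seconds) and the load (10 users), the test has an unambiguous pass/fail outcome, which "the software needs to respond quickly" never could.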
The testability of the system depends on various aspects, such as the feasibility of setting up the system in an environment that matches the operational environment. Other aspects of the test analysis and design phase are the comprehensibility and testability of all the possible configurations and uses of the system.
In the next section, let us look at the next phase of the test process, which is ‘Test Implementation and Execution.’
Phase 3 – Test Implementation and Execution
During the third phase, Test Implementation and Execution, the test conditions designed are taken and set up as tests. The test environment is also set up before executing the tests.
Implementation includes prioritizing the test cases using test techniques and the test approach, and creating test suites from the test cases for efficient execution. You need to ensure the test environment has been set up correctly, by running specific tests on it if possible.
Execution is running the test suites and individual test cases, following the pre-defined test procedures. This is done manually or by using test execution tools according to the planned sequence.
At the end of the execution of each test case, log the outcome and record the identities and versions of the software under test, test tools, and testware.
You also need to know which tests were run against which version of the software, report defects against specific versions, and maintain the test log to provide an audit trail. Then, compare actual results (what happened when the test was run) with expected or anticipated results.
For differences between actual and expected results, report discrepancies as incidents.
Once the discrepancies have been fixed, repeat test activities to verify whether the fix has resolved them.
Re-execute previously failed tests to confirm a fix is working. This is also known as confirmation testing or re-testing.
Test that the fix did not introduce defects in unchanged areas of the software and that fixing a defect did not uncover other defects. This is called regression testing.
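The distinction between confirmation testing and regression testing can be shown with a small sketch. The fixed function and its defect history are hypothetical.

```python
# A function after a (hypothetical) defect fix: it previously
# crashed on inputs carrying a currency symbol.

def parse_amount(text):
    return float(text.lstrip("$"))

# Confirmation testing (re-testing): the previously failing case
# now passes, confirming the fix works.
assert parse_amount("$19.99") == 19.99

# Regression testing: previously passing cases still pass,
# confirming the fix did not break unchanged behavior.
assert parse_amount("5") == 5.0
assert parse_amount("0.50") == 0.5

print("confirmation and regression checks passed")
```

The first assertion re-runs the failed scenario; the remaining assertions guard the behavior that was already correct before the fix.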
In the next section, we will discuss the fourth phase, which is ‘Evaluating Exit Criteria and Reporting.’
Phase 4 – Evaluating Exit Criteria and Reporting
After test execution ends, the evaluation phase begins.
In this phase, you need to measure the results of test execution against the test objectives and broadcast the test summary report to stakeholders.
Return to test execution if the test objectives have not been met and some more testing needs to be conducted.
Then, prepare the test summary report, which is a document summarizing testing activities and results. The test summary report also contains an evaluation of the corresponding test items against the exit criteria.
In the next section, we will discuss the fifth and final phase of the testing process, which is ‘test closure.’
Phase 5 – Test Closure
When testing is considered complete, you can move into the Test Closure phase.
In this phase, you need to check that all deliverables have been accepted and signed off, archive the testware, close the test environment, analyze lessons learned, and use this information to improve test maturity.
Let us now move on to the next topic, ‘Psychology and Ethics of Software Testing’ in the following section.
Psychology and Ethics of Software Testing
In the next few sections, we will understand the psychology and ethics of software testing.
Let us begin with the psychology of testing in the next section.
Psychology of Testing
After understanding the complete process of testing, let us now look at the psychology for effective testing.
The degree of independence is vital in performing effective testing.
As seen in the figure in this section, the quality and effectiveness of testing increase with an increase in the degree of independence.
Let us understand what independence in the context of testing means.
If tests are designed by the person who wrote the software, the level of independence is low. This is because the person who wrote the software tends to overlook defects or may repeat the mistakes made while coding.
Also, Developers may not want to see any defects in their code, so they execute the code with the intention of getting correct results rather than “breaking the system.”
Hence, the quality of testing is relatively low.
When tests are designed by another person but from the same development team, they share a similar mindset. However, since the person is not the owner of the software, there is an increased chance to identify the mistakes of the Developer.
In this case, the independence of testing is comparatively more, thereby increasing the effectiveness of testing.
If tests are designed by a person from a different organizational group, for example, by an independent test team, then the degree of independence will be higher.
In this case, they will not be biased as they are not the creators of the code while at the same time they will have a mindset to “break the system.” As they evaluate software against the requirement, they will be able to catch many more defects including requirements, design, and code-related defects.
If tests are designed and carried out by a person from a different organization, or by a company that is a certified external body, they would, in addition to a mindset of identifying defects, also bring in experience from similar domains, technologies, and types of testing.
Hence the quality of testing will be at its best in this scenario.
In the following section, we will compare the mindset of a Developer with a Tester.
The Mindset of Developer vs. Tester
The Developer always thinks that there are no defects in the code as it was carefully developed. Developers may misinterpret the business requirement, hence assume that the code is functioning the way it is supposed to.
Testers have to think like a destroyer. They always think that there are defects in code, waiting to be uncovered. Testers view requirements from the end-user’s point of view and hence are able to identify user-related defects.
After discussing the mindset of Developers and Testers, let us find out how this gap can be bridged in the following section.
Bridging the Gap
To make software testing successful, there should not be any friction between the teams involved in building the software. The following considerations can help bridge the gap between Developers and Testers:
- Both teams share a common goal, which is the betterment of the system under test.
- Testers should ensure they are raising the defect against the system and not individuals.
- Both should try and reach a common understanding of defects.
- Any incorrect understanding of requirements should be discussed and resolved. Business Analysts can be approached to assist in such situations.
In the next section, through an example, let us find out why a team spirit between a tester and a developer is important.
Importance of One Team Spirit – Example
A project cannot function smoothly without all teams in the project working towards a common goal.
For example, the Test Manager and the Development Manager of a large project were both nominated for promotion in an appraisal cycle.
However, the Project Manager could only promote one of them.
Both managers knew this and hence constantly tried to put each other down to win the promotion. The Test Manager tried to deliberately delay the project by citing critical defects and blaming the development team for the poor quality of the code.
The Development Manager blamed the delay on the Test team. Due to this blame game, the project missed its crucial timeline.
In the next section, we will discuss the code of ethics for software testing.
Code of Ethics
Let us look at the Code of Ethics prescribed for Testers by the ISTQB. This code of ethics is derived from those of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE).
Testers, as a part of their job, will have access to privileged and confidential information.
Testers should follow the code of ethics to ensure that confidential information is not used inappropriately.
A certified Software Tester should:
- Not hurt public interest in any way
- Act according to the requirements of the client and employer
- Ensure that all deliverables on the product they test meet the highest possible standards
- Maintain integrity and independence in judgment
- Promote and maintain an ethical approach
- Advance integrity and reputation of the profession
- Provide fair support and co-operation to colleagues
- Practice an attitude of lifelong learning and ethical approach to the profession
With this, we have reached the end of the lesson.
Let us now check your understanding of the topics covered in this lesson.
Here is a quick recap of what we have learned in this lesson:
- Software is designed manually by humans and is therefore subject to errors.
- Testing is the process of executing a program or part of a program with the intention of finding errors.
- The different phases of a test life cycle are Test Planning and Control, Test Analysis and Design, Test Implementation and Execution, Evaluating Exit Criteria and Reporting, and Test Closure.
- The degree of independence is vital in performing effective testing.
Software testing as a profession is gaining more popularity with each passing day.