
Understanding quality assurance in software engineering

Quality assurance, or QA, is an often-misunderstood engineering term. Developers and clients tend to question the need for quality assurance as part of their app development and there are many misconceptions around what quality assurance testing is, how it benefits the final product and why there is a need to test the application and code in the first place.

To clear up some of the most common misconceptions and to get a better idea of what quality assurance is all about, Glyn Roberts spoke to Axana Skinder, Head of the QA Department at iTechArt. Axana has more than 15 years of experience in QA, eight of those at iTechArt.

Glyn: What is quality assurance?

Axana: Quality assurance aims to prevent bugs from appearing in applications. The quality assurance team works to make the application as good as it can be, and they work on every area of the application, not just the code. They look at the idea for the application, how it’s made and whether the user’s needs are met. It’s a full overview of the quality of use.

My goal is to help clients establish key testing processes that improve the quality of their products.

Glyn: What kind of organisation might need a quality assurance engineer?

Axana: I work with all kinds of companies, from startups to large enterprises. We often work with companies who develop products that carry financial, ecological or physical risk, such as healthcare companies or financial institutions. The level of risk is very different when you are producing a photography application compared to when you are creating an application for an airline, for example, so there is a greater need for quality assurance. The severity of a bug in the application is much higher if it might lead to significant financial loss or cause physical harm.

Quality assurance is also useful for companies who need to protect their brand reputation. Occasionally we work with companies who have a product that is already in production and who are receiving negative feedback about how their product is functioning and the issues users are experiencing. In this case we need to ensure that the entire application begins to perform better and that users don’t encounter further issues.

Glyn: Could app developers and creators just test the application themselves instead of using quality assurance?

Axana: Generally speaking, product owners test their applications quite well, but they are testing to see if the product works properly. They verify critical scenarios and common business scenarios. Quality assurance testers actually have the opposite goal. They’re trying to find bugs. They are not trying to make the application work; they are trying to find the use case scenarios in which it doesn’t work and problems arise. Testers will run through several different scenarios and their job is to ask ‘what if?’ What if we open this web application on different browsers? What if we press the disable button? What if a call comes in while we’re using the application?

Product owners know what their application does, and they test that it works. But as soon as you open the application up to a wider audience, you will find that people behave in a range of different ways that you hadn’t even imagined. To give you an example, we were testing an application for children 3-11 years old, and we received a bug report saying that a child had licked the screen on the application and it had crashed. This is the kind of bug that will arise when the application goes out to the public, and as quality assurance engineers, we have to be alert to testing for a wide range of scenarios that product owners might not think about. We use a combination of imagination and experience to think about different ways that people could use the application.


Glyn: Are the developers who created the code able to carry out the quality assurance testing? Why do you need separate team members?

Axana: Developers should take charge of testing their own code and they should be responsible for making sure that it works properly, but that’s very different to thinking through different ways that the application could be used. If a developer creates a simple login screen, they are responsible for testing whether it works when someone enters a username and password and hits enter. If that works then they’ve done their job, but when that login screen is part of the wider app, quality assurance engineers ask the bigger ‘what ifs’. What if the fields are left blank? What if we enter the existing username but the wrong password? What if we log in and then come back to this screen and try to log in again? What if I leave a space after the password?
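Those ‘what if’ questions translate directly into test cases. As a minimal sketch (the `login` function and its credentials are hypothetical, used only to illustrate the distinction), the developer’s happy-path check and the QA engineer’s negative checks might look like:

```python
# Hypothetical login validator, standing in for the application under test.
def login(username: str, password: str) -> bool:
    """Return True only for an exactly matching, non-blank credential pair."""
    valid_users = {"alice": "s3cret"}
    if not username or not password:
        return False  # blank fields are rejected
    return valid_users.get(username) == password

# Developer's check: the login works when used as intended.
assert login("alice", "s3cret") is True

# QA-style 'what if' checks:
assert login("", "") is False              # what if the fields are left blank?
assert login("alice", "wrong") is False    # existing username, wrong password
assert login("alice", "s3cret ") is False  # what if I leave a space after the password?
```

Each ‘what if’ becomes one assertion, which is also why these scenarios are easy to hand over, document and re-run later.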

We’re not just looking for problems with the code, we’re looking for problems with the requirements of the overall application. For example, we have a test type called end-to-end testing, which is one of my favourite types of test. We behave like the end user and we enter the application and follow the steps a typical user might follow, but we ask ourselves all of the ‘what if’ questions. The code could be written perfectly, but you could still find bugs when you look at all of the different ways the application could be used.

Testers also look at how the application functions on different devices. There is a huge amount of diversity in Android devices in particular, and that means a lot of different companies who might add something to their operating system. Some operating systems allow you to use a stylus, some allow you to add keyboards, so that opens up a huge list of possible scenarios that we want to verify.

A developer’s job is to write code. If they spend all of their time testing, then they can’t create code. It makes more sense to have them do quick tests to verify the code, and then hand over to quality assurance engineers who can really dig around and find any bugs.

Glyn: What are the key tests that quality assurance teams normally undertake?

Axana: That depends on the type of application. For web applications we need to run positive tests where we perform all of the ‘allowed’ behaviour in the application, and then negative tests where we perform all of the actions that aren’t anticipated or ‘allowed’. We also do end-to-end testing and we are likely to do a range of tests that relate to web compatibility and compatibility with different browsers.
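To make the positive/negative distinction concrete, here is a small sketch (the `is_valid_quantity` form-field validator is hypothetical): positive tests exercise the ‘allowed’ input the application anticipates, while negative tests feed in everything that isn’t.

```python
# Hypothetical form-field validator: accepts whole quantities from 1 to 99.
def is_valid_quantity(value: str) -> bool:
    return value.isdigit() and 1 <= int(value) <= 99

# Positive tests: 'allowed' behaviour the application anticipates.
for good in ["1", "42", "99"]:
    assert is_valid_quantity(good)

# Negative tests: input that isn't anticipated or 'allowed'.
for bad in ["0", "100", "-5", "3.5", "ten", "", " 7"]:
    assert not is_valid_quantity(bad)
```

In practice the negative list tends to grow much faster than the positive one, which reflects where QA testers spend their imagination.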

For mobile applications we need to check how the application behaves in a wider variety of situations: when the internet is slow or there is no internet connection, when the user switches between wi-fi and 4G or 5G, when flight mode is on. We also need to check how the application works when there is an incoming call or if the charger gets plugged in, how it reacts to headphones, or if the screen gets locked. For mobile applications it’s also important to check different screen sizes and resolutions and different memory sizes.

We have more than 100 test types for mobile applications. We tend to pick the most appropriate tests before we begin quality assurance testing. We usually select 5-10 test types based on the ones that we feel are most likely to give us the most bugs, because that’s how we can provide the most value from the testing.

Glyn: What’s the difference between manual and automated testing?

Axana: Automated testing means engineers write code that tests other code. Manual testing means that the engineer performs each individual test themselves.

Imagine that we have an application that is a calculator and we just want to make sure that it works. In this case it makes sense to create automated tests. We can create a script that runs automatically so that every 30 minutes we have performed 1,000 calculations and verified the results. These are the kinds of situations where automated testing is really useful and saves a lot of time.
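A run like that could be sketched as follows (the `calc` function is a hypothetical stand-in for the calculator under test; a scheduler such as a cron or CI job would invoke the suite every 30 minutes):

```python
import random

def calc(expression: str) -> float:
    """Hypothetical calculator under test: evaluates 'a op b' strings."""
    a, op, b = expression.split()
    a, b = float(a), float(b)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def run_suite(iterations: int = 1000) -> int:
    """Perform randomised calculations, verify each result, count failures."""
    failures = 0
    for _ in range(iterations):
        a, b = random.randint(-1000, 1000), random.randint(-1000, 1000)
        for op, expected in (("+", a + b), ("-", a - b), ("*", a * b)):
            if calc(f"{a} {op} {b}") != expected:
                failures += 1
    return failures

# The scheduled job would alert if any run reports failures.
assert run_suite() == 0
```

Writing the script once and letting it run unattended is exactly the trade-off described above: the upfront cost pays off when the same checks repeat thousands of times.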

Automated testing is not a good option when we need to verify the user interface or lots of smaller elements like fonts and colours. It would take longer to write the test script than just to do it manually.

Manual testing and automated testing both have a place in quality assurance; it depends on what you’re trying to achieve. While an application is still in development and changes are still being made, manual testing usually makes more sense: if we create an automated testing script and then the code we’re testing changes, we need to rewrite the whole script.

Glyn: At what point in the development process should a QA engineer join your team?

Axana: Quality assurance engineers could be part of the team from the very beginning if you want to prevent bugs during the build process, but it’s also possible to add them later in the process if you just want to test the final application.

However, always remember that quality assurance teams need some time to get familiar with the requirements and to prepare their lists of tests and checklists. This will help them to approach the testing in a more coherent and efficient way. You also need to think about leaving time for the bugs to be fixed. If you leave the testing too late you could be in a situation where you need to consider pushing your launch date back, so bringing your testers in early enough is really important.

Glyn: What is a test plan and why is it important?

Axana: A test plan is simply a document that outlines the agreement between the client and the testing team. It sets out clear expectations around what is being tested, what the scope is, where it will be tested and what different tools we’re using.

When we do rapid testing or agile testing, sometimes there is no time to set out a full test plan, but it’s really important to still have clear agreements. There is no such thing as being finished with testing. We can only stop testing. That means that we need to agree on what needs to happen for us to stop. It comes down to a business decision around the value of testing. We can keep going and keep finding very small, perhaps insignificant bugs, but we need to weigh up the value and find a good balance between ensuring the application is usable and bringing value for money.

We also need to agree on what to do if critical flaws or errors are found. Do we stop all other tests or do we continue? Finally, it’s useful to agree on a handover and documentation plan. How will we document the testing we have done?

Just like the testing itself, when we create a test plan we ask a lot of ‘what ifs’ and we come to an agreement that weighs up the value of quality assurance against the cost.

Glyn: What are your top tips for starting out with QA?

Axana: Make sure you start with really clear goals, and then make sure that every team member understands those goals. Does the client want to focus on UI, or on production issues, or end user testing? You need to be clear on your priorities and then share those with the whole team.

Secondly, quality assurance engineers and developers need to work closely together and be in good communication. Ultimately, the developers create the app and understand the code and what they want to achieve. The quality assurance team simply ensures that they have achieved that and helps them find any gaps or bugs so that their original vision is carried out.

Other than that, I would say bring your QA team in early, give them a clear brief about what you want your application to do, and ensure you’ve allowed time for the bugs that they find to be rectified. If you want to ensure that your application is working properly, isn’t prone to bugs and is ready for users who will use it in all kinds of unexpected ways, quality assurance can be a really useful part of your development process.
