Given the cyclic nature of iterative development, Quality Assurance (QA) specialists play a vital role in the successful completion of builds. First, QA specialists clarify requirements, including supported devices and operating systems. Next, they test builds as they become ready. Finally, they suggest UX improvements throughout the entire product development lifecycle.
Let’s look at the role of Yalantis’s Quality Assurance specialists through each stage of the app development process.
At Yalantis, we engage QA experts long before the start of development. At early stages, they help us identify problems with a product's business logic, eliminating some potential problems before we even develop project documentation. This also reduces development costs.
Additionally, QA specialists help define and analyze an application’s features. And at this stage, the QA specialist is already thinking about how each feature will be tested.
Once the specification is ready, designers create wireframes. At this stage, QA specialists ensure that wireframes reflect all business logic described in the specification. They also check that everything complies with Google's or Apple's guidelines.
Traditionally, all quality assurance and testing activities are documented in the test plan before the start of a project. But here at Yalantis, we prefer to minimize the number of lengthy documents. Thus, our QA specialists create a test strategy document that describes:
Test environments (test devices, OS versions, etc.)
Types of testing that will be run, taking into account the specifics of the project
Criteria for the start and end of testing
Before the start of each sprint, the responsible QA specialist participates in our sprint planning meeting, where we discuss the scope of the sprint and the implementation details. This helps the QA specialist determine which features they need to create a checklist for, and gives them the opportunity to negotiate delivery dates for builds with the team so they can run tests mid-sprint and regression tests at the end of the sprint.
We don’t create a checklist for the entire project in one go because that’s quite a long process and because requirements may change in the future, making parts of a comprehensive checklist irrelevant. Instead, we create checklists only for the next sprint. This helps us eliminate unnecessary work and save our clients’ money. We create documentation only when and where it’s necessary.
After the sprint planning is done and tasks in the sprint scope are defined, QA specialists can start creating a checklist. But why a ‘checklist’ and not actual test cases?
Test cases consist of a title, preconditions, steps to be taken, and expected results.
A checklist serves the same purpose as a test case but is less detailed and takes less time to edit. This makes it a valuable tool for large projects.
Here’s an example of a test case:
And an example of a checklist:
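To make the difference concrete, here is a hedged sketch of the same login check expressed both as a full test case and as terser checklist entries. The feature, steps, and wording are hypothetical, not taken from a real Yalantis project.

```python
# Hypothetical illustration: one check written as a detailed test case
# (title, preconditions, steps, expected result) and as checklist items.

test_case = {
    "title": "User can log in with valid credentials",
    "preconditions": ["App is installed", "A registered account exists"],
    "steps": [
        "Open the app",
        "Tap 'Log in'",
        "Enter a valid email and password",
        "Tap 'Submit'",
    ],
    "expected_result": "The home screen is displayed",
}

# The checklist covers the same ground in a single line per check,
# which is faster to write and to keep up to date.
checklist = [
    "Log in with valid credentials",
    "Log in with an invalid password shows an error",
    "Password reset email is sent",
]

print(len(test_case["steps"]))  # 4
```

Notice how the checklist trades detail for speed of maintenance, which is exactly why it scales better on large projects.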
This checklist was created in TestRail, a test management tool that stores information about all tests throughout the entire development process. Thus, all testing statistics for the entire period of product development are saved in one place and presented in a simple, visual format.
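As a sketch of how results end up in TestRail, the snippet below builds the request for TestRail's real API v2 endpoint `add_result_for_case`. The instance URL, run ID, and case ID are hypothetical, and actually sending the request would require a TestRail account and API key, which is omitted here.

```python
# Sketch: composing a TestRail API v2 request to record one test result.
# The endpoint name and status IDs are real TestRail conventions
# (1 = passed, 5 = failed); the URL and IDs below are made up.

BASE_URL = "https://example.testrail.io/index.php?/api/v2"
STATUS_PASSED = 1
STATUS_FAILED = 5

def build_result_request(run_id: int, case_id: int, passed: bool, comment: str):
    """Return the endpoint URL and JSON payload for posting one result."""
    endpoint = f"{BASE_URL}/add_result_for_case/{run_id}/{case_id}"
    payload = {
        "status_id": STATUS_PASSED if passed else STATUS_FAILED,
        "comment": comment,
    }
    return endpoint, payload

endpoint, payload = build_result_request(
    12, 345, True, "Checked on Pixel 7, Android 14"
)
print(payload["status_id"])  # 1
```

Because every result is posted against a run and a case, TestRail can aggregate pass/fail statistics per sprint and per feature automatically.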
A checklist is kept up-to-date throughout the product development process.
Test environment setup
When the checklist is ready and developers finish their last revisions to a feature, QA engineers need to ensure that the test environment is ready. Preparing a mobile application testing environment involves:
Preparing a test device with a certain version of the OS
Installing any necessary applications, such as Fake GPS
Logging into Facebook accounts, etc.
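The setup steps above can be captured as a small per-device environment matrix, so it is easy to see what still needs preparing before testing starts. Everything in this sketch (devices, OS versions, app names) is a hypothetical example.

```python
# Hypothetical test-environment matrix derived from the setup steps:
# device + OS version, helper apps to install, accounts to log into.

TEST_ENVIRONMENTS = [
    {
        "device": "Pixel 7",
        "os": "Android 14",
        "preinstalled_apps": ["Fake GPS"],
        "accounts": ["facebook"],
    },
    {
        "device": "iPhone 13",
        "os": "iOS 17",
        "preinstalled_apps": [],
        "accounts": ["facebook"],
    },
]

def missing_setup(env, required_apps):
    """Return the required helper apps not yet installed on this device."""
    return [app for app in required_apps if app not in env["preinstalled_apps"]]

print(missing_setup(TEST_ENVIRONMENTS[1], ["Fake GPS"]))  # ['Fake GPS']
```

A matrix like this also doubles as the "test environments" section of the test strategy document mentioned earlier.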
Our QA specialists test both frontend and backend parts of the application. You can find more information about API testing in one of our previous articles. In this article, we’ll tell you more about frontend testing.
Once developers finish a feature, it moves on to new feature testing.
New feature testing
During this stage, a new feature is thoroughly tested. QA specialists test:
Functionality (how it works)
Compliance with design (UI/UX)
This stage of testing is considered complete when all test cases related to the feature are passed. During this stage the feature is tested independently, meaning that interactions with other features aren’t tested. We also make sure that a new feature works correctly on various devices.
What is affected. New feature testing checks that features look and work as expected, and ensures that bugs are caught quickly.
Regression testing
As the project gains more and more functionality, QA specialists need to ensure that all features work properly together and that new features don't break existing product functionality. To do this, we conduct regression testing.
Regression testing makes sure that changes to code (new features, bug fixes) don’t adversely affect previously implemented functionality. Each feature, as well as the interaction among features, is tested thoroughly. Code must pass all test cases to be accepted.
Regression testing is usually done before release and can be run on one or more devices. It helps to define the influence of changes to the existing product (broken functionality, number of relevant defects).
Regression testing can take a lot of time. To speed it up, it can be combined with smoke testing: one device runs the full set of test scenarios while the other devices are used for smoke tests.
What is affected. Assessment of the current state of the product.
Since regression testing is quite a time-consuming and expensive process, we recommend you do it once every three or four sprints (depending on the project size) and before the release.
Smoke testing
This fast, superficial type of testing checks that a build is stable and that core features are working. The highest-priority scenarios are checked for every feature, but not in detail.
Smoke testing makes sure that the main features of the app are working after some scope of work has been completed (bugfixes, feature changes, refactoring, server migration), and takes only a small amount of time.
What is affected. Smoke testing is a quick check that a build is stable. But it can’t replace thorough testing since issues for alternative scenarios can be missed.
Update testing
Update testing is performed only for apps already on the market. This kind of testing makes sure that existing users won't be adversely affected by a new version of the app: that their data won't be lost, and that both old and new features will work as expected.
What is affected. Existing users and their data, as well as existing features.
Non-functional testing
QA testers also perform so-called non-functional tests that don't touch upon an app's specific features. We carry out the following non-functional tests:
Installation testing, which checks if an app installs and uninstalls successfully.
Compatibility testing, which examines how an app works on different operating system versions and on different devices.
Usability testing, which checks how convenient the app is to use and how intuitive its UI/UX is.
Condition testing, which checks an app’s performance during low battery conditions or lack of internet connection.
Compliance checking, which ensures that an app complies with Google’s or Apple’s guidelines.
All of these types of testing can be performed with or without a checklist. The checklist is simply a framework that helps us stick to the plan and makes sure we don't forget anything. But it's impossible to document every check we perform, and quite simply, it's not necessary. Thus, our testers use a checklist alongside exploratory testing, keeping testing quick and flexible. This allows QA engineers to focus on real problems.
Test automation
If project development takes six months or more, it makes sense to automate basic test scenarios. Typically, we automate smoke testing, regression testing, and performance testing. You can set an autotesting schedule, for example running the tests every night or on every build. This saves time and lets you focus manual testing on more complex, custom cases.
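A scheduled automation job boils down to a suite of small checks plus a runner that reports pass/fail, which a CI system can then execute nightly or per build. The sketch below shows that shape; the individual checks are hypothetical placeholders rather than real device automation.

```python
# Minimal sketch of an automatable smoke suite: each check is a function
# returning True/False, and the runner collects results so a scheduled
# CI job can gate the build on the outcome.

def check_app_launches():
    # In practice: launch the app via a UI-automation driver.
    return True

def check_login_screen_shown():
    # In practice: assert that the login screen becomes visible.
    return True

SMOKE_CHECKS = [check_app_launches, check_login_screen_shown]

def run_smoke_suite(checks):
    """Run every check; return (passed, failed) lists of check names."""
    passed, failed = [], []
    for check in checks:
        (passed if check() else failed).append(check.__name__)
    return passed, failed

passed, failed = run_smoke_suite(SMOKE_CHECKS)
print(len(failed))  # 0
```

Keeping each check tiny and independent is what makes the suite cheap to re-run on every build.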
Auto-tests aren't always cost-effective, however, especially for small projects that take less than six months. Moreover, not all kinds of tests can be automated, and for others it simply doesn't make sense. For new feature testing, for example, it makes sense to check new features manually first in order to identify problems quickly so developers can fix them as soon as possible. Only after this initial manual testing do we automate tests for those features.
Autotests are quite expensive because you need to write tests and maintain them throughout the entire development process.
It’s important to quickly provide feedback about a product’s quality and any defects identified during the mobile app testing process. So as soon as a bug is spotted, it’s registered in Jira, a bug tracking system.
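To illustrate what such a bug registration might carry, here is a hedged sketch that composes a payload for Jira's real REST API issue-creation endpoint (`POST /rest/api/2/issue`). The project key, field values, and helper function are hypothetical, and authentication and the HTTP call itself are omitted.

```python
# Sketch: building the JSON payload for creating a Bug issue via the
# Jira REST API (v2). Field structure follows Jira's documented schema;
# the project key and contents below are made-up examples.

def build_bug_payload(project_key, summary, steps, expected, actual):
    """Assemble a Jira issue payload with a structured bug description."""
    description = (
        "Steps to reproduce:\n"
        + "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
        + f"\nExpected: {expected}\nActual: {actual}"
    )
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }

payload = build_bug_payload(
    "APP",
    "Crash on login with empty password",
    ["Open the app", "Tap 'Log in'", "Leave password empty", "Tap 'Submit'"],
    "Validation error is shown",
    "App crashes",
)
print(payload["fields"]["issuetype"]["name"])  # Bug
```

Structuring the description with reproduction steps, expected behavior, and actual behavior is what lets developers act on the report without a follow-up conversation.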
All team members have access to Jira, so all bugs and their statuses are visible to all participants in the development process.
At the end of testing, statistics for the completed tests and check results can be found in TestRail.
After each sprint, we present builds to our clients so they can see our progress, assess the quality and convenience of features, and make changes to the requirements if necessary. We also provide release notes showing which features have already been completed and which we are still working on, along with a list of identified problems that will be fixed soon. We’re open with our clients about our development challenges and share all reasonable information.
Testing before release
We carefully check our products before sending them to market. As a rule, we perform regression testing and exploratory testing. Last but not least, right before the final release we bring in another QA specialist who hasn't been working on the project to provide a fresh pair of eyes.
If you have questions about our testing processes or Quality Assurance, send an email to firstname.lastname@example.org.