Testing and Quality Assurance at Yalantis

Given the cyclic nature of iterative development, quality assurance (QA) specialists play a vital role in the successful completion of builds. First, they clarify requirements, including supported devices and operating systems. Next, they test builds as they become ready. Finally, they suggest UX improvements throughout the entire product development lifecycle.

This piece will help you understand how the entire testing process is done at Yalantis. 

Steps of the quality assurance process

At Yalantis, testing is divided into the following sequential stages: 

[Figure: The testing process at Yalantis consists of several sequential stages]

Let’s explore what activities and deliverables each stage involves. 

Requirements review

We engage our QA experts long before the start of development. They help us identify problems with a product’s business logic, eliminating some potential issues even before we develop project documentation. This also reduces development costs.

Additionally, our QA specialists help define and analyze your application’s features. And at the requirements review stage, QA specialists are already thinking about how each feature will be tested.

Once the specification is ready, our designers create wireframes. At this stage, QA specialists ensure that the wireframes display all business logic described in the specification. They also check that everything is in compliance with Google’s and/or Apple’s guidelines.

Test planning

Traditionally, all quality assurance and testing activities are documented in the test plan before the start of a project. But here at Yalantis, we prefer to minimize the number of lengthy documents. Our QA specialists create a test strategy document that describes:

  • test environments (test devices, operating system versions, etc.)

  • types of testing that will be run, taking into account the specifics of the project

  • criteria for the start and end of testing.

Before the start of each sprint, the responsible QA specialist participates in our sprint planning meeting. At the planning meeting, we discuss the scope of the sprint and the implementation details. This helps the QA specialist determine the list of features for which they'll need to create a checklist at the test design stage, and it gives them the opportunity to agree with the team on delivery dates for builds so that mid-sprint tests and regression tests can be run by the end of the sprint.

We don't create a checklist for the entire project in one go because that's quite a long process and because requirements may change, making parts of a comprehensive checklist irrelevant. Instead, we create checklists on a sprint-by-sprint basis. This helps us eliminate unnecessary work and save our clients' money. We create documentation only when and where it's necessary.

Test design

After sprint planning is done and tasks in the sprint scope have been defined, QA specialists can start creating either a checklist or test cases. 

Writing test cases is standard practice. A test case is developed for a particular test scenario to verify that an app works as it's supposed to. It consists of a title, preconditions, steps to be taken, and expected results.

Here’s an example of a test case:

[Figure: Example of a test case]
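
Structured this way, a test case is also easy to capture as data and keep under version control. Here's a minimal sketch in Python of one possible representation; the login scenario and field names are illustrative, not a fixed format we prescribe:

```python
# One possible way to capture a test case as structured data.
# The fields mirror the structure described above: title, preconditions,
# steps to be taken, and expected results. The scenario is illustrative.
test_case = {
    "title": "User can log in with valid credentials",
    "preconditions": [
        "The app is installed and launched",
        "A registered account exists",
    ],
    "steps": [
        "Open the login screen",
        "Enter a valid email and password",
        "Tap the 'Log in' button",
    ],
    "expected_result": "The user lands on the home screen",
}
```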

Test cases are effective for complicated test scenarios, but creating them is extremely time-consuming. To speed up the development process and cut your costs, we use test checklists instead of test cases to test user flows.

A checklist serves the same purpose as a test case but is less detailed and takes less time to edit. We use checklists for basic user flows. For instance, it's quite impractical to create test cases for standard user registration, which usually involves several steps. However, in the course of our experience testing apps, we've learned how to use test checklists even for complicated test scenarios. This makes software testing clear and effective.

An example of a checklist:

[Figure: Example of a testing checklist]

This checklist was created using a tool called TestRail, which stores information about all tests throughout the development process, so testing statistics for the entire period of product development are saved in one place. Statistics are available in a simple format:

[Figure: Testing statistics]

A checklist is kept up-to-date throughout product development.
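
TestRail also exposes a REST API, so check results can be pushed to it straight from test scripts. Below is a hedged sketch in Python using the requests library; the instance URL, credentials, and run/case IDs are placeholders, and the details may vary with your TestRail version:

```python
import requests

# Sketch: record a check result in TestRail through its REST API (v2).
# The URL, credentials, and IDs below are placeholders.
TESTRAIL_URL = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("qa@example.com", "api-key")  # email + API key

def report_result(run_id: int, case_id: int, passed: bool, comment: str = "") -> None:
    """Mark one case in a test run as passed (status_id 1) or failed (status_id 5)."""
    response = requests.post(
        f"{TESTRAIL_URL}/add_result_for_case/{run_id}/{case_id}",
        json={"status_id": 1 if passed else 5, "comment": comment},
        auth=AUTH,
    )
    response.raise_for_status()

# Example: mark case 42 in run 7 as passed.
# report_result(7, 42, True, "Registration flow verified on Pixel 6, Android 14")
```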

Test environment setup

When the checklist or test cases are ready and developers have finished their latest revisions to a feature, QA specialists need to ensure that the test environment is ready (see the sketch after this list). Preparing a mobile application testing environment involves:

  • preparing a test device with a certain version of the operating system

  • installing any necessary applications, such as Fake GPS, iTools, Charles Proxy, etc.

  • logging in to Facebook and other accounts.
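
For automated runs, this setup can also be expressed in code. Here's a minimal sketch using the Appium Python client; the device name, OS version, build path, and server URL are placeholders, and the details will differ depending on your Appium and client versions:

```python
from appium import webdriver
from appium.options.android import UiAutomator2Options

# Sketch: prepare an Android test session with the Appium Python client.
# All values below are placeholders for the environment agreed on earlier.
options = UiAutomator2Options()
options.platform_version = "13"         # target OS version
options.device_name = "Pixel 6"         # test device or emulator
options.app = "/builds/app-debug.apk"   # build delivered by the developers

# Connect to a locally running Appium server.
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
```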

Test execution

We apply both manual and automated testing to all applications we build to make sure the code for mobile and web apps, as well as server-side software, is of the highest quality. Our QA specialists test the frontend, backend, and APIs of your application. You can find more information about API testing in one of our previous articles. In this article, we’ll tell you more about frontend testing and also discuss what types of testing we use for our projects. 

Test reporting

It’s important to quickly provide feedback about a product’s quality and any defects identified during the mobile app testing process. So as soon as a bug is spotted, we register it in Jira, a bug tracking system.

[Figure: Bug registered in Jira]
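
When automated tests fail, the same registration can happen programmatically: Jira's REST API accepts new issues over HTTP. Here's a hedged sketch in Python; the instance URL, project key, credentials, and bug details are placeholders:

```python
import requests

# Sketch: register a bug in Jira through its REST API.
# The URL, project key, and credentials are placeholders.
JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"
AUTH = ("qa@example.com", "api-token")

bug = {
    "fields": {
        "project": {"key": "APP"},
        "issuetype": {"name": "Bug"},
        "summary": "Crash on login with an expired session token",
        "description": (
            "Steps to reproduce:\n"
            "1. Log in and wait for the session to expire\n"
            "2. Reopen the app\n"
            "Expected: re-login screen. Actual: crash."
        ),
    }
}

response = requests.post(JIRA_URL, json=bug, auth=AUTH)
response.raise_for_status()
print("Created:", response.json()["key"])  # e.g. APP-123
```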

All team members have access to Jira, so all bugs and their statuses are visible to all participants in the development process. At the end of testing, statistics for the completed tests and check results can be found in TestRail.

When we run automated tests, a report is automatically created and sent to our QA specialists.

After each sprint, we present builds to our clients so they can see our progress, assess the quality and convenience of features, and make changes to the requirements if necessary. We also provide release notes showing which features have already been completed and which we’re still working on, along with a list of identified problems that will be fixed soon. We’re open with our clients about our development challenges and share all reasonable information.

Types of testing we use

New feature testing

During this stage, a feature is thoroughly tested. QA specialists test:

  • the functionality of the app, or how it works

  • the app’s appearance

  • compliance with UI/UX design guidelines.

This stage of testing is considered complete when all test cases related to the feature have been passed. During this stage, the feature is tested independently, meaning that interactions with other features aren’t tested. We also make sure that a feature works correctly on various devices.

What is affected? New feature testing checks that features look and work as expected, and ensures that bugs are caught quickly.

As a project gains more functionality, QA specialists need to ensure that all the features work properly together and that new features don’t break existing functionality. To do this, we conduct regression testing.

Regression testing

Regression testing makes sure that changes to code (new features, bug fixes) don’t adversely affect previously implemented functionality. Each feature, as well as the interactions among features, is tested thoroughly. Code must pass all test cases to be accepted.

Regression testing is usually done before a release and can be run on one or more devices. It helps determine the influence of changes on the existing product, identifying any broken functionality and related defects.

Regression testing can take a lot of time. To reduce it, regression testing can be combined with smoke testing: one device is used for the full set of test scenarios while smoke tests run on several other devices.

What is affected? Regression testing checks the current state of the product.

Since regression testing is quite time-consuming and expensive, we recommend doing it once every three or four sprints (depending on the project’s size) and before release.

Smoke testing

This fast and superficial type of testing is performed to check that a build is stable and that core features are working. Scenarios with the highest priority are checked feature by feature, but not in detail.

Smoke testing makes sure that the main features of the app are working after some scope of work has been completed (bug fixes, feature changes, refactoring, server migration), and takes only a short amount of time.

What is affected? Smoke testing is a quick check that a build is stable. But it can't replace thorough testing, since issues in alternative scenarios can be missed.
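
If the automated checks run on pytest, one simple way to keep a fast smoke pass separate from the full regression suite is to tag tests with markers. A minimal sketch; the test names are illustrative, and the markers would be declared in pytest.ini:

```python
import pytest

# Sketch: tag tests so a quick smoke pass can be run separately from
# the full regression suite. Bodies are elided; names are illustrative.

@pytest.mark.smoke
def test_app_launches():
    ...

@pytest.mark.smoke
def test_user_can_log_in():
    ...

@pytest.mark.regression
def test_password_reset_email_is_localized():
    ...

# After a new build:   pytest -m smoke
# Before a release:    pytest -m "smoke or regression"
```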

Update testing

Update testing is performed only for apps already on the market. This kind of testing makes sure that existing users won't be adversely affected by a new version of the app – that their data won't be lost and that both old and new features will work as expected.

Non-functional testing

QA testers also perform so-called non-functional tests that don’t touch upon an app’s specific features. We carry out the following types of non-functional testing:

  • Compatibility testing, which examines how an app works on different operating system versions and on different devices.

  • Condition testing, which checks an app’s performance during low battery conditions or lack of internet connection.

  • Compliance checking, which ensures that an app complies with Google’s and/or Apple’s guidelines.  

  • Installation testing, which checks if an app installs and uninstalls successfully.

  • Interruption testing, which shows how an app reacts to an interruption and checks whether it's able to correctly resume its work. By interruptions, we mean network connection loss, phone calls, reminders, and so forth (see the sketch after this list).

  • Localization testing, which verifies that no new bugs have appeared after a product's content has been translated into another language.

  • Migration testing, which ensures that everything works seamlessly after adding new features or changing the technology stack of an already deployed app.

  • Usability testing, which checks an application’s performance and the intuitiveness of the UI/UX.
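
As an illustration, an interruption check like the one mentioned above can be scripted with the Appium Python client. This is a hedged sketch: it assumes a `driver` session like the one in the environment setup example, and the element identifier is a placeholder:

```python
from appium.webdriver.common.appiumby import AppiumBy

# Sketch: simulate an interruption by backgrounding the app, then check
# that it resumes on the screen the user left. Identifier is a placeholder.
def test_app_resumes_after_interruption(driver):
    driver.background_app(5)  # suspend the app for five seconds
    home = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "home_screen")
    assert home.is_displayed()
```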

All of these types of testing can be performed with or without a checklist. The checklist is simply a framework that helps us stick to the plan and makes sure we don't forget anything. But it's impossible to document every check we perform, and quite simply, it's not necessary. Our testers use a checklist alongside exploratory testing, making testing quick and flexible. This allows our QA specialists to focus on the real problems.

Testing before release

We carefully check our products before sending them to market. As a rule, we perform regression testing and exploratory testing. Last but not least, right before the final release we bring in another QA specialist who hasn't been working on the project, for a fresh pair of eyes.

Test automation

We also automate the main user flows for long-term projects, which are regularly exposed to changes and extensions of functionality.

For instance, with manual testing only, we'd need to constantly repeat the same steps to ensure that new features and changes in the code haven't affected other parts of the app. Instead of repeatedly testing everything by hand, we create automated tests that perform these checks for us.

Typically, our manual QA specialists perform tests and create a detailed test case. After that, they deliver their report to the automation QA specialist. On the basis of the information received, the automation QA creates automated tests and sets the time and frequency with which they will be run. 

In most cases, we automate smoke testing, regression testing, and performance testing. This saves time and lets us focus our manual testing on more complex, custom cases. We recommend automating tests if they need to be run for every build or release or if they require inputting large volumes of data.

For instance, we may have a lot of input fields in an app. Instead of manually entering different values in all these fields, we create a test that does it automatically and sends us the report. 
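
If such checks are automated with pytest, parametrization turns each value into a separately reported case. A minimal sketch; the simple regex validator merely stands in for the app's real validation logic:

```python
import re
import pytest

def is_valid_email(value: str) -> bool:
    """Placeholder validator; the real check lives in the app under test."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

# Each (input, expectation) pair below runs and is reported as its own test.
@pytest.mark.parametrize("email, should_be_valid", [
    ("user@example.com", True),
    ("user@sub.example.com", True),
    ("", False),
    ("no-at-sign.com", False),
    ("spaces in@example.com", False),
])
def test_email_field_validation(email, should_be_valid):
    assert is_valid_email(email) == should_be_valid
```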

Unfortunately, autotests aren’t always cost-effective, especially for small projects. Moreover, not all kinds of tests can be automated, and for others, it simply doesn’t make sense to automate them. 

For example, a usability test cannot be automated. It’s also ineffective to automate tests that will run only once and tests that need to be run as soon as possible. 

Autotests should be written and maintained throughout the entire development process, just like any other code. This can make automated tests quite expensive. Ironically, automated tests can also contain bugs and crash. 

So before creating autotests, we carefully weigh the pros and cons to decide whether we need them for your project. 

If you have any questions about our testing or quality assurance processes, you can always drop us a line to get help.
