Step Six: Iterative Improvement & Proactive Assurance

By now, we’ve audited and documented the existing project, carried out a lightning release, and set up our technical environment, creating a more manageable project in the long term.

Now it is time to put the policies and tools in place to act on that long-term plan. This final stage is about responding as quickly as possible to future mistakes and putting yourself in a position to identify and prioritise current issues.

Crash reporting
Crash reporting tools such as Crashlytics are vital for mobile application management. This involves setting up the app to report errors and warnings from users’ devices directly to your analytics dashboard, or even to day-to-day communication tools such as Slack. Common errors can then be spotted and reported quickly, giving you real-time data from all of your users, helping you identify trends and gain complete oversight of your app’s performance. These reports come with diagnostics identifying the area of code involved in the error, giving the development team fantastic insight into weaknesses in the code base.
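As a rough sketch of the idea, in Python, with a hand-rolled in-memory queue standing in for the real Crashlytics (or similar) SDK: an uncaught-exception hook packages the error, stack trace and device context into a report.

```python
import json
import sys
import traceback

# Stand-in for the real reporting SDK's upload queue (illustrative only).
REPORT_QUEUE = []

def report_crash(exc_type, exc_value, exc_tb, context):
    """Package an exception with device context for the analytics dashboard."""
    report = {
        "error": exc_type.__name__,
        "message": str(exc_value),
        # The stack trace is what pinpoints the area of code involved.
        "stack": traceback.format_exception(exc_type, exc_value, exc_tb),
        "context": context,  # e.g. app version, OS version, device model
    }
    REPORT_QUEUE.append(json.dumps(report))

def install_crash_hook(context):
    """Route every uncaught exception through the reporter."""
    def hook(exc_type, exc_value, exc_tb):
        report_crash(exc_type, exc_value, exc_tb, context)
    sys.excepthook = hook
```

In a shipping app the SDK handles delivery, batching and symbolication; the point here is only the shape of the data each crash carries.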

User feedback and reviews
Incorporating a system to gather customer feedback and process it properly is a key step in improving any mobile application. Comments and feedback can give invaluable insights that you would otherwise miss, and you can even append key information to users’ emails or tickets that they may not be able to provide themselves, such as the OS version, app version, network status, device model and platform.
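That enrichment step can be as simple as attaching a metadata dictionary to each ticket before it reaches your support tooling; the field names below are illustrative, not a fixed schema:

```python
def enrich_ticket(ticket: dict, device_info: dict) -> dict:
    """Attach device context the user usually can't supply themselves."""
    enriched = dict(ticket)  # copy so the original ticket is not mutated
    enriched["meta"] = {
        key: device_info.get(key)  # missing fields simply come through as None
        for key in ("os_version", "app_version", "network_status",
                    "device_model", "platform")
    }
    return enriched
```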
Prompting reviews from users who give positive feedback is also great for your ASO efforts, as higher-rated apps rank higher for their relevant search terms.
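The prompting logic itself can be a simple gate. A hypothetical sketch, routing only users who rated the app positively in-app towards the store review dialog:

```python
def should_prompt_store_review(in_app_rating: int, already_prompted: bool) -> bool:
    """Route only demonstrably happy users to the store review prompt.

    The 4-star threshold is an assumption, not a fixed rule; tune it to
    your own in-app feedback scale.
    """
    return in_app_rating >= 4 and not already_prompted
```

Both app stores also rate-limit their native review prompts, so tracking whether a user has already been asked matters in practice.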

Phased rollout
When it comes to releasing any new build, it is important to consider the time frame and location of each release to minimise the risk of a bug going live. Rolling out releases over several days gives you a chance to monitor the analytics, user reports and crash reports for that release on a subset of users before you go global.
The key metrics to consider monitoring post-release are as follows:

  • Retention rates
  • Average Session Length
  • Average Revenue per Daily Active User (ARPDAU)
  • Key event numbers per user

If these four metrics are holding steady following the start of a rollout, it is a fairly safe bet that there are no major issues with your build and you can continue to roll out. Also, bear in mind that all of these metrics can be viewed split by version number, so you can compare your new version with the old one and spot any downturns.
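Both halves of this, gradually widening the audience and watching for downturns, can be sketched in a few lines. The hashing scheme and the 5% tolerance are assumptions for illustration, not a fixed recipe; store staged rollouts normally handle the bucketing for you:

```python
import hashlib

def in_rollout(user_id: str, release: str, percentage: int) -> bool:
    """Deterministically place a user in the first `percentage` of a rollout.

    Hashing user_id with the release name keeps each user's assignment
    stable as the percentage is widened day by day.
    """
    digest = hashlib.sha256(f"{release}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percentage

def holding_steady(baseline: dict, candidate: dict, tolerance: float = 0.05) -> bool:
    """True if no key metric (retention, ARPDAU, etc.) has dropped more
    than `tolerance` against the previous version's numbers."""
    return all(candidate[k] >= baseline[k] * (1 - tolerance) for k in baseline)
```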

Multi-variant testing
Multi-variant testing (or A/B testing) is nothing new, but it is often thought of only in the conversion rate optimisation space: improving ad performance, iterating on designs and so on. However, with live applications, we would always advise adding new features into an A/B test to monitor the new feature’s performance against a control group on the same release. This gives two key benefits:
Firstly, it lets you know the exact impact the feature is having against a relevant control group, instead of comparing against people running older versions of the app.
Secondly, should something go wrong with a new feature, you can deactivate it completely for all users. This goes for any feature behind a test, meaning you can hit the brakes without going through an entire new release cycle.
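A hedged sketch of that kill-switch idea, with an in-memory dictionary standing in for whatever remote-config service the app actually uses (flag name and split are illustrative):

```python
import hashlib

# In a real app these values are fetched from a remote config service,
# so flipping `enabled` takes effect without shipping a new release.
REMOTE_FLAGS = {"new_checkout": {"enabled": True, "test_percentage": 50}}

def feature_active(user_id: str, feature: str) -> bool:
    """A/B-gate a feature, with `enabled: False` as a global kill switch."""
    flag = REMOTE_FLAGS.get(feature)
    if not flag or not flag["enabled"]:
        return False  # kill switch: off for everyone, immediately
    # Deterministic split: the same user always lands in the same group.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["test_percentage"]
```

Hashing on feature name plus user ID keeps group membership stable for the life of the test, which is what makes the treatment/control comparison valid.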

Proactive Quality Assurance
Digital projects are complex. That can’t be avoided, which is why every modern IT project has a Quality Assurance team behind it, ensuring that new features work and that a release is given the green light before it goes live. With mobile applications, however, the work doesn’t stop there. Modern mobile apps run on a host of ever-changing devices, with regular software updates, and within a world of shifting legal and regulatory requirements.

Proactive QA involves testing every release of an app across a range of devices, covering the use cases of your end users. This includes testing on new devices, which may come with new hardware challenges; remember when the infamous notch was introduced on iPhones a few years back? How many applications had vital UI elements hidden behind it?

This also involves testing applications on the latest Operating System updates to ensure that an application isn’t affected by the shifting sands of software updates. Each new OS update should prompt a wave of testing across the suite of devices to ensure that the latest app version is unaffected and secure.
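One way to keep that wave of testing systematic is to generate the device/OS matrix rather than maintain it by hand. The device names below are purely illustrative; the compatibility rule is whatever your device lab actually supports:

```python
from itertools import product

def build_test_matrix(devices, os_versions, is_compatible):
    """Generate one test run per valid device/OS pairing, so a new OS
    release automatically fans out across the whole device suite."""
    return [(device, os) for device, os in product(devices, os_versions)
            if is_compatible(device, os)]
```

When a new OS version ships, adding it to `os_versions` regenerates the full set of runs against the latest app version.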

Finally, monitoring the regulatory and legal requirements of an application is a must. Many platforms are under pressure to ensure that all applications using their services are compliant. This results in a long list of compliance forms and questionnaires, often issued with strict deadlines attached. Fail to complete these on time, and you can expect your application to be pulled, causing major disruption for your users or complete app failure.


Much of the above should be viewed as crash mats underneath safety nets and harnesses. These measures are about securing your app in as many ways as possible against the negative impacts of new features, and about increasing visibility of the app’s performance from as many angles as possible. Finally, they are about providing qualitative data on the app’s performance to help you make more informed decisions about the future of the project, and where its strengths and weaknesses lie at any given moment.