Scaling Mobile App Performance: How We Cut Screen Load Time From 8s to 2s
We cut our mobile app’s screen load time from 8 seconds to 2 seconds by optimizing network requests, simplifying the UI, and improving image loading.
User experience is king in the crowded world of mobile app development, especially when it comes to speed. If your app takes too long to load, users will bypass it in favour of a speedier, more seamless alternative.
When we realized that our app was taking a frustrating 8 seconds to load some screens, it was a hard reality to accept. With fierce competition and increasingly impatient users, we knew something had to change. This article details our path from an 8-second screen load time to under 2 seconds, achieved through a multi-pronged attack on mobile optimization.
The Problem: Understanding the 8-Second Delay
Upon evaluating our app’s performance, we realized that the 8-second load time was a deal breaker. At first glance, the problem seemed simple: the app was rendering too slowly. As we dug deeper, however, we learned that this was not a single problem. It was caused by inefficient network requests, a complex UI structure, and unoptimized images. The slowdown was not confined to one part of the app; it was systemic, and a holistic solution was required.
Research suggests that users expect apps to load within 3 seconds at most, and that user retention drops with every extra second of delay. To keep users engaged and satisfied, our goal was to reduce the screen load time to under 2 seconds. Achieving that would not be a quick fix; it would require hunting down performance bottlenecks across the entire app.
Root Cause: Beyond Just Network Delays
At first, we figured the slow network was the main culprit. After all, many of the app’s functions depended on data from a remote server, and network speed could differ wildly from environment to environment. Nevertheless, after basic network optimization (caching frequently used data), we found that the 8-second delay did not change much.
It wasn’t only about the network — the real issue was how the app interacted with it. Upon closer inspection, we discovered that the app was making heavy GraphQL requests that spanned multiple services. These requests were executed synchronously on each screen load, delaying the rendering of the user interface until all data was retrieved. This bottleneck triggered a chain reaction, slowing down the entire user experience. To address this, we focused on optimizing GraphQL queries to include only the necessary data upfront, deferring non-critical data fetches to improve load times and responsiveness.
Optimizing the User Interface for Speed
Next, we turned to the app’s UI design. Highly complex or deeply nested mobile UIs can become performance bottlenecks in their own right, and our app was a perfect example. A UI built from deeply nested views resulted in slow rendering, particularly on lower-powered devices: the deeper the nesting, the longer it took to process and display the content.
We also found that some UI components were loaded by default even when they were not visible at screen load, which wasted resources. Taking a hard look at our UI, we restructured it to be simpler and to load components only as needed.
Reducing UI Complexity: Key Considerations
- Lazy loading: Instead of loading all components upfront, we loaded only the necessary elements first. Non-visible components were loaded in the background or as the user interacted with the screen.
- Asset prefetching for faster performance: We also implemented asset prefetching to ensure that critical resources like CSS files and JavaScript packages were fetched ahead of time. By preloading essential assets, we minimized delays during user interaction and ensured that all necessary files were ready when needed.
- View flattening: We reduced the depth of view hierarchies, avoiding deep nested layouts that were costly to render.
- Optimized animations: Complex animations were simplified or removed, cutting down the rendering time required for UI transitions.
By stripping back the UI complexity, we were able to shave off precious milliseconds from each screen load.
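The lazy-loading idea above can be sketched in a framework-agnostic way. The `LazySection` class and the section names here are illustrative, not our actual components: each section registers a loader, but the loader only runs on first access.

```typescript
// Minimal lazy-loading helper: defers an expensive loader until first use.
type Loader<T> = () => T;

class LazySection<T> {
  private value: T | undefined;
  private loaded = false;

  constructor(private loader: Loader<T>) {}

  get isLoaded(): boolean {
    return this.loaded;
  }

  // Run the (potentially expensive) loader only when the section is needed.
  get(): T {
    if (!this.loaded) {
      this.value = this.loader();
      this.loaded = true;
    }
    return this.value as T;
  }
}

// On screen load, only sections inside the initial viewport are materialized;
// everything else waits for a scroll or interaction to call .get().
const header = new LazySection(() => "header widgets");
const comments = new LazySection(() => "comment thread");

header.get(); // visible immediately, so load now; comments stay deferred
```

In a real app the loader would inflate a view or fetch a component bundle, but the control flow is the same: nothing off-screen pays its cost at load time.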
Image Optimization: Smart Loading for Faster Performance
Large image files were one of the biggest contributors to our app’s slow load times. Images are central to the user experience, but they are also among the heaviest elements to load on mobile devices. The images in our app, most of them high resolution or even uncompressed, looked great but slowed the app down significantly.
The Image Optimization Strategy
The first step was switching to more efficient image formats such as WebP, which offers smaller file sizes with little to no loss of visual quality. But optimizing images wasn’t only about compressing them; it was also about knowing when and how to load them.
To that end, we applied lazy loading to images as well. Rather than loading every image on the screen at once, we first loaded only the images inside the visible viewport. This significantly reduced the time a user had to wait before seeing the first content on the screen. The remaining images were loaded in the background or as the user scrolled, without disrupting the experience.
We also rescaled images so that they were served at resolutions appropriate to each device’s screen, which kept the app from wasting bandwidth downloading unnecessarily high-resolution images on smaller devices.
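The resolution-matching logic can be sketched as follows. The rendition widths and the CDN query parameters are assumptions for illustration, not a specific service's API: the idea is to pick the smallest available rendition that still covers the display size times the device pixel ratio.

```typescript
// Assumed set of renditions pre-generated on the image CDN (illustrative).
const AVAILABLE_WIDTHS = [320, 640, 960, 1280, 1920];

// Pick the smallest rendition that covers cssWidth × devicePixelRatio.
export function pickImageWidth(cssWidth: number, dpr: number): number {
  const needed = Math.ceil(cssWidth * dpr);
  // First rendition wide enough; fall back to the largest if none is.
  return AVAILABLE_WIDTHS.find((w) => w >= needed)
    ?? AVAILABLE_WIDTHS[AVAILABLE_WIDTHS.length - 1];
}

// Hypothetical CDN URL shape; real services expose similar ?w=/format= knobs.
export function imageUrl(base: string, cssWidth: number, dpr: number): string {
  return `${base}?w=${pickImageWidth(cssWidth, dpr)}&format=webp`;
}
```

A 300-px-wide slot on a 2x screen, for example, resolves to the 640-px rendition rather than the full 1920-px original.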
Continuous Monitoring: A Vital Step to Ongoing Improvement
After implementing the changes, we didn’t stop there. Performance optimization isn’t a one-time task — it’s an ongoing process. To ensure that our improvements were effective and sustainable, we implemented continuous performance monitoring tools.
Profiling Tools for Continuous Performance Tracking
Firebase Performance Monitoring, Android Profiler, and Xcode Instruments proved invaluable. These tools let us capture real-time performance data such as network response times, CPU usage, and memory consumption, which was essential for spotting new bottlenecks introduced as we added features.
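Alongside off-the-shelf profilers, even a tiny homegrown trace helper makes the idea concrete. This sketch is not the Firebase Performance API, just the shape of the technique: wrap an operation, record its duration, and keep rolling stats so regressions surface as soon as a new feature ships.

```typescript
// Rolling per-trace statistics: sample count, total time, and worst case.
type Stats = { count: number; totalMs: number; maxMs: number };

const traces = new Map<string, Stats>();

// Wrap an async operation and record its duration, even if it throws.
export async function traced<T>(name: string, op: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await op();
  } finally {
    const ms = Date.now() - start;
    const s = traces.get(name) ?? { count: 0, totalMs: 0, maxMs: 0 };
    traces.set(name, {
      count: s.count + 1,
      totalMs: s.totalMs + ms,
      maxMs: Math.max(s.maxMs, ms),
    });
  }
}

export function averageMs(name: string): number {
  const s = traces.get(name);
  return s ? s.totalMs / s.count : 0;
}

export function sampleCount(name: string): number {
  return traces.get(name)?.count ?? 0;
}
```

In practice, a call like `traced("product_screen_load", fetchAndRender)` in each screen's entry point gives a per-screen dashboard with almost no ceremony; the collected stats can then be shipped to whatever monitoring backend is in use.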
In addition, we drew on feedback from beta testers to confirm that our changes translated into improvements users could actually feel. Continuously ingesting performance data and analyzing it in real time let us keep improving the app experience and catch new performance problems before users noticed them.
The Results: A 75% Improvement in Load Time
Our efforts culminated in a remarkable result: we reduced the initial screen load time by 75%, from 8 seconds to just 2. This improved the overall user experience and retention rates, and drew very positive feedback. The app was snappier, more responsive, and ultimately more enjoyable to use. The changes we made were substantial, but they were also scalable.
Conclusion
Mobile app performance is never a solved problem. Adding new features, making new API calls, or loading new images can all end up slowing down your app. As developers, we should always be measuring and optimizing.
The point was never just to make the app faster; it was to improve the experience for our users. Our journey took us from 8-second load times to 2 seconds, and that is a goal every app developer should aim for.
Performance optimization in the end isn’t a one-time fix. It’s about commitment to constant iteration, testing, and fine-tuning. Seconds count, and with the right approach, those seconds can make all the difference.