Cache of more than the latest Android image

Description of the feature request

Currently, you cache the latest Android SDK image so that we don’t have to pull the image on every build; a pull more than doubles the build time (and occasionally even causes build timeouts). So my feature request is for you to cache more than one Android SDK image, especially across versions with significant breaking changes (e.g. NDK r15 -> r16).

Use case / for what or how I would use it

In my project I’m using OpenCV (a widely used native computer vision library), which is currently built by its dev team with NDK r10 and worked fine up to NDK r15 (the previous Bitrise image). However, it doesn’t work with NDK r16 (issue here), causing my builds to fail. The workaround was to select your previous image in the dashboard settings, which makes every build take 20+ minutes due to the image pull.
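One possible workaround (a sketch only — the step layout is an assumption, and the download URL follows Google's usual NDK repository pattern) is to stay on the latest stack but install the specific NDK you need in a Script step at the start of the build, instead of pulling the whole previous image:

```yaml
# bitrise.yml fragment (sketch): install NDK r15c at build time
# rather than switching the whole stack to the previous image.
workflows:
  primary:
    steps:
    - script:
        title: Install NDK r15c
        inputs:
        - content: |
            #!/bin/bash
            set -ex
            NDK_DIR="$HOME/android-ndk-r15c"
            if [ ! -d "$NDK_DIR" ]; then
              curl -sSL -o /tmp/ndk.zip \
                "https://dl.google.com/android/repository/android-ndk-r15c-linux-x86_64.zip"
              unzip -q /tmp/ndk.zip -d "$HOME"
            fi
            # expose the NDK location to later steps via envman
            envman add --key ANDROID_NDK_HOME --value "$NDK_DIR"
```

The download adds a few minutes, but typically far less than a 20+ minute full image pull.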

Hey @onfido-mobile-ci,

Thanks for the #feature-request! :rocket:

We’re already thinking about this, but unfortunately it’s not straightforward because the NDKs need a ton of space. In practice this would mean bumping up the image sizes we use (which are already at 100G each), which translates to more storage space we have to utilize.

We’ll think about it and again, thanks for the request!

Gabor from Bitrise


I’d personally prefer a solution with a new Stack option, similar to the existing “LTS” one, e.g. call it “previous NDK”.

That way we could keep that stack up to date as well, instead of just “freezing” it. That stack would be updated the exact same way as the “latest NDK” one, built on the same Android & Base docker images / layers, it would simply have an older NDK preinstalled.


Or at least build both the “latest” and “previous” NDK images/layers on top of the latest Base & Android images, so that if both are cached on the VM, the Base & Android layers are not duplicated but cached only once.


Same here, although we use an even older NDK (13b).

I tried to push it into the cache via cache:push, but that did not like receiving 2+ GB of data (and errored out, maybe for the best).
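For reference, the attempt above would look roughly like the following fragment (a sketch, not an endorsed setup — the r13b directory path is an assumption based on the NDK version mentioned). The Cache:Push step takes a `cache_paths` input listing the directories to archive, and a multi-GB NDK directory can run into cache size limits, as described above:

```yaml
# bitrise.yml fragment (sketch): add the NDK directory to the build cache.
# NOTE: pushing several GB may exceed cache limits and fail.
- cache-push:
    inputs:
    - cache_paths: |
        $HOME/android-ndk-r13b
```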

Thanks for the comment & details @jasperroel - please don’t forget to vote on this #feature-request to increase its priority :wink: