As shown here, I have inserted those steps in my workflow, but the CocoaPods spec repo still gets cloned every time. About 12.5 minutes of the 15-minute build is spent cloning that repo.
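To be concrete, the ordering I mean is roughly the following; the step versions and cache paths here are a sketch of the idea, not copied from my actual bitrise.yml:

    workflows:
      primary:
        steps:
        - cache-pull@2: {}
        - cocoapods-install@2: {}
        # ... archive / test / deploy steps ...
        - cache-push@2:
            inputs:
            # Cache the Pods folder (keyed on Podfile.lock) and the local spec repos,
            # so the CocoaPods master spec repo does not need to be re-cloned.
            - cache_paths: |-
                ./Pods -> ./Podfile.lock
                $HOME/.cocoapods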
This might be a clue. This is from the log…
Downloading remote cache archive
Failed to get cache download url: build cache not found: probably cache not initialised yet (first cache push initialises the cache), nothing to worry about
WARN[04:02:42] Step (cache-pull) failed, but was marked as skippable
Hi! That error happens when the Cache Push step hasn’t run yet for a particular branch / workflow combination. Once the Cache Push step completes successfully, the cached data will be used on subsequent builds.
I looked in your account and you have no cached data stored. This tells me that it’s either been recently cleared or the cache push step hasn’t run on any of your builds.
Thanks @matthew.jones. Ok, that makes sense. So the reason cache push isn’t happening is that the iOS Auto Provision step is failing (you responded to my question about that too). It’s time-consuming to debug because 12.5 minutes of the 15-minute build is consumed by the spec repo clone.
In the meantime, could I temporarily put the Cache Push step right after the CocoaPods Install step, with a run_if: false on all the other steps?
Then I could debug that workflow a bit faster. Lemme know what you think of that.
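In bitrise.yml terms, that would look roughly like this; the step ID and version on the disabled step are placeholders, not the exact steps from this workflow:

    - cache-pull@2: {}
    - cocoapods-install@2: {}
    - cache-push@2: {}
    # every later step stays disabled while debugging, e.g.:
    - ios-auto-provision@1:
        run_if: "false"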
It didn’t work. I have an error-free cache pull, followed by an error-free CocoaPods install (10.4 minutes that time), followed by an error-free cache push. Still, subsequent builds re-clone the spec repo.
Dang. I have indeed been working on the wrong app! I apologize for the churn on this and thank you for your efforts. I’m going to mark this as resolved. If it ends up that there is an issue with this app for the spec repo I’ll open a new ticket (and hopefully have all the facts straight).
I have exactly the same issue! CocoaPods (we use the official Bitrise step) clones the repo each time, and I noticed (using a simple script step in the workflow) that the folder ~/.cocoapods/repos is empty before the CocoaPods step runs. In the past the CocoaPods step took only 30 seconds thanks to caching, but now the clone pushes it to 7 minutes. We use the Xcode 12.3 stack from Bitrise. I’m also sure I have the project for the correct app, since we only have one. Did you take the pre-installed specs repo out of the VM image, or do I have an issue in my Podfile?
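The check was a script step along these lines (an illustrative sketch, not the exact step from our workflow):

    - script@1:
        title: Inspect CocoaPods spec repos before pod install
        inputs:
        - content: |-
            #!/usr/bin/env bash
            set -ex
            # An empty folder here means the CocoaPods step has to clone the master specs repo.
            ls -la "$HOME/.cocoapods/repos" || echo "~/.cocoapods/repos does not exist"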
CocoaPods environment on the CI machine:
Stack
CocoaPods : 1.10.0
Ruby : ruby 2.6.5p114 (2019-10-01 revision 67812) [x86_64-darwin19]
RubyGems : 3.0.3
Host : Mac OS X 10.15.7 (19H2)
Xcode : 12.3 (12C33)
Git : git version 2.29.2
Ruby lib dir : /Users/vagrant/.rbenv/versions/2.6.5/lib
Repositories : cocoapods - git - https://github.com/CocoaPods/Specs.git @ 532b5f19667df182823a8e940eca697f5e50637e
source 'https://github.com/CocoaPods/Specs.git'

platform :ios, '12.0'
use_frameworks!

# ignore all warnings from all pods
inhibit_all_warnings!

# There are no targets called "App" in any Xcode projects
abstract_target 'App' do
  pod 'IQKeyboardManagerSwift'
  pod 'Moya'
  pod 'SwiftKeychainWrapper'
  pod 'Polyline'
  pod 'SwiftEntryKit'

  target 'App1' do end
  target 'App2' do end
  target 'Speisewagen_App1' do end
  target 'Speisewagen_App2' do end

  # There are no targets called "UnitTests" in any Xcode projects
  abstract_target 'Tests' do
    pod 'SwiftyJSON'
    pod 'Moya'

    target 'App1Tests' do end
    target 'App2Tests' do end
  end
end

post_install do |options|
  options.pods_project.build_configurations.each do |config|
    config.build_settings['CLANG_ANALYZER_LOCALIZABILITY_NONLOCALIZED'] = 'YES'
  end

  options.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      swift_blacklist = []
      product_name = config.build_settings['PRODUCT_NAME']
      if config.name == "Release"
        config.build_settings['SWIFT_OPTIMIZATION_LEVEL'] = '-Owholemodule'
      else
        config.build_settings['SWIFT_OPTIMIZATION_LEVEL'] = '-Onone'
      end
      # https://www.jessesquires.com/blog/2020/07/20/xcode-12-drops-support-for-ios-8-fix-for-cocoapods/
      config.build_settings.delete 'IPHONEOS_DEPLOYMENT_TARGET'
    end
  end
end
This is an issue we’re investigating. I don’t have a lot of additional information to share at the moment. There are a few other customers who are affected as well.