When you say "but the error still persists", I guess you mean the pending task in the admin UI, right? Just to make sure you're not seeing an error anywhere else.
That task means using persistence to store the BuildKit cache instead of keeping it in the pod. If persistence is not enabled, any restart of the BuildKit pods means the cache is lost. On the other hand, if you enable the persistence flag, the cache is stored in a PVC (persistent volume claim), so if a BuildKit pod restarts, you don't lose the cache.
With the configuration you are mentioning, persistence should be enabled and the pending task should be marked as completed. To troubleshoot the issue, could you check the PVCs in your okteto namespace? You can do that with kubectl get pvc, looking for a PVC called storage-<helm-release-name>-buildkit-xxx, or directly with kubectl get pvc | grep storage.
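For reference, the check could look like this (a sketch: the namespace `okteto` is an assumption, substitute the namespace where your Okteto instance is actually installed):

```shell
# List all PVCs in the namespace where Okteto is installed
# (replace "okteto" with your actual namespace if it differs)
kubectl get pvc --namespace okteto

# Or filter directly for the BuildKit cache volume;
# the expected name is storage-<helm-release-name>-buildkit-xxx
kubectl get pvc --namespace okteto | grep storage
```

A healthy result shows the storage PVC in the `Bound` state.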
If the PVC doesn't exist, it would indicate that the setting you are specifying is not being picked up correctly. If the PVC exists, it means that BuildKit is actually using persistence but there is an error reporting it in the UI.
Could you verify it when you have a chance and let us know?
Ok, yeah, setting the key buildkit.persistence.enabled to true in the config.yaml and upgrading the Helm chart should create the PVC and attach it to BuildKit.
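As a sketch, that change and upgrade could look like the following. The `buildkit.persistence.enabled` key is from the discussion above; the release name, namespace, and chart reference are placeholders and depend on how your instance was installed:

```shell
# Add the persistence flag to your existing config.yaml
# (or set it by hand under the buildkit: section)
cat >> config.yaml <<'EOF'
buildkit:
  persistence:
    enabled: true
EOF

# Apply the change by upgrading the Helm release.
# <release-name> and the chart reference are placeholders for your install.
helm upgrade <release-name> okteto/okteto -f config.yaml --namespace okteto
```

After the upgrade finishes, the storage PVC described earlier should appear in the namespace.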
You can check the PVC as mentioned before to see if it works as expected. If the PVC is correctly created but you still see the pending task in the UI, could you try restarting the API pods (kubectl rollout restart deployment <helm-release-name>-okteto-api)?
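For completeness, the restart and a check that it finished could look like this (again a sketch; the deployment name follows the pattern mentioned above and the namespace is a placeholder):

```shell
# Restart the API pods so they re-evaluate the pending task
kubectl rollout restart deployment <helm-release-name>-okteto-api --namespace okteto

# Wait for the new pods to become ready before re-checking the UI
kubectl rollout status deployment <helm-release-name>-okteto-api --namespace okteto
```

Once the rollout completes, the pending task in the admin UI should update if the PVC is in place.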