We’re seeing this pending task in our admin UI:
“Improve build performance by persisting the Buildkit cache layers.”
I’ve looked through the linked documentation and have tried setting it to this:
but the error still persists. The BuildKit layers do seem to be caching a bit better, though, with the new okteto.global config…
Hi @benjoldersma,
When you say

> but the error still persists

I guess you mean the pending task in the admin UI, right? Just to make sure you are not seeing an error anywhere else.
That task means using persistence to store the BuildKit cache instead of keeping it inside the pod. So, if persistence is not enabled, any restart of the BuildKit pods means the cache is lost. On the other hand, if you enable the persistence flag, the cache is stored in a PVC (persistent volume claim), so if any BuildKit pod restarts, you don’t lose the cache.
With the configuration you are mentioning, persistence should be enabled and the pending task should be marked as completed. In order to troubleshoot the issue, could you check the PVCs in your okteto namespace? You can do that with `kubectl get pvc`, looking for a PVC called `storage-<helm-release-name>-buildkit-xxx`, or you can filter directly with `kubectl get pvc | grep storage`.
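Putting that together, here is a quick sketch of the check (I’m assuming Okteto is installed in a namespace called `okteto`; adjust the namespace to your setup):

```
# List the PVCs in the namespace where the Okteto chart is installed
kubectl get pvc --namespace okteto

# Or filter directly for the BuildKit cache volume
kubectl get pvc --namespace okteto | grep storage
```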
If the PVC doesn’t exist, it would indicate that the setting you are specifying is not being picked up correctly. If the PVC exists, it means that BuildKit is actually using persistence but there is an error reporting it in the UI.
Could you verify it when you have a chance and let us know?
correct - just in the UI.
no PVCs reported from kubectl when grepping for storage.
I’m setting the buildkit settings in my config.yaml and then upgrading the helm chart - is that correct?
Ok, yeah, setting the persistence key to `true` in the config.yaml and upgrading the helm chart should create the PVC and attach it to BuildKit.
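For reference, a minimal sketch of what that section of the config.yaml could look like. The exact key path here is an assumption on my side, so double-check it against the documentation linked from the admin UI:

```
# Sketch only: enable persistent storage for the BuildKit cache.
# Verify the exact key path against the Okteto self-hosted docs.
buildkit:
  persistence:
    enabled: true
```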
You can check the PVC as mentioned before to see if it works as expected. If the PVC is correctly created but you still have the pending task in the UI, could you try rolling out the api pods with `kubectl rollout restart deployment <helm-release-name>-okteto-api`?
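You can also wait for the rollout to finish before checking the UI again:

```
# Restart the api pods and wait for the rollout to complete
# (replace <helm-release-name> with your actual release name)
kubectl rollout restart deployment <helm-release-name>-okteto-api
kubectl rollout status deployment <helm-release-name>-okteto-api
```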
Let us know if you have any issues with it!
oops - I was forgetting to pass in the `-f config.yml` on my helm upgrade cmd. that explains it. btw - this video helped me figure out my mistake:
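for anyone else who lands here, a sketch of the full upgrade command with the values file passed in (the release and chart names are guesses on my part; use whatever your installation uses):

```
# Upgrade the release, making sure the values file is actually applied
helm upgrade okteto okteto/okteto -f config.yml --namespace okteto
```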
oh, yeah! I forgot to share it with you, sorry. Glad to hear it’s working now!