Sometimes we want to debug issues related to running a deployment at large scale (more than 30 replicas), and attaching to one of the replicas can be really useful.
Is it possible to prevent Okteto from scaling the original deployment to 0?
It also seems like Okteto is listening for modification events there and overrides my change every time I try to scale the deployment back up…
@pchico83 In the demo we had this week, you mentioned that it is possible with a configuration in the Okteto manifest. Can you please shed some light on that?
When you execute okteto up, we always scale the original deployment to 0. One thing you can do is use the services key in the manifest (see Okteto Manifest | Okteto Documentation).
All the services that you specify in that section will work just like the development container (code is synchronized, you can specify the command to run, etc.), with two exceptions: they won't be able to start an interactive session, and their number of replicas is not modified.
There is one thing to bear in mind: you always need to define a main dev container. In your case, you would define one with autocreate: true in addition to the services section. The autocreate flag makes Okteto create the dev container instead of replacing an existing deployment. A minimal sketch is shown below.
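Here's a minimal sketch of what that could look like; the deployment name api, the image, and the command are placeholders, not values from your cluster:

```yaml
dev:
  dev:                     # main dev container, required by okteto up
    autocreate: true       # create it instead of replacing an existing deployment
    image: okteto/golang:1 # placeholder image
    command: bash
    services:
      - name: api          # placeholder: your existing deployment; its replica count is kept
```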
May I ask what the use case is for attaching to just one running pod? Maybe adding some tools to that specific container for debugging or similar? I'm trying to understand the use case so I can guide you to the best possible solution.
The use case is that we are running automations that stream data into the cluster, and to handle that we need multiple replicas of the deployment.
We want to enable debugging on one of the replicas, but to keep the automation working, we need the rest to stay running.
So you are suggesting we create a separate deployment, not related to the original one, and debug that?
If that's the case, how would we automate these tasks:
- Image and tag: right now we are using the deployed image to debug, so we would also somehow need to get the image of the existing deployment and apply it to the new one (see the sketch after this list).
- Labels: since the deployment Okteto creates is a copy of the original, the labels are copied as well. How would we achieve that in the new flow?
If you can share a working proof of concept, that would be great.
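To illustrate the first point, this is roughly what we would have to script today; the deployment name and namespace are placeholders:

```bash
# Read the image (and tag) of the first container of an existing deployment;
# "my-deployment" and "my-namespace" are placeholders for our real names.
IMAGE=$(kubectl get deployment my-deployment -n my-namespace \
  -o jsonpath='{.spec.template.spec.containers[0].image}')
echo "$IMAGE"
```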
> So you are suggesting we create a separate deployment, not related to the original one, and debug that?
Not exactly; the separate deployment would need to be created because of the way okteto up works.
The solution I was proposing would modify the original deployment and would affect all the replicas, so it probably doesn't fit what you want.
Do you need to have the code in that pod synchronized or just the debugger?
The dev section contains a single dev container definition called dev (I know, I'm bad at choosing names). As you can see, it has the autocreate flag set to true. Within it, there is a section called services: a list specifying the other services to put in dev mode. For each service, you need to specify the name of the service to be replaced, and you can optionally specify the image to use (if you don't, it will take the original one) and a command to be executed when the container runs. In my example it directly executes the main.go file, but you could start a debugger, for example.
The dev container itself probably won't be used in your case, but you need to define it because a main dev container is required.
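Putting that together, the manifest would look something like this; the service name api and the paths are placeholders for your own values:

```yaml
dev:
  dev:                     # main dev container; won't really be used in your case
    autocreate: true       # created by okteto up instead of replacing a deployment
    image: okteto/golang:1 # placeholder image
    command: bash
    sync:
      - .:/usr/src/app     # placeholder path; synced to every replica
    services:
      - name: api          # placeholder: the deployment to put in dev mode
        # image omitted: the original deployment's image is used
        command: go run main.go  # runs on every replica; could start a debugger instead
```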
You need to bear these things in mind:
- Even though the services key keeps the original number of replicas, we do modify the original deployment: we scale it to zero and create a new deployment with the original number of replicas, so there will be a short window in which your service might not respond to requests.
- The command you specify for each service in services will be executed on each replica of the deployment, so if you run a debug command, all the pods will be running in debug mode.
- The folders specified in the sync section will be synchronized to each replica.
- We currently have a bug in this scenario in which the okteto down command doesn't restore the original number of replicas. We are working on a fix for it; until then you can restore the count manually, as shown below.
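As a stopgap for that last point, you can restore the replica count yourself after okteto down; the deployment name and count are placeholders for your values:

```bash
# Restore the original replica count after `okteto down`;
# "my-deployment" and "30" are placeholders.
kubectl scale deployment my-deployment --replicas=30
```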
Given the first two points, I'm not sure it's exactly what you are looking for, but it is the way to achieve something similar with Okteto.