Hello. My application is written in PHP and uses two separate deployments: a `www-web` deployment, running an nginx container to serve static files, and a `www-php` deployment, running php-fpm to serve dynamic content. In my existing Okteto manifest, I have the `www-web` and `www-php` services defined as two separate entries in the `dev` section:
```yaml
# ...
dev:
  www-web:
    command: [ "/docker-entrypoint.sh", "nginx", "-g", "daemon off;" ]
    sync:
      - "src/extranet/web/src/wwwroot:/app/wwwroot"
      - "src/php-web-library:/app/php-web-library"
  www-php:
    command: [ "php-fpm" ]
    reverse:
      - 9003:9003
    sync:
      - "src/extranet/web/src/wwwroot:/app/wwwroot"
      - "src/php-web-library:/app/php-web-library"
    volumes:
      - /root/.composer/cache
# ...
```
When I want to debug the application, I open two terminals and run `okteto up www-web` and `okteto up www-php` in parallel. This works fine.
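Concretely, the current workflow is just:

```
# terminal 1
okteto up www-web

# terminal 2
okteto up www-php
```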
While browsing the documentation for the Okteto manifest, I just came across the `services` sub-entry, which allows running additional services alongside the one being developed. This seems perfect for my use case, since I need to sync my local files to both pods, but I only need to establish tunnels to the `www-php` pod.
I’ve tested rewriting my manifest to make use of this feature:
```yaml
# ...
dev:
  www-php:
    command: [ "php-fpm" ]
    reverse:
      - 9003:9003
    sync:
      - "src/extranet/web/src/wwwroot:/app/wwwroot"
      - "src/php-web-library:/app/php-web-library"
    volumes:
      - /root/.composer/cache
    services:
      - name: www-web
        sync:
          - "src/extranet/web/src/wwwroot:/app/wwwroot"
          - "src/php-web-library:/app/php-web-library"
# ...
```
When I run `okteto up www-php`, it seems to succeed. However, the `www-web-okteto` pod remains in the `Pending` state and is never scheduled. When I run `kubectl describe pod` on it, the Events section contains the following:
```
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  5m47s (x3 over 5m51s)  default-scheduler  0/6 nodes are available: 1 Too many pods, 1 node(s) didn't match pod affinity rules, 4 node(s) didn't find available persistent volumes to bind. preemption: 0/6 nodes are available: 1 No preemption victims found for incoming pod, 5 Preemption is not helpful for scheduling..
  Warning  FailedScheduling  11s (x5 over 5m42s)    default-scheduler  0/6 nodes are available: 1 Too many pods, 2 node(s) didn't match pod affinity rules, 3 node(s) had volume node affinity conflict. preemption: 0/6 nodes are available: 1 No preemption victims found for incoming pod, 5 Preemption is not helpful for scheduling..
```
Examining the PVC for the development volume, I notice that its access mode is set to `ReadWriteOnce`, yet it's used by both pods:
```
Name:          www-okteto
Namespace:     dmccorma
StorageClass:  gp3
Status:        Bound
Volume:        pvc-4dc2a66b-b73a-4b57-a1b1-c962e02f191b
Labels:        dev.okteto.com=true
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
               volume.kubernetes.io/selected-node: ip-10-60-2-184.ec2.internal
               volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       www-php-okteto-59466d4f6-l4tf6
               www-web-okteto-57bb757b5b-xffx9
```
My theory is that the second pod is unable to attach the `ReadWriteOnce` volume, and so it can't be scheduled. Is the expectation that the development volume should be configured with a StorageClass that supports `ReadWriteMany`? Or am I barking up the wrong tree?
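If an RWX-capable class is indeed the expectation, I imagine the change would look roughly like the sketch below. This is untested: `efs-rwx` is just a placeholder name for a StorageClass that supports `ReadWriteMany` (e.g. one backed by the EFS CSI driver), and I'm assuming the `persistentVolume` section of the manifest is the right place to set it:

```yaml
dev:
  www-php:
    # Hypothetical: point the Okteto development volume at an RWX-capable
    # StorageClass so both the www-php and www-web pods can mount it.
    persistentVolume:
      enabled: true
      storageClass: efs-rwx   # placeholder for a ReadWriteMany-capable class
      size: 5Gi
    # ... command, reverse, sync, volumes, and services as above ...
```

Thanks.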