Question on okteto and syncthing relaying

Our security team has raised concerns about the potential use of relaying during file synchronization with Okteto. Syncthing relaying is disabled by default in Okteto, which we verified with the following command:

okteto status --file <okteto_file> --info

Despite this, we have observed instances where Syncthing appears to use a relay. Here’s a log event that shows this:

INFO: Joined relay relay://192.99.59.139:443" process=syncthing

Could someone clarify if this behavior is expected? Additionally, is there a way to configure Syncthing via Okteto to ensure relaying is never used?

Thanks,
Julio

Hi @jplasencia,

Thanks for reaching out to us about your concerns regarding Syncthing relaying. We’ve reviewed our records and would like to clarify a few points.

Firstly, which version of Okteto were you running when you observed those logs? This information will help us better understand the context.

Regarding Syncthing relaying, it’s important to note that relayed traffic is end-to-end encrypted, meaning only the device IDs, client IPs, and bandwidth usage are visible to the relay operator. According to Syncthing’s relaying documentation (https://docs.syncthing.net/users/relaying.html), connections between peers are always direct; a relay only kicks in when a peer is unreachable or the direct connection is unstable.

In the case of Okteto, we establish a direct connection with Syncthing using a Kubernetes port-forward tunnel. This connection goes from your machine to the pod via the kube-apiserver and the kubelet on the node, and it’s encrypted in transit by Kubernetes TLS certificates on each leg. As such, there’s no need for relaying. We’ve checked our records and don’t have any instances of Syncthing managed by Okteto using relaying.
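
If you’d like to inspect that tunnel yourself, it is roughly equivalent to a plain port-forward like the one below (the pod name and namespace are placeholders, not values from your environment; 22000 is Syncthing’s default sync protocol port):

# forward the dev pod's Syncthing sync port to localhost
kubectl port-forward pod/<dev-pod-name> --namespace <namespace> 22000:22000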

Lastly, the IP address you mentioned (192.99.59.139) is listed in Syncthing’s global relay directory, which tracks the community-operated relay instances: https://relays.syncthing.net/.

We will escalate your request to guarantee relaying is never used and, with our product team’s input, consider implementing this for a future release.

Thanks for your understanding, and please let us know if you have any further questions or concerns!

Best regards,
-Javier.

Hi @provecho,

First, thank you for the quick and thorough response. On my machine, I have Okteto version 2.25.4. As shown earlier, relaying does seem to be disabled by default, but in spite of this, there seem to be cases where Syncthing still attempts to use a relay.

I went back to the logs to see if there was more information, and I found that this attempt comes right after the message:

INFO: Failed to exchange Hello messages with *********************** at 127.0.0.1

This leads me to believe that when there’s a failed connection attempt using the K8s port-forward tunnel, Syncthing attempts to use a relay.

While I understand the role of a relay in Syncthing, and that only limited information is sent to the relay operator, having our pods attempt connections to external IPs, often in other countries, triggers security alerts in our environment. I would prefer to permanently prevent the use of relaying and instead explore other options to address any connection issues.

Regards,
Julio

Hi @jplasencia,

Thanks for the detailed information and additional context.

A few extra questions to help clarify things:

  • Are there any firewall rules that disallow ingress traffic to the ports bound by the Okteto CLI? I assume not, given you were able to access the Syncthing web UI earlier.

  • Can you perform standard kubectl port-forward commands against your Kubernetes cluster?

  • Are there any error logs in the dev pod that we should be looking at?

Thanks for providing the Okteto version. We’ll definitely investigate this and see if it’s related to a change in Syncthing defaults. We understand your concern. I’ll follow up on this directly with our engineering team.

Best regards,
-Javier.

Hi @provecho,

To answer your questions:

  • There are no firewall rules as far as I’m aware.
  • Yes, I am able to perform a port forward using kubectl port-forward. That said, I should mention that in our environment our Kubernetes authentication token expires after 4 hours. Could it be that Syncthing still tries to connect after the token expires and then switches to a relay when the connection fails? This is what we would like to prevent.
  • Here are some of the error messages I can see from Syncthing. There are a lot of log events, so it’s difficult to pinpoint specific ones unless I can search for keywords. Here’s what I found:
time="2024-05-15T16:45:16Z" level=info msg="2024/05/15 16:45:16 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details." process=syncthing
time="2024-05-15T16:46:35Z" level=error msg="process exited with error status -1" error="signal: broken pipe" process=syncthing
time="2024-05-15T16:46:35Z" level=info msg="[ATOPH] 2024/05/15 16:46:35 INFO: \"1\" (okteto-1): Failed to sync 229 items" process=syncthing
time="2024-05-15T16:51:41Z" level=info msg="[HZF6X] 2024/05/15 16:51:41 INFO: Failed to exchange Hello messages with **** at 127.0.0.1:22000-127.0.

Thanks again for your help!

Julio

Hi @jplasencia,

Thanks for the additional information.

We’ve confirmed with our engineering team that the relaying and global announcement features are explicitly disabled in the Syncthing configuration applied by Okteto. You can verify this by checking the pkg/k8s/secrets/configXML.go file in our open-source repository at https://github.com/okteto/okteto.
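
For illustration, disabling those two features in a Syncthing config.xml boils down to options along these lines (a simplified fragment for reference, not a verbatim copy of the template in configXML.go):

<options>
    <globalAnnounceEnabled>false</globalAnnounceEnabled>
    <relaysEnabled>false</relaysEnabled>
</options>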

As for the log lines you shared, engineering will investigate what could trigger that specific line from Syncthing when managed by Okteto. I’ll keep you updated on any findings and post them here.

Thanks again for your patience and cooperation!

Hi @jplasencia,

If possible, would you be willing to generate an Okteto Doctor file after the relaying log line is printed? This will help us better understand your environment and troubleshoot the issue more effectively.

Please note that we kindly ask you to generate the Okteto Doctor file from a minimal example repository that does not contain confidential source code, environment variables, or other sensitive content.

You can find instructions on how to create the Doctor file in the Okteto CLI documentation.
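
In short, running the following command from the directory containing your Okteto manifest should generate the archive:

okteto doctor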

This file should be uploaded to our support site at https://support.okteto.com for further analysis.

Thanks again!

Hi @provecho,

I generated the files using the okteto doctor command. Upon reviewing the files, I don’t see any log entries from the day of the event, and I see no entries containing the word “relay”, so I am not certain this will be helpful. I do have Syncthing log entries from the day of the event elsewhere, so I could send these after scrubbing any identifying information from the logs. Let me know if this would be helpful.

Thanks,
Julio

Hi @provecho,

Something else I noticed in the logs is the message below:

time="2024-05-15T16:46:38Z" level=info msg="[HZF6X] 2024/05/15 16:46:38 INFO: Using discovery mechanism: global discovery server https://discovery-v6.syncthing.net/v2/?nolookup&id=AAAA-BBBB-CCCCC" process=syncthing

This message seems to indicate that global discovery is also enabled for Syncthing, even though it shows as disabled in the configuration UI.

Is it possible that in some cases Syncthing is running with default settings instead of the settings Okteto configures in configXML.go? It would be helpful if Okteto could log the settings it’s using for Syncthing when starting the Syncthing service on my machine and in the pod. This may help in troubleshooting the issue.

Thanks,
Julio

Hi @jplasencia,

In relation to your previous message, yes: please send us any Syncthing and Okteto logs you have, along with context on where those logs come from and how you obtained them.

Please also include the commands you ran to set up your Okteto dev environment, along with any command-line flags you used. This will help us understand your setup better.

As for the Global Announce log, we have a few questions:

  • Do you have Syncthing installed and active locally standalone from Okteto?
  • Please include the device ID in the message sent to Okteto support. Don’t strip it; it will help us determine whether the logs refer to a Syncthing installation managed by Okteto or not.
  • How do you use Okteto? Is it standalone or in tandem with Okteto Self Hosted, the development platform?
  • Can you clarify where those logs originate from? Are they from the local instance or the remote instance in the pod?

Looking forward to hearing back from you!

Best regards,
-Javier.

Hi @provecho,

I will need to spend some time cleaning up the log files before I can send them. I will try my best not to remove any values that would be useful to you. When the files are ready, is there a way to send them directly to you?

In the meantime and to answer your other questions:

  1. No, I do not use Syncthing, nor have I installed Syncthing independently of Okteto. Based on the events we analyzed during our investigation and the logs from our firewall, it was the pod created by Okteto that initiated a connection to the Syncthing relay. We have nothing to indicate that my machine attempted to connect to the relay. The log event I included in a previous message came from the pod (we use a tool that captures all logs generated by our EKS pods). Here’s the log event I am referring to:
time="2024-05-15T16:46:38Z" level=info msg="[HZF6X] 2024/05/15 16:46:38 INFO: Using discovery mechanism: global discovery server https://discovery-v6.syncthing.net/v2/?nolookup&id=AAAA-BBBB-CCCCC" process=syncthing
  2. I have Okteto installed locally. I believe I installed it as a VS Code extension, so I typically press Ctrl+Shift+P and select okteto up to bring up my environment.

Thanks,
Julio

Hi @jplasencia,

I appreciate your continued cooperation and efforts in supporting our investigation.

We require files to be uploaded to our support portal for traceability reasons.

I’ll be watching for your ticket to arrive, and once it does, I’ll take it on directly.

Your note that the log line was written by the remote Syncthing peer is particularly useful information and will aid our investigation.

Best regards,
-Javier.

Hi @provecho,

I did not realize you had provided a link to create a support ticket and send the files; my apologies. I have now sent the files you requested via a ticket and included a reference to this thread. I hope this helps. As mentioned earlier, the logs generated by ‘okteto doctor’ do not mention anything about the use of a relay, nor do they include any entries from 5/15 (the date of the relay event).

Thanks,
Julio


Hi Julio,

Thank you for your cooperation and for providing logs about your setup. Our engineering team has analyzed them and found that the relay-related log line from the Okteto-managed Syncthing instance was produced under specific circumstances.

In the file you uploaded, which corresponds to the logs of the Syncthing instance living in the remote pod, we found a trace showing that Syncthing was deleting its own configuration:

level=info msg="[ATOPH] 2024/05/15 16:46:30 VERBOSE: Finished syncing \"okteto-1\" / \"syncthing/config.xml.v32\" (delete file): Success" process=syncthing

This could be caused by a change in the dev.sync section of the Okteto manifest, where folders are specified to be synced bi-directionally between the local machine and the pod, for example:

sync:
  - ./example:/var # /var refers to the remote pod filesystem and contains /var/syncthing

Around this log, there was significant sync activity, including many delete operations, likely related to a VS Code server instance running inside the remote pod. Running VS Code inside a dev pod is supported, but having its installation paths synced can cause unpredictable behavior due to the high number of files involved (e.g., node_modules).

After that activity, we spotted log lines pointing to both a remote Syncthing process crash and a local client disconnection.

time="2024-05-15T16:46:35Z" level=error msg="process exited with error status -1" error="signal: broken pipe" process=syncthing
time="2024-05-15T16:46:35Z" level=info msg="[ATOPH] 2024/05/15 16:46:35 INFO: \"1\" (okteto-1): Failed to sync 229 items" process=syncthing

The Okteto CLI ensures Syncthing is reconciled with the correct configuration: if the instance is not healthy, it is restored to a correct configuration, but only while the CLI is connected or when it starts a new session. Without the CLI connected, the software inside the pod is limited to restarting the process if it fails.

time="2024-05-15T16:46:36Z" level=info msg="killing process syncthing" process=syncthing
time="2024-05-15T16:46:36Z" level=info msg="process syncthing killed" process=syncthing

This restart, combined with the deleted configuration, produced the log lines we found:

time="2024-05-15T16:46:36Z" level=info msg="[start] 2024/05/15 16:46:36 INFO: Default folder created and/or linked to new config" process=syncthing
time="2024-05-15T16:46:36Z" level=info msg="[start] 2024/05/15 16:46:36 INFO: Default config saved. Edit /var/syncthing/config.xml to taste (with Syncthing stopped) or use the GUI" process=syncthing

We found that Syncthing started with the stock default configuration, which has relaying enabled but is incompatible with your local setup. This means Okteto will attempt a reconciliation as soon as a new session is started. No synced data will flow in or out of the pod until the new dynamically generated device ID is exchanged with the other Syncthing peer, and vice versa.

Given this analysis, our engineering team won’t take any immediate action, considering that the dev pod configuration was corrupted by incompatible settings in the Okteto manifest. The corruption is reverted as soon as a new dev session is spun up.

The product team will evaluate adding guards to the Okteto manifest parser to warn users of incompatible paths being used in the dev.sync section.

Thank you again for reporting this concern to Okteto.

Best regards,
-Javier.

Hi @provecho,

Thank you for the investigation. Here are some notes:

It appears that Syncthing is restarting and reverting to the default configuration following a failure. Do you have any idea why this would only happen in some cases? Looking through the logs, I can see several messages like the following:

Remote change detected in folder \"okteto-1\": deleted file syncthing/config.xml.v32" process=syncthing

Is there a sync configuration change or a command I can include in the Okteto manifest to ensure this config file is never deleted?

Indeed, it would be beneficial for Okteto to introduce safeguards to prevent such issues. In the meantime, while this is being evaluated, I am interested in exploring any configuration on our side that could help avoid this going forward.

Thanks,
Julio

Hi @jplasencia,

I’d like to provide some guidance on optimizing your Okteto Manifest.

Regarding the deletion logs: when specifying destinations in the dev.sync section, please refrain from including the /var/syncthing folder or any of its parent directories. Additionally, avoid symlinks that point into these paths.
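
For example, a safer mapping targets a dedicated application path (the destination below is illustrative, not a recommendation for your specific layout):

sync:
  - ./example:/usr/src/app # does not overlap with /var/syncthing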

Regarding the restart logs: it’s recommended to steer clear of watching directories with an extremely large number of files (e.g., node_modules). To exclude such hot paths, create a .stignore file. Installing VS Code Server inside a dev pod and syncing its installation directory will significantly increase the number of files being watched.
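
As a sketch, a minimal .stignore at the root of the synced folder could look like this (the patterns are illustrative; adjust them to your project):

// exclude hot paths with very high file counts
node_modules
.vscode-server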

Best regards,
-Javier.