Pangolin performance via Newt #512
Replies: 28 comments 68 replies
-
Is nobody else experiencing this problem?
-
I'm having the same issue. Maybe the problem is with Traefik (or Gerbil?), since the issue persists even locally. Some people have fixed it by switching VPS providers. Temporarily, I solved it by enabling the Cloudflare proxy, which tripled my upload speed.
-
I’m also experiencing a severe performance issue with my Pangolin/Newt/Gerbil setup, and I can’t understand why. It works flawlessly for serving low-bandwidth workloads (websites); it's great, and I love it. My setup is as follows:
What I’ve tried to identify the root cause:
• Suspected CPU bottleneck (WireGuard overhead): I initially suspected that Newt and/or Gerbil/Traefik were CPU-bound due to WireGuard overhead.
• Suspected bandwidth bottleneck: I ran tests, and everything seems to have a decent internet connection.
• Suspected network throughput between components: I deployed tests between them, and everything seems to have proper link speed internally despite the abstraction layers (VMs, Docker, VPS).
Given the above, it doesn’t appear to be a CPU, RAM, or bandwidth issue, which leaves either the Newt/Gerbil/Traefik codebase or their configuration. Let me know if you need more details or logs; I’d be happy to help debug this further.
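The per-hop measurements described above can be reproduced with iperf3. A minimal sketch, assuming iperf3 is installed on both ends; the hostnames and addresses below are placeholders for your own:

```shell
# On the VPS: run an iperf3 server.
iperf3 -s

# From the home server: measure the raw WAN path (no tunnel).
iperf3 -c vps.example.com -t 30

# From the home server: measure through the tunnel by targeting the
# WireGuard-side address of the VPS, then compare the two results.
iperf3 -c <WG_ADDRESS_OF_VPS> -t 30
```

If the raw path saturates the line but the tunnel path does not, the bottleneck sits in the tunnel stack rather than the network.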
-
Btw, I would like to share my workaround until we have a fix. It's simple: basically, let every service/resource have its own Newt tunnel/site.
-
I've been pulling my hair out over this until I found this discussion. I had no problems at all for several months, but only with low-bandwidth services. Recently I added my photo library to the setup, and then realized I'm not able to stream any videos reliably.

The behavior I see is consistently the following: with larger files, the initial transfer speed is high for a few seconds (meaning a few megabytes), and then it rapidly drops to unacceptable levels (around 10 kB/s!). After quite a long time (1-2 minutes) it just as rapidly starts to pick up speed again and then keeps saturating my line until the transfer is complete (several MB/s). If I interrupt the transfer and restart, the same behavior starts all over again. During the time it's only dripping, the load on the VPS is non-existent. Once it picks up speed again I see Traefik at ~25% CPU.

I did extensive testing to check bandwidth and speeds all along the setup, and I'm able to saturate my public line to and from the VPS (Strato Germany) no matter what (raw transfer through SCP, with or without Docker, etc.). I'm also able to max out the local GBit network when I access the services only locally. It doesn't sound like the exact symptoms you describe, but it's similar in that I cannot find any issues at any stage of the setup, yet when I add Newt to the equation it reproducibly shows this behavior.
-
I migrated my Pangolin instance from the default SQLite database (pangolin:1.12.2) to a PostgreSQL backend (pangolin:postgresql-1.12.2). It was definitely worth trying, and the performance boost for the UI and static pages is appreciated.
-
This discussion seems to be one of the most active ones, yet I haven’t seen this issue being addressed. I used to really like Pangolin, but it’s time for me to move on from the project. Thank you very much for everything so far. I wish the project and everyone involved all the best, and as they say, paths always cross again.
-
Unfortunately, the issue still persists in the latest Newt version (IPv6 only).
-
After struggling with inconsistent throughput over Newt, I switched to a Basic WireGuard Site. Setup was pretty painless and immediately fixed the “bursty stall/buffering” behavior for long-lived streams.
Problem: Persistent connections would stall in bursts over Newt tunnels (CPU wasn’t pegged). Shorter/lower-bitrate flows were OK, which strongly pointed to PMTU/fragmentation (TCP bursts + long stalls).
Fix: Use a Basic WireGuard Site and route traffic over kernel WireGuard. Key points:
Solution:
1) Create a Basic WireGuard Site in Pangolin.
Note: the server-side WG interface may live inside the Gerbil container:
docker exec -it gerbil ip -br addr   # should show wg0 with e.g. 100.89.x.x/24
2) Point the resource upstream to the WG peer IP (NOT the LAN IP).
In Pangolin, set the resource Target/Upstream to the WG peer's address.
3) On the WG peer gateway: DNAT only what you need to {TARGET_IP}:{PORT}.
Put the following in /etc/wireguard/wg0.conf on the WG peer (replace {TARGET_IP}, {PORT}, and your LAN interface name if it’s not eth0):
[Interface]
Address = <WG_PEER_IP_CIDR>
PrivateKey = ...
MTU = 1280
# Forward <PORT> from WG -> target
PostUp = sysctl -w net.ipv4.ip_forward=1
PostUp = iptables -t nat -A PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostUp = iptables -t nat -A POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostUp = iptables -A FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostUp = iptables -A FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT
# MSS clamp (prevents PMTU blackholes / burst-stall behavior)
PostUp = iptables -t mangle -A FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostUp = iptables -t mangle -A FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t nat -D PREROUTING -i %i -p tcp --dport <PORT> -j DNAT --to-destination <TARGET_IP>:<PORT>
PostDown = iptables -t nat -D POSTROUTING -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -o eth0 -p tcp -d <TARGET_IP> --dport <PORT> -j ACCEPT
PostDown = iptables -D FORWARD -i eth0 -o %i -p tcp -s <TARGET_IP> --sport <PORT> -j ACCEPT
PostDown = iptables -t mangle -D FORWARD -i %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
PostDown = iptables -t mangle -D FORWARD -o %i -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
4) Persist across reboots:
systemctl enable --now wg-quick@wg0
Quick sanity checks
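The MTU=1280 plus MSS-clamp choice above can be sanity-checked with a little arithmetic. A sketch assuming standard header sizes (40-byte outer IPv6 + 8-byte UDP + 32 bytes of WireGuard framing, and 40 bytes of inner IPv4+TCP headers):

```shell
# Why MSS clamping matters on a WireGuard path: a frame that fits the
# physical link no longer fits once encapsulation overhead is added, and
# oversized packets with DF set get dropped silently (a PMTU blackhole).
LINK_MTU=1500
WG_OVERHEAD=80                              # 40 (IPv6) + 8 (UDP) + 32 (WireGuard)
MAX_TUNNEL_MTU=$((LINK_MTU - WG_OVERHEAD))  # 1420, the usual wg-quick default
TUNNEL_MTU=1280                             # the conservative value in wg0.conf above
MSS=$((TUNNEL_MTU - 40))                    # 40 = inner IPv4 (20) + TCP (20)
echo "largest safe tunnel MTU: $MAX_TUNNEL_MTU"
echo "clamped MSS for MTU 1280: $MSS"
```

With the clamp rules in place, TCP peers never emit segments larger than the tunnel can carry, which removes exactly the burst-stall failure mode described above.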
-
Same issue here.
-
Yeah, I’ve been running into the same issue with large file streams. I also tried replacing Newt with a plain WireGuard connection, but unfortunately the speeds still dropped quite a bit. For now, I’ve had to switch to a different tool for the affected applications. Really hoping this gets improved at some point. Everything else works great, so it would be amazing to see this resolved.
-
Same issue here. I understand there's likely no easy fix, but it seems like a pretty critical issue.
-
I've encountered the same issue, using a 2 vCPU VPS with Pangolin installed and running on it, and Newt as a service on my Win11P box. I have a nice 'elegant' solution that I can point to (plex.mydomain.com, overseer.mydomain.com, etc.), but Plex streaming is limited to 720p/4 Mbps through the Newt tunnel... I've had to switch to Tailscale, which isn't quite as elegant, as it requires something installed on my Plex users' machines. I too would really like to see this resolved, as Pangolin is a super-neat solution otherwise.
-
Same here. Streaming 4K over Newt is impossible; it works OK over standard WireGuard.
-
On a RackNerd 2 vCPU VPS, streaming 4K was working, but it took a good minute to start. I switched to standard WireGuard and it's almost instant.
-
Same issue for me with Newt: 120-150 Mbps (6-30 MB/s). iperf between the local VM and my VPS (without the tunnel): 6 Gbps. [EDIT] With WireGuard instead of Newt: 600+ Mbps. It's 5-6 times faster.
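Since this thread mixes Mbps and MB/s, a quick conversion helper (1 MB/s = 8 Mbps) makes the numbers comparable; the figures below are the ones quoted in this comment:

```shell
# Convert megabits per second to megabytes per second (divide by 8).
mbps_to_MBs() { awk -v m="$1" 'BEGIN { printf "%.1f", m / 8 }'; }

echo "120 Mbps = $(mbps_to_MBs 120) MB/s"   # prints 120 Mbps = 15.0 MB/s
echo "600 Mbps = $(mbps_to_MBs 600) MB/s"   # prints 600 Mbps = 75.0 MB/s
```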
-
This issue is painful, and it really has me needing an alternate system despite Pangolin serving me so well. What I find puzzling is why this started happening sometime in recent months; I recall my network transfer speed being almost as fast as local in the past.
-
I get usable 15 MB/s speeds from my Windows host using Newt via Docker. I'd like it to be more, but compared to the others here I can't complain. My VPS offers 500 Mbit/s speeds, and iperf3 to the VPS really provides that speed. When downloading large files from Nextcloud, all involved containers (Nextcloud, Authentik, Nginx, and ultimately Newt) consume a lot of CPU power on the host, with almost all 16 cores used. If I reduce the CPU max speed to 90% (via Power Settings), the download speeds drop to 13 MB/s; at max speed I get 15 MB/s. Upgrading the VPS from 1 to 2 cores did absolutely nothing, however.
-
Hey all, just an update: on Newt I was getting 32 Mbps; I switched to WireGuard and am slightly below 1000 Mbps now.
-
It's been almost a year and we still have this issue. :))
-
After struggling with this issue for several weeks, I found the solution for my configuration. The main problem comes mainly from Docker, so don't expect any more speed with it. You MUST run Newt AND Gerbil in pass-through mode (host mode, not bridge mode) to allow Gerbil and Newt to create a true WG virtual interface on your servers. Without this, you can't exceed a speed of ~300-400 MB/s. In addition, for 4K streamers, in my opinion, you MUST have a dedicated WireGuard site on your router/server (and only for this use). It's a painful setup for some, but it's worth it. For example, I now reach 750-850 MB/s on a connection with a real upload capacity of 950 MB/s.
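A minimal sketch of the host-networking change described above, assuming the standard fosrl/newt image and the usual Newt environment variables (the endpoint, ID, and secret values are placeholders that come from your Pangolin UI; in compose, the equivalent is network_mode: host):

```shell
# Run Newt with host networking (pass-through) instead of the default bridge,
# so it works against the host's network stack directly.
docker run -d --name newt --network host \
  -e PANGOLIN_ENDPOINT=https://pangolin.example.com \
  -e NEWT_ID=<your-newt-id> \
  -e NEWT_SECRET=<your-newt-secret> \
  fosrl/newt
```

The same flag applies to the Gerbil container on the server side.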
-
I'd love to see a straightforward set of instructions for running Pangolin on a VPS and setting up a WG client at the Windows 11 end... I can't be the only one with this setup (currently with Newt, and struggling with the speed for streaming media). I'm using Tailscale now, which does work, but I'd like to make use of the VPS I'm paying for. Everyone says setting up a WG tunnel is the answer, but I can't find anywhere that instructs exactly how to do it, as if I'm an idiot with no experience of WG. The guides either have Windows at both ends or a Linux distro where my Win 11 box is... Please?
-
Not sure if anyone else has noticed a difference, but since I updated Newt to v1.10.0 and Pangolin (self-hosted on a VPS) to v.15.4, things have markedly improved, and I'm now able to stream directly from my media server at 1080p/12 Mbps (previously I couldn't achieve anything above 720p/4 Mbps, as many above have also mentioned). Perhaps someone else can join the dots as to which change is having the desired effect, but I installed Newt using the newt_windows_installer.exe as opposed to the previous version I had, which was just Newt.exe (always installed/set up as a service on Win 11 Pro). This time around, I noticed the installer also installed wintun.dll along with Newt.exe. Now, I do remember some mention of perhaps needing wintun.dll in the Pangolin/Newt docs, but as Newt 'appeared' to be working OK, I didn't think any more about it at the time. Maybe it's the new version of Newt, or maybe it's the inclusion of wintun.dll in the Newt directory this time around, but so far it's working really well for me now and is just what I wanted it to be. Thanks 🙏🏻
-
Hey all, I wanted to jump in as one of the newest members of the Pangolin team to keep communication clear on this issue. I know this has been open for a while, but a lack of active replies does not mean this discussion has been deprioritized. @oschwartz10612 has an active task to investigate and improve performance, and I've also opened follow-up tasks to improve Go-side performance over time. The performance work merged into dev so far includes:

In Newt, we currently use two WireGuard netstack implementations: netstack and netstack2. The main tunnel to Gerbil is still using netstack, while child/downstream tunnels use netstack2, which contains the newer performance improvements. A full switch is not just a simple toggle. Even though the implementations are compatible, we still need proper test coverage before promoting this broadly, to avoid inconsistencies or regressions.
-
@LaurenceJJones Just to clarify, this did not make it into 1.10.2 or 1.10.3, right?
-
I tried a lot of things, including Newt inside Docker, Newt outside Docker, a direct connection (i.e. DNS to an open port without Pangolin), and ultimately Pangolin + WireGuard Basic connecting to a UniFi router. What's pretty clear is that VPS performance and host performance play a minor role. When using Immich, I was getting top speeds with a direct connection and with WireGuard Basic. It goes up to 25 MB/s and beyond, but it still starts pretty slow, i.e. 600 KB/s-3 MB/s. That's apparently a TCP limitation, and after about 20 seconds or so it should go through the roof. WireGuard Basic gives me much better speeds than Newt overall, so I'm happy it works and that I can keep using Pangolin that way. Interestingly, downloading a large file via Nextcloud works at 15 MB/s using both Newt and WG Basic, but WG has fewer fluctuations. Here the difference is somehow minor compared to Immich. I suspect Nextcloud overhead, but who knows...
-
Food for thought: speed while using Pangolin in some situations may have little to do with Pangolin or Newt. Some CGNATs don't play well with UDP, especially when it comes to video streaming, and WireGuard always runs over UDP. To test whether UDP behind a CGNAT is impacting speed, try accessing a resource behind an SSH reverse tunnel (which is always over TCP) and compare the speed to accessing the resource behind Pangolin. For example, Jellyfin over a reverse SSH tunnel:
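A hedged sketch of that comparison, assuming Jellyfin on its default port 8096 and a placeholder VPS hostname (binding to all interfaces on the VPS also requires GatewayPorts yes in its sshd_config):

```shell
# Expose local Jellyfin (TCP 8096) on the VPS via a reverse SSH tunnel,
# then compare streaming speed against the Pangolin/WireGuard (UDP) route.
ssh -N -R 0.0.0.0:8096:localhost:8096 user@vps.example.com
```

If streaming through the TCP tunnel is markedly faster, the CGNAT's UDP handling is a likely culprit.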
-
Hey everyone,
First of all, thanks for the great work on Pangolin – I really like how clean and straightforward it is to set up.
I’m currently testing Pangolin in a fairly performant setup:
• Home connection: 1 Gbit fiber
• VPS: 2.5 Gbit throughput
• Service behind Pangolin: Pingvin Share (self-hosted file sharing tool)
The problem I’m running into is related to throughput performance. When I upload a file to Pingvin through Pangolin, I only get about 4 MB/s (≈32 Mbit/s), and the speed occasionally drops even further. The same happens when I try to download the file back – it’s consistently low and doesn’t reflect the bandwidth I actually have on either side.
So now I’m wondering:
Where’s the bottleneck? Is it:
• A known limitation or default setting in Pangolin?
• Related to WireGuard performance under load?
• Possibly something in the configuration (like MTU size, buffer sizes, etc.)?
I’d really appreciate if someone could share similar experiences, tuning tips, or diagnostic steps I might try.
Thanks in advance!
Alex