What MTU negotiation delays are you referring to here?
There is no real negotiation with IPv4. You tell the other side the MTU you can handle, they do likewise, off you go. Whether on 1500 or 1200 it's taken care of in the initial 2 packets of the TCP handshake. If on PPPoE your router should clamp MSS and send the appropriate value to the server.
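To make the handshake arithmetic concrete, here's a minimal sketch (the header sizes are standard, the rest is illustrative): each side's advertised MSS is just its MTU minus the IPv4 and TCP header sizes, and a clamping router simply rewrites that option in the SYN.

```python
# Illustration: the MSS each side advertises in its SYN is its MTU minus
# the IPv4 (20 byte) and TCP (20 byte) headers; the sender then uses the
# smaller of the two advertised values.
IP_HEADER = 20   # IPv4, no options
TCP_HEADER = 20  # no options

def mss_for_mtu(mtu: int) -> int:
    return mtu - IP_HEADER - TCP_HEADER

ethernet = mss_for_mtu(1500)  # plain Ethernet -> 1460
pppoe = mss_for_mtu(1492)     # PPPoE loses 8 bytes -> 1452

# An MSS-clamping router rewrites the option in the SYN so the server
# never tries to send segments the PPPoE link can't carry whole.
print(ethernet, pppoe)  # 1460 1452
```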
But that relies on the router doing the clamp.
IPv6 isn't supported by VM, so dual stack isn't an issue there. In any case, going through PMTU discovery can be avoided either by router MSS clamping or by including the MTU option in IPv6 router advertisements: that's what the ability to advertise an MTU is there for.
UDP, the one without any handshake on MTU, is now mostly QUIC traffic: HTTP/3. Clients take a pretty conservative view on PMTU to avoid fragmentation, and the 8 bytes for PPP are irrelevant; they don't try 1500.
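The "conservative view" is baked into the protocol: RFC 9000 requires QUIC Initial packets to fit in 1200-byte UDP payloads, which fit any path honouring even the IPv6 minimum MTU, so the PPP overhead never comes into play. A quick sanity check of that arithmetic:

```python
# QUIC (RFC 9000) requires Initial packets to fit in a 1200-byte UDP payload.
# Check that this survives even the worst-case (IPv6 minimum) MTU.
IPV6_MIN_MTU = 1280
IPV6_HEADER = 40
UDP_HEADER = 8

max_payload = IPV6_MIN_MTU - IPV6_HEADER - UDP_HEADER  # 1232 bytes of room
assert max_payload >= 1200  # so 1200-byte datagrams never fragment
print(max_payload)  # 1232
```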
I have always seen faster browsing from FTTP due to the lower latency and better jitter. Across one DNS request and one HTTP request it might not make that much difference; across dozens of requests, dozens of responses and dozens of serialised requests it adds up.
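A rough back-of-envelope of that "it adds up" claim, with assumed RTT figures (12 ms vs 5 ms are placeholders, not measurements): serialised requests each pay one full round trip, so the per-request saving multiplies.

```python
# Hypothetical figures: per-request latency saving is small, but each
# serialised request pays it once per round trip.
fttc_rtt_ms = 12.0        # assumed FTTC round-trip time
fttp_rtt_ms = 5.0         # assumed FTTP round-trip time
serialised_requests = 30  # DNS + TLS + dependent fetches on a typical page

saving_ms = serialised_requests * (fttc_rtt_ms - fttp_rtt_ms)
print(saving_ms)  # 210.0 ms shaved off the page load
```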
I'm not sure about the throughput stability or packet loss under load either. Throughput stability should be near impeccable and packet loss near zero in most use cases, as those rely on higher-level protocols to manage throughput and avoid loss. ISPs may briefly buffer, and they will shape towards the customer, but they won't play with the transport layer.
I've not seen any issues in a VM area that was newly built at the time, nor on 4 different full-fibre and 3 different FTTC ISPs over Openreach. Or an altnet, for that matter.
Well, MSS negotiation is probably the more accurate description, but I said MTU as that's what more people are familiar with.
I have on occasion seen delays (when not using baby jumbo frames) over PPPoE whilst the MSS is negotiated. However, part of my issue there may be due to how pfSense handles MSS clamping: if scrub is disabled it doesn't do an MSS clamp at all. I disable scrub because it has some bugs which are probably never going to get fixed.
I did document my issues with throughput stability on providers whose network design has reduced bufferbloat built in. On all the VDSL providers I tried, I could get packet loss on my connection once above about 50% utilisation, easily repeated using iperf. Granted, this might not be designed into FTTP connectivity (something I have yet to experience).
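For anyone wanting to repeat that kind of test, a sketch of how you'd pull the loss figure out of an iperf3 run programmatically. The JSON shape used here matches what `iperf3 -u -b <rate> -J` reports for UDP runs (`end.sum.lost_percent`); the sample below is synthetic, not a real capture.

```python
import json

# Sketch: extract the UDP loss percentage from `iperf3 -u -J` output so a
# script can sweep bandwidth targets and find where loss starts.
def loss_percent(iperf_json: str) -> float:
    report = json.loads(iperf_json)
    return report["end"]["sum"]["lost_percent"]

# Synthetic sample in the shape iperf3 emits for a UDP client run.
sample = '{"end": {"sum": {"packets": 10000, "lost_packets": 150, "lost_percent": 1.5}}}'
print(loss_percent(sample))  # 1.5
```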
On VM (as well as mobile network), I need to push close to 100% for the same packet loss to be visible. So I get much better throughput reliability.
There have been other reports on the net from other FTTC customers too, all ending without resolution. The problem there is the number of variables: BT Wholesale require ISPs to rate limit; Openreach possibly do some kind of policing on the link from cabinet to exchange (I'd assume this plays no part unless the link is saturated); the DSL modem in use; and any policing the ISP might be doing on top of what is already asked of them. All of which made it very difficult to diagnose.
But what I know is this: all the symptoms I had went away when testing with either 4G or VM cable broadband. To me this meant whatever was causing the aggressive dropping of packets was outside my internal network, although the modem remained a possibility (I did also test with a different VDSL modem with a different chipset).
If you are interested, some of the symptoms were things like:
Downloading on Steam caused massive packet loss. Shaping the downstream internally helped, but the loss was so extreme I had to shape really aggressively; loss would only be near 0% once Steam was using less than 40% of line capacity. So quite extreme. Shaping via the Steam client's limiter had similar results. The problem with both methods is that the download averages out slower but is spiky getting there, so any of those spikes could still trigger packet loss. Capping the RWIN, by contrast, gives a smooth reduction in bandwidth and allowed much higher utilisation levels, but that's difficult to do on a modern Windows OS. My final configuration was to shape the ACKs, restricting the upstream during Steam downloads, which kept the congestion window down much better than randomly dropping inbound packets. Without any of this, things like watching a stream with a Steam download in the background were impossible.
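Why an RWIN cap smooths things out, in one line of arithmetic: the sender can never have more than one receive window in flight, so a flow tops out at window / RTT with no bursting past it. The window size and RTT below are assumed example figures.

```python
# An RWIN cap bounds throughput at window / RTT: the sender can never have
# more than one window of unacknowledged data in flight.
def max_throughput_mbit(rwin_bytes: int, rtt_ms: float) -> float:
    return rwin_bytes * 8 / (rtt_ms / 1000) / 1e6

# Assumed figures: a 256 KiB window on a 20 ms path caps the flow at ~105 Mbit,
# smoothly, with no spikes for a policer to punish.
print(max_throughput_mbit(256 * 1024, 20.0))  # ~104.86
```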
Even single-threaded downloads would cause less severe, but still bad enough, packet loss on my FTTC connection. So they had to be throttled as well if I wanted to multitask on the connection.
When I reduced the AAISP rate limiter (they allow you to tweak it but not disable it), I observed it made the symptoms appear earlier, at lower utilisation. The feature is ironically advertised as allowing things like VoIP to work smoothly whilst downloading, but for me it just caused everything to drop packets when throughput approached the limit. It could be set as high as 110% or so, which in theory would disable it, but traffic can of course still spike to those levels. The 110% setting gave me the best results.
But as if by magic, on cable broadband these issues have disappeared. Now of course I do have a 1 Gbit download (1.1 on the modem side), which obviously has a big impact as it's much harder to saturate. But looking at it as a percentage, it doesn't start randomly dropping packets at as low as 40% utilisation, and all my internal network equipment is the same as before (at least it was for several months; I have since upgraded my main switch/AP). The 4G testing had a little under 150 Mbit capacity and showed a huge quality improvement over the FTTC testing: it could hit 96-97% on iperf before dropping packets, whilst FTTC was struggling to get to 40%.
It was when I started seeing this that I made the decision to move off the FTTC platform. I'd rather fix it by changing ISP than fight a battle I had no solution to.
In the cases I found on the net where a resolution was posted, it was basically moving off the FTTC platform, the same as I have done. All of these examples were Openreach UK; not a single report came from FTTC in another country. This feeds my theory that it is some kind of traffic-policing effect.
What I haven't done is test the performance of the AAISP rate limiter on the L2TP connection (with iperf) I have set up over VM (400 Mbit). I'm pretty sure that feature is still there on L2TP, so it would be interesting to see whether it shows the same symptoms I had on FTTC.
Now maybe you are saying FTTP doesn't have the same inherent problems as FTTC, but you also said you had no such issues on FTTC either. Don't VM allow much more buffering versus FTTC/FTTP ISPs? When I see a saturated VM TBB graph it usually shows a big jump in latency, but a saturated FTTC/FTTP graph might show only a small increase in latency while showing packet loss. So from what I can tell, VM prefer buffering to dropping packets, whilst the xDSL ISPs seem to prefer dropping packets. In my opinion a delayed packet is a lesser evil than a dropped packet.
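The latency jump on a saturated graph falls straight out of the buffer size: a full buffer converts to queuing delay at delay = buffered bytes / link rate. The figures below (1 MB of buffering, 100 Mbit link) are assumed for illustration, not measured from VM's network.

```python
# A full buffer converts directly to latency: delay = buffered bytes / link rate.
def queue_delay_ms(buffer_bytes: int, link_mbit: float) -> float:
    return buffer_bytes * 8 / (link_mbit * 1e6) * 1000

# Assumed figures: ~1 MB of buffering on a 100 Mbit link adds ~80 ms under
# saturation - the "big jump in latency, no loss" pattern on a TBB graph.
print(queue_delay_ms(1_000_000, 100.0))  # 80.0
```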
At the end of the day I can only post based on my experience. All of my theories as to the cause will remain theories, as I don't have the luxury of technical details from outside my network, nor of testing those elements independently.