
Openreach 900mbps vs Virgin Media 1gig

Setting aside the performance issues, it's worth bearing in mind that if you do have any problems VM can be, in my experience, a very difficult company to deal with; their customer service isn't good.
 
Funnily enough, my experience is the opposite on web browsing: because VM is a native 1500 MTU and isn't using an obsolete PPPoE protocol, it has faster web browsing due to no MTU negotiation delays. Granted, this can be mitigated for most destinations with baby jumbo frames.

But in regard to the OP, with latency being the prime priority, I would go FTTP.

VM (if in an area that isn't over-utilised) is typically better for throughput stability, with less risk of packet loss under load, but has less stable latency due to DOCSIS traits.
What MTU negotiation delays are you referring to here?

There is no real negotiation with IPv4. You tell the other side the MSS you can handle, they do likewise, and off you go. Whether on 1500 or 1200, it's taken care of in the initial two packets of the TCP handshake. If on PPPoE, your router should clamp MSS and send the appropriate value to the server.
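For anyone following along, the sums behind that are trivial (a sketch, assuming standard 20-byte IPv4 and TCP headers with no options):

```python
# The MSS a host advertises in the TCP handshake is simply the path
# MTU minus the IPv4 header (20 bytes) and TCP header (20 bytes).
IP_HEADER = 20
TCP_HEADER = 20

def mss_for_mtu(mtu: int) -> int:
    """MSS to advertise for a given path MTU (IPv4, no options)."""
    return mtu - IP_HEADER - TCP_HEADER

print(mss_for_mtu(1500))  # native Ethernet (VM): 1460
print(mss_for_mtu(1492))  # PPPoE with its 8-byte overhead: 1452
```

This is why a PPPoE router clamps MSS to 1452: the server then never sends a segment that won't fit down the 1492-byte tunnel.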


But that relies on the router doing the clamp.

IPv6 isn't supported by VM, so dual stack isn't an issue there, but again going through PMTU discovery can be avoided either by router MSS clamping or by the IPv6 router advertisement including the MTU: that's what the ability to advertise MTU is there for.

UDP, which has no handshake for MSS, is now mostly QUIC traffic (HTTP/3). Clients take a pretty conservative view of PMTU to avoid fragmentation, and the 8 bytes for PPP is irrelevant; they don't try 1500.
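To put a number on "conservative": QUIC's spec (RFC 9000) requires Initial packets to fit in 1200-byte UDP datagrams, which leaves plenty of headroom on either link type. A quick sanity check (IPv4 assumed):

```python
# QUIC Initial packets must fit in 1200-byte UDP datagrams (RFC 9000),
# well below the usable payload of either a 1500 or 1492 MTU path.
QUIC_INITIAL_DATAGRAM = 1200
IP_UDP_HEADERS = 20 + 8  # IPv4 + UDP

for link_mtu in (1500, 1492):  # native Ethernet vs PPPoE
    usable = link_mtu - IP_UDP_HEADERS
    print(link_mtu, usable, QUIC_INITIAL_DATAGRAM <= usable)
```

So the 8 bytes of PPP overhead genuinely don't matter to QUIC's first flight.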

I have always seen faster browsing on FTTP due to the lower latency and better jitter. Across one DNS request and one HTTP request it might not make much difference; across dozens of requests, dozens of responses and dozens of serialised round trips it adds up.
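The "it adds up" point is easy to sketch. The figures below are purely illustrative (12 dependent round trips, two plausible idle RTTs), not measurements from either network:

```python
# Serialised round trips multiply the RTT difference: DNS, TCP and TLS
# handshakes, redirects, then chains of dependent requests.
def serialised_delay_ms(rtt_ms: float, round_trips: int = 12) -> float:
    """Total time spent waiting across dependent round trips."""
    return rtt_ms * round_trips

print(serialised_delay_ms(5))   # low-latency FTTP-ish path: 60 ms
print(serialised_delay_ms(20))  # higher-latency path: 240 ms
```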

I'm not sure about the throughput stability or packet loss under load either. Both should be nearly impeccable in most use cases, as those rely on higher-level protocols to manage throughput and avoid loss. ISPs may briefly buffer and will shape towards the customer, but they won't play with the transport layer.

I've not seen any issues in a (then newly built) VM area, nor on four different full fibre and three different FTTC ISPs over Openreach. Or an altnet, for that matter.
 
What MTU negotiation delays are you referring to here?
Well, MSS negotiation is probably the more accurate description, but I said MTU as that's what more people are familiar with.

I have on occasion seen delays (when not using baby jumbo frames) over PPPoE whilst MSS is negotiated. However, part of my issue there may be due to how pfSense handles MSS clamping: if scrub is disabled it doesn't do an MSS clamp at all, and I disable scrub because it has some bugs which are probably never going to get fixed.
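For readers who haven't met them, baby jumbo frames (RFC 4638) sidestep the clamp entirely by raising the Ethernet MTU so PPPoE's overhead no longer eats into the IP payload. Roughly:

```python
# RFC 4638 "baby jumbo frames": run the Ethernet side at 1508 so that,
# after PPPoE's 8 bytes of overhead, the IP layer still gets 1500.
PPPOE_OVERHEAD = 8

def ip_mtu_over_pppoe(ethernet_mtu: int) -> int:
    return ethernet_mtu - PPPOE_OVERHEAD

print(ip_mtu_over_pppoe(1500))  # plain PPPoE: 1492, so MSS clamping is needed
print(ip_mtu_over_pppoe(1508))  # baby jumbos: a full 1500, no clamp required
```

Both ends of the PPPoE link (and the intervening Ethernet gear) have to support the larger frames for this to work.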

I did document my issues with throughput stability on providers who have reduced bufferbloat built into their network design; e.g. on all VDSL providers I could get packet loss on my connection once above about 50% utilisation, easily repeated using iperf. Granted, this might not be designed into FTTP connectivity (something I have yet to experience).

On VM (as well as on a mobile network), I need to push close to 100% for the same packet loss to be visible, so I get much better throughput reliability.

There have been other reports on the net from other FTTC customers too, all of which ended without resolution. The problem there is that BTW require ISPs to rate limit, Openreach possibly do some kind of policing on the link from cabinet to exchange (I would assume this plays no part unless the link is saturated), plus the DSL modem being used and any policing the ISP might be doing on top of what is already asked of them: lots of variables, which made it very difficult to diagnose.

But what I do know is this: all the symptoms I had went away when testing with either 4G or VM cable broadband. To me that meant whatever was causing the aggressive dropping of packets was outside my internal network, although the modem remained a possibility (I did also test with a different VDSL modem with a different chipset, though).

If you are interested, some of the symptoms were things like:

Downloading on Steam caused massive packet loss. Internally shaping the downstream helped, but the loss was so extreme I had to shape really aggressively; traffic loss would only be near 0% once Steam was using less than 40% of line capacity. So quite extreme. Shaping via the Steam client limiter had similar results. The problem with both methods is that the download averages out slower but is spiky getting there, so any of those spikes could still trigger packet loss. Capping the RWIN, of course, provides a smooth reduction of bandwidth and allows much higher utilisation levels, but that's difficult to do on a modern Windows OS. My final configuration was actually to shape the ACKs, by restricting the upstream on Steam downloads, which kept the congestion window down much better than randomly dropping inbound packets. Without any of this, things like watching a stream with a Steam download in the background were impossible.
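The RWIN trick works because TCP throughput is bounded by window divided by round-trip time, so a capped window throttles smoothly instead of relying on drops. A back-of-the-envelope sketch (the window size and RTT here are illustrative, not my actual line figures):

```python
# TCP can never move more than one receive window per round trip, so
# max throughput (bps) = RWIN (bytes) * 8 / RTT (seconds).
def max_throughput_mbps(rwin_bytes: int, rtt_ms: float) -> float:
    return rwin_bytes * 8 / (rtt_ms / 1000) / 1e6

# A classic 64 KiB window on a 20 ms path caps out around 26 Mbps,
# however fast the line underneath is.
print(round(max_throughput_mbps(65535, 20), 1))
```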

Even single-threaded downloads would cause less severe, but still bad enough, packet loss on my FTTC connection, so they had to be throttled as well if I wanted to multi-task on the connection.

When I reduced the AAISP rate limiter (they allow you to tweak it but not disable it), the symptoms appeared earlier, at lower utilisation. The feature is ironically advertised as allowing things like VoIP to work smoothly whilst downloading, but for me it just caused everything to drop packets as throughput approached the limit. It could be set as high as 110% or so, which in theory would disable it, but traffic can of course still spike to those levels. At the 110% setting I had the best results.

But, as if by magic, these issues have disappeared now I'm on cable broadband. Of course I now have 1Gbit download (1.1 on the modem side), which obviously has a big impact as it's much harder to saturate; but looked at as a percentage, it doesn't start randomly dropping packets at as low as 40% utilisation, and all my internal network equipment is the same as before (at least it was for several months; I have since upgraded my main switch/AP). The 4G testing had a little under 150Mbit of capacity yet was a huge quality improvement over the FTTC testing: it could hit 96-97% on iperf before dropping packets, whilst FTTC struggled to get to 40%. It was when I started seeing this that I decided to move off the FTTC platform. I'd rather fix it by changing ISP than fight a battle I had no solution to.

In the cases I found on the net where a resolution was posted, it was basically moving off the FTTC platform, the same as I have done. All of these examples were Openreach UK; not a single report came from FTTC in another country. This feeds my theory that it is some kind of traffic-policing effect.

What I haven't done is test the performance of the AAISP rate limiter (with iperf) on the L2TP connection I have set up over VM (400Mbit). I'm pretty sure that feature is still there on L2TP, so it would be interesting to see whether it shows the same symptoms I had on FTTC.

Now maybe you are saying FTTP doesn't have the same inherent problems as FTTC, but you also said you had no such issues on FTTC either. Don't VM allow much more buffering than FTTC/FTTP ISPs? If I see a saturated VM TBB graph it usually shows a big jump in latency, but a saturated FTTC/FTTP graph might show only a small increase in latency while showing packet loss. So from what I can tell VM prefer buffering to dropping packets, whilst the xDSL ISPs seem to prefer dropping. My opinion is that a delayed packet is a lesser evil than a dropped one.

At the end of the day I can only post based on my experience; all of my theories as to the cause will remain theories, as I don't have the luxury of technical details outside my network, nor of testing those elements independently.
 
VM 1G vs Vodafone 900. Big difference.
 

Attachments: Screenshot 2023-12-20 130413.webp · Screenshot 2023-12-20 130648.webp
VM 1G vs Vodafone 900. Big difference.
Is that VM DOCSIS or XGS-PON? Would that even make a difference? (If the latency is due to VM backhauls, maybe?)

I ask as it looks like we're getting XGS-PON soon via Nexfibre and our only choice of ISP (initially) is VMO2.
 
Is that VM DOCSIS or XGS-PON? Would that even make a difference? (If the latency is due to VM backhauls, maybe?)
DOCSIS. I think XGS-PON is supposed to be better? Isn't it full fibre?
 
After looking into XGS-PON, the latency seems to be in line with what I have on Vodafone, in the 5 to 10ms range. Of course this is probably area-dependent as well, with some places not performing as well as others due to backhauls.
 
After looking into XGS-PON, the latency seems to be in line with what I have on Vodafone, in the 5 to 10ms range. Of course this is probably area-dependent as well, with some places not performing as well as others due to backhauls.
Does VM's XGS-PON go back to the local UBR and then onto the core like on HFC?
 
Since CF is now available for me, and the wayleave blocker has been removed, I am in a position to compare VM Gig1 to a PPPoE FTTP gigabit service on my own equipment.

The bug I mentioned with FreeBSD/pfSense is now fixed though, so I can enable scrub again, which in turn will allow me to use MSS clamping; in theory it should work as well as XGS suggested.

I do remain happy with the VM service, but my deal is due to end in three months. As much as I would love to, I don't think I can justify having two gigabit connections, so a decision will need to be made about what happens after that point.
 
My VM on WiFi. I'm coming to the end of my contract at the end of January, so I'm thinking I'll switch over to the FTTP that has just become available here this year. I don't game, so all in all VM wasn't too bad for me, but I'm in an area that has low use of VM. If it wasn't for the price increase out of contract I'd probably stay.
That is quite a high ping... I get about 15-17ms on Virgin HFC!
 
That is quite a high ping... I get about 15-17ms on Virgin HFC!
I've switched to FTTP now through Vodafone. Best switch I've made. I sometimes stream stuff like this: Transformers Rise of the Beasts 2023 UHD Blu-ray 2160p HEVC Atmos TrueHD 7.1 (size: 96.40 GB). With Virgin these wouldn't stream without buffering. People who say 25Mbps connections are all you need to stream 4K are only streaming from Netflix etc., not large true 4K video files. A ping of 4ms is a big improvement over Virgin too.
 
I've switched to FTTP now through Vodafone. Best switch I've made.
Whoa!
Nice!
That said, sometimes I stream similar stuff over Three or EE 5G and it works just fine, but I bet over FTTP it'd be even better.
 
I've switched to FTTP now through Vodafone. Best switch I've made.
Is that from RarBG or TorrentGalaxy? : D
 
Well, for a while I have both: VM IPoE Gig1, and CityFibre PPPoE 1000/1000.

Latency is 4ms, which is insane, and the upload speed is insane too.

However, when downloading, packet loss is in the gutter, like it was on VDSL. I'm investigating, and I keep finding pfSense PPPoE packet loss posts. :(

I'll report back if I fix it. I also have more powerful kit I have yet to deploy on pfSense. Of course, PPPoE might have nothing to do with it, as VM do seem to allow more buffered packets than other ISPs.

--

Will come back to this later. I have significantly reduced the loss by making sure max-mss is working properly and by using baby jumbo frames; 1452 was murdering it. That also boosted fast.com from 870Mbps (which was a little low) up to 930-940. I also made the CPU clock more aggressive; tomorrow or Wednesday I'll get the more powerful kit in place to feed that PPPoE beast.
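As a footnote on why a PPPoE MTU mismatch hurts so badly when the clamp isn't working properly: hosts that assume a 1500-byte path send full-size packets that a 1492-byte tunnel has to fragment (or drop outright if DF is set). A rough illustration, IPv4 assumed:

```python
import math

# Number of on-the-wire IPv4 fragments for a packet of a given size
# on a link with the given MTU (20-byte IP header per fragment).
def fragments(packet_size: int, link_mtu: int, ip_header: int = 20) -> int:
    if packet_size <= link_mtu:
        return 1
    payload_per_fragment = link_mtu - ip_header
    return math.ceil((packet_size - ip_header) / payload_per_fragment)

print(fragments(1500, 1492))  # plain PPPoE: every full-size packet splits in two
print(fragments(1500, 1508))  # baby jumbo frames: no fragmentation at all
```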
 

Copyright © 1999 to Present - ISPreview.co.uk - All Rights Reserved