
Nokia’s Bell Labs research division has established two new world records in submarine optical communications. The first achieved a net data rate of 800Gbps (Gigabits per second) on a single optical wavelength over a distance of 7865km, while the second hit a net throughput of 41Tbps (Terabits per second) over 291km via a C-band unrepeated transmission system.
In terms of the first 800Gbps record, the distance Nokia achieved is said to be twice what current state-of-the-art equipment can cover at the same capacity (7865km is approximately the geographical distance between Seattle and Tokyo). The milestone was achieved at the company’s optical research testbed in Paris-Saclay, France.
The second record of 41Tbps over 291km via a C-band unrepeated transmission system – the kind commonly used to connect islands and offshore platforms to each other and to the mainland proper – was set by Nokia subsidiary Alcatel Submarine Networks (ASN) at its research testbed facility, also in Paris-Saclay. The previous record for this distance was 35Tbps.
Nokia Bell Labs and ASN were able to achieve both world records through the “innovation of higher-baud-rate technologies”. “Baud” measures the number of times per second that an optical laser switches on and off, or “blinks” – in other words, the symbol rate. Higher baud rates mean higher data throughput and will allow future optical systems to transmit the same capacities per wavelength over far greater distances.
In the case of transoceanic systems, these increased baud rates will double the distance at which network operators could transmit the same amount of capacity, allowing them to efficiently bridge cities on opposite sides of the Atlantic and Pacific oceans. Cost savings also come from needing fewer transceivers and from avoiding the addition of new frequency bands.
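As a rough back-of-the-envelope illustration of how symbol rate, modulation and polarisation multiply up to a per-wavelength data rate: the article does not give Nokia’s exact symbol rate or modulation format, so the figures in this sketch are purely assumed.

```python
# Illustrative only: the symbol rate, bits per symbol and polarisation count
# below are assumptions, not Nokia's published parameters.

def net_rate_gbps(symbol_rate_gbaud, net_bits_per_symbol_per_pol, polarisations=2):
    """Approximate net data rate per wavelength, in Gbps."""
    return symbol_rate_gbaud * net_bits_per_symbol_per_pol * polarisations

# A hypothetical ~130 Gbaud signal carrying ~3.1 information bits per symbol
# per polarisation (after FEC and shaping overheads) lands around 800 Gbps.
print(round(net_rate_gbps(130, 3.1)))  # ~806 Gbps on a single wavelength
```

The same arithmetic can instead be spent on reach: a higher symbol rate paired with a more robust, lower bits-per-symbol modulation keeps the per-wavelength rate roughly constant while tolerating more distance, which is the trade-off the announcement describes.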
Sylvain Almonacil, Research Engineer at Nokia Bell Labs, said:
“With these higher baud rates, we can directly link most of the world’s continents with 800 Gbps of capacity over individual wavelengths. Previously, these distances were inconceivable for that capacity. Furthermore, we’re not resting on our achievement. This world record is the next step toward next-generation Terabit-per-second submarine transmissions over individual wavelengths.”
Put another way, network operators will continue to extract significantly more data capacity from existing subsea optical fibre cables, which will help to feed the ever-rising levels of demand for data from broadband ISPs and other service providers. This will in turn help to keep capacity costs down, since adding new cables to cope with demand is an extremely expensive option.
800Gb is plain stupid. I’ve only got 50Mb and it does everything I need it to, so I won’t be upgrading anytime soon. This fibre dream is going to end in tears
I assume that’s meant as a joke 🙂
I do wonder, as FTTP take-up increases, whether contention ratios will come into play and slow individual users down if everyone is using lots of data at the same time?
It’s the same game ISPs have played for years. You don’t think Virgin Media (for example) can give every 1Gbit customer 1Gbit all at the same time, do you? For one thing, they don’t have enough peering capacity to do that. For another, at some point the dark fibre carrying it all to the head end will get saturated as well. Same goes for Openreach: they couldn’t support everyone who has 1Gbit GPON all doing 1Gbit at the same time. A good network knows when to light up new fibres to carry the load. A bad one doesn’t care and says meh, contention. We have 10Gbit fibre at work, and I rarely see it do sustained traffic over 600Mbit, let alone 10Gbit, and that’s handling a reasonably large UK website as well as all us work-from-home types VPNing in.
Personally, when it comes to contention I think it’s better to aggregate as much as possible at the first point of contention along the route. For example, 200 users sharing a 10Gbps backhaul might be better than 10 users sharing a 5Gbps backhaul, because the larger pool smooths out the peaks. Only a small amount of aggregation at the first choke point can potentially result in a postcode type of lottery, if you get unlucky and have a bunch of heavy users. Generally speaking, as far as I am aware, there aren’t really any contention issues upstream of the first point of aggregation, and it gets easier and easier as you approach the core, because utilisation trends are extremely predictable and ISPs can design their network to always have a decent amount of overhead available.
200 is arguably still a low number, but it’s about what you’d get on fttc, for example.
Typical in modern fttp networks would be 1000+ at cabinets or much more at exchanges. You only really start to see a really smooth profile at around 10,000 users.
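A toy simulation can illustrate the statistical-multiplexing point made in the comments above. It is not based on any ISP’s real traffic data; the activity probability and burst rate are assumptions chosen purely for illustration.

```python
# Toy statistical-multiplexing sketch: each user is "active" with a fixed
# probability and pulls a fixed burst rate while active. The bigger the pool
# sharing a proportionally sized backhaul, the less spiky the aggregate load.
import random

def peak_utilisation(users, backhaul_gbps, active_prob=0.05,
                     burst_gbps=0.5, trials=1_000, seed=1):
    """Worst observed fraction of the backhaul used across random snapshots."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        active = sum(rng.random() < active_prob for _ in range(users))
        worst = max(worst, active * burst_gbps / backhaul_gbps)
    return round(worst, 2)

# Same users-per-Gbps ratio at three aggregation levels (assumed figures):
print(peak_utilisation(32, 1.6))        # splitter-sized pool: very spiky
print(peak_utilisation(1_000, 50))      # cabinet-sized pool: much smoother
print(peak_utilisation(10_000, 500))    # exchange-sized pool: close to the mean
```

With the same users-per-Gbps ratio throughout, the small pool can still spike to several times its average demand, while the 10,000-user pool barely strays from it.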
Contention ratios across networks as a whole have never been higher and everything seems to be working fine.
Mostly people are doing the same things, just faster. No killer apps capable of eating a gigabit per second, day in, day out, right now.
Real contention is at the server end.
Unless you’re a mega provider hooked into CDNs etc., your connectivity from the server and the size of whatever typically needs to be sent through your peer will limit the number of simultaneous connections you can support.
Obviously the greater the bandwidth of the pipe, the more people you can support; not just because of the raw bandwidth, but because each user’s interaction consumes an ever smaller slice of time, so even huge oversaturation doesn’t result in horrendous wait times, aka latency.
Media streamers obviously require the most bandwidth for concurrency with guaranteed latency, but they are geared up for that.
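The latency point in the comment above can be sketched with a basic M/M/1 queueing model. The 10 Mbit transfer size and single-bottleneck assumption are simplifications for illustration, not a model of any real server or ISP.

```python
# Rough M/M/1 sketch: scale both link speed and offered load by the same
# factor, and utilisation stays constant while per-transfer delay shrinks.

def mm1_sojourn_ms(link_gbps, offered_load_gbps, transfer_mbit=10.0):
    """Mean time (ms) a transfer spends queued plus in transmission."""
    service_rate = link_gbps * 1000 / transfer_mbit        # transfers/second
    arrival_rate = offered_load_gbps * 1000 / transfer_mbit
    assert arrival_rate < service_rate, "link would be saturated"
    return 1000 / (service_rate - arrival_rate)

# Same 80% utilisation, ten times the capacity -> roughly a tenth of the delay.
print(round(mm1_sojourn_ms(1, 0.8), 1))    # 1 Gbps link at 80% load  -> ~50 ms
print(round(mm1_sojourn_ms(10, 8.0), 1))   # 10 Gbps link at 80% load -> ~5 ms
```

At identical utilisation, the faster link clears each transfer ten times sooner, so queues drain faster and waiting times fall in proportion.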
N – “Typical in modern fttp networks would be 1000+ at cabinets or much more at exchanges”
Isn’t the OR design for FTTP essentially however many users are contended at the splitter node on the pole or draw pit, and the amount of backhaul it has available for those 8(?) connections? Or am I missing something? And then obviously contention further up at the exchange and so forth.
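For what it’s worth, the worst-case arithmetic at the splitter is straightforward. The 32-way split and PON line rates below are common GPON/XGS-PON figures used as assumptions for illustration, not a statement of Openreach’s actual design.

```python
# Back-of-the-envelope PON arithmetic (assumed split ratio and line rates).

def worst_case_share_mbps(pon_gbps, split):
    """Per-user downstream rate if every user on the splitter maxes out at once."""
    return pon_gbps * 1000 / split

print(round(worst_case_share_mbps(2.4, 32)))   # ~75 Mbps on a 2.4 Gbps GPON, 32-way split
print(round(worst_case_share_mbps(10, 32)))    # ~312 Mbps on a 10 Gbps XGS-PON, 32-way split
```

In practice users rarely peak simultaneously, so the effective per-user rate sits well above this worst case most of the time.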
That’s great, but I can’t get 30Mbps from BT in a busy town on the east coast of Scotland. No other network available.