Network operator CityFibre has apologised to their business connectivity users in Leeds (West Yorkshire, England) after an attempt to replace an old 288f fibre joint cable, which was due to be upgraded to 432f, resulted in a “significant number” of customers (e.g. those taking Dark Fibre and Ethernet services) being hit by connectivity problems.
The problems, which stemmed from pre-planned work (CHG0078467), appear to have begun around the start of July – after the original work overran by several days and related faults were later picked up under a different code (INC0151050). During this period, a multitude of customers complained of protracted connectivity problems and a lack of clarity from CityFibre on how long a full resolution would take.
The issue didn’t impact all of their customers in the area, and the operator was able to get most circuits back online within a few short days (the majority of impacted services were restored by 11th July), but sadly some have been left without working Dark Fibre and other links for over two weeks. Such protracted outages of high-capacity business lines are a serious issue, and one that has now prompted an apology from the operator.
According to CityFibre’s customer notice, “Firstly, we want to say sorry. This has been a major inconvenience and frustration for you and your customers … Whilst we have successfully carried out over 700 maintenance activities this year within the network, on this occasion there were undoubted mistakes made by our outsource engineering supply chain during the complex upgrade, which compounded the issues we faced.”
In addition to this, the operator said that a cable was also damaged by a third party during the service restoration process, which further delayed their repairs. Difficulties in repairing the legacy network were then encountered due to incomplete fibre records on previously acquired networks (we think the core route might have been one that previously belonged to KCOM, until 2015).
CityFibre are now conducting a “thorough review” of their current change and engineering processes, which will hopefully help them to identify areas for improvement and implement the necessary changes. “We will now be entering into a deep consultation with our supply chain partner and internal teams to fully identify every learning opportunity,” said the operator. Further details on the incident are likely to be revealed as part of their final Post Incident Review, which is due in the next few weeks.
I fear you’ve moved Leeds from the West Riding to the South one there…
Oops 🙂
Just as well you’re a distance from Leeds, Mark. Folks there don’t take too kindly to being associated with the South Riding. 😀
There isn’t a South Riding and it would be a farthing if there was.
No surprise really, Cityfibre’s Facebook page contains hundreds of comments complaining about their installation work and poor communication.
Maybe, but these are DF and Leased Line customers, so their SLAs will have been hammered and CF will have a deep bill to pay out too. Kellys are doing a lot of CF work, so maybe it was them.
“Maybe, but these are DF and Leased Line customers, so their SLAs will have been hammered and CF will have a deep bill to pay out too.”
Except it won’t. The SLAs pay out nothing like what they bring in – SLAs are more a token thing than a meaningful rebate – and ultimately don’t cover the real cost of disruption.
I guess I am lucky then – but then again BT are the best. 🙂
Blaming an outsourced engineering team is not really the way to go; they had ample time to realise that something had gone catastrophically wrong and to deploy their own staff to the location to resolve it.
True, but I am not sure where these CF staff you speak of are? Most work is done by Kellys and other companies. Until I had an accident which put me in a wheelchair in March last year, I was a Kellys engineer working a CF contract.
Most work is done by contractors rather than direct employees, mainly because otherwise you have to pay employees to sit and drink tea while keeping them on the books; paying a company to deal with all of that, and not having to sack people yourself, looks better.
Sorry to hear you’re wheelchair-bound now, Charles. Should it be temporary, I wish you a speedy recovery; if not, hopefully you can manage to adapt well to it.
As a current CF employee who has worked on the Metro network (which is what is being discussed here), I can assure you that ERS, maintenance and network upgrades are NOT carried out by Kelly’s. They have absolutely zero input into the fibre network other than new installations. I do know which contractors will have carried out this work, and the issues will lie with the unreliable and poor network records CF hold and use, and a lack of labelling in the UG network, rather than any issues with their quality of work. Network records are going to be the downfall of many fibre providers, as they are in my experience very poor across the industry.
Of course, it must be someone else’s fault not Cityfibre’s…
Come on pull the other one, please don’t insult our intelligence.
So Cityfibre apologises, but then a Cityfibre employee comes along and says ‘oh no, it’s not our fault’.
Pretty sure they put the blame on CityFibre for carrying out disruptive work while having poor records and not labelling fibres properly, not the contractors.
They seem to be saying that CityFibre made the call to carry out the work and decided on the methodology, despite incomplete records and uncertainty over their ability to complete the work without issues.
‘A frustrated Employee’ IS blaming CityFibre … for poor record keeping.
They’re saying it isn’t Kellys, which other comments are suggesting. CF obviously use more than one contractor and apparently not Kellys for emergency Fibre work.
I think some people need to give their glasses a polish – I DID blame CF
This isn’t a rare outage. We’ve had a similar issue with a connection in Bradford prior to this, where the client was offline for a week due to repair work to fix “Historic Issues”.
CF told us the Leeds repair was on a smaller scale than Bradford, which doesn’t seem to have been the case given the lengthy outage. The Leeds repair was originally scheduled for May, but was aborted at the last minute.
And this is last week’s news, as CF has a major, unexplained outage in Leeds again today.
And let’s not forget it’s barely two months ago that they sliced through Virgin’s fibre route, causing three days of outages on Virgin’s core, as well as knock-ons for Virgin backhaul users including TalkTalk and CityFibre themselves, who took their own customers out in the North East as they were backhauled via the VM fibre they’d cut.
We used to have a lot of respect for KCOM and were a large wholesale partner of theirs, back in the days when a planned maintenance window for a wavelength meant a phone call before the service was taken down for a card infill in the ROADM chassis, and another call 15 minutes later to check it was back up.
Meanwhile, under the guise of CF, our remaining KCOM waves were taken down for three days because they’d decided splicing needed completing to bring the Milton Keynes new build into service, and rollox to those paying thousands a month for commercial services; apparently bringing online an area to bosh out 30 quid a month domestic services was more important.
And that, my dear readers, sums up the downturn in what was once an exceedingly well managed infrastructure.