System operational
Maintenance
Partial decommission and migration of RT-QFX10K-FKT

In early August, we'll begin the migration of RT-QFX10K-FKT (QFX10008) at Equinix FR7.

The migration will happen in multiple stages; we'll migrate transit and peering ports over the course of a few weeks.

RT-PTX10K-FKT3 will be the replacement for RT-QFX10K-FKT.

FKT3 is our new PoP at Equinix FR5.

FR5 offers a lower ambient temperature and easier shipment/delivery handling.

We'll also deploy a new core router stack at Equinix FR5. Both RT-PTX10K-FKT3 (Equinix FR5) and RT-QFX10K-FKT2 (Equinix FR8) will be connected to this new stack.

This will increase redundancy for almost all customers, since L3 interfaces and BGP sessions will be terminated on rt-qfx5k-fkt3.

We will no longer terminate new customer BGP sessions on our edge routers.

We will notify all customers affected by the migration at least two weeks in advance.

Partial decommission and migration of the FR7 PoP

Starting August 31st, 2025, we will begin the decommissioning of our Equinix FR7 PoP. The migration is expected to be completed by the end of the year.

FR7 is currently limited in terms of available power, and ongoing rack availability has become a challenge. In contrast, FR5 offers a lower ambient temperature and significantly improved logistics for equipment deliveries.

Equinix increases prices across all services by 5% annually. Until now, we have absorbed these increases and not passed them on to our customers. However, due to the current power constraints at FR7, a price adjustment is expected by the end of this year.

To ensure long-term service stability and continued growth, we’ve made the decision to move operations to Equinix FR5.

We understand that this change may come as a surprise to some customers. Customers using L3-only services, including colocation customers using BGP only, will be migrated to Equinix FR5 (FKT3) by September 2025.

The following racks will be migrated as part of this process:

  • Rack 0328

  • Rack 1602

  • Rack 0702

We will notify all customers affected by the migration at least two weeks in advance.

Past Incidents

1st June 2024

q10k8 BGP Flap

While migrating one of our upstream providers, the device in question (q10k8) accidentally leaked too many routes to another upstream, disrupting the BGP sessions.

We restored connectivity via ZET.net and, a few minutes later, via aurologic again.

  • Post mortem about this incident and why it took so long to restore connectivity:

    A little bit of backstory:

    On May 31st, we started the migration to q10k8 (the new edge router at Equinix). Everything went quite well, and we were able to migrate almost all prefixes that we advertise to the DFZ without any interruption. We had a few DNS issues along the way because of the old QFX5100 stack, but these were resolved quickly.

    At around 3:30 PM CET on June 1st, the migration of all prefixes from ear.fr7 to q10k8 was complete. We just had to make some minor config changes and perform some equipment maintenance on-site.

    This also involved migrating one of our transit providers (ZET.net) to 100G. At 09:23 PM on June 2nd, we had everything ready to re-advertise all prefixes to ZET, and we committed the configuration. However, we had an error in our export policy towards another upstream provider, which led to the redistribution of too many prefixes towards that upstream, causing the BGP neighbour on the other side to go into the "Idle" state (MaxPath).

    The error was caused by two rollbacks on the router, which left the export policy missing a "from community" statement (a simplified sketch of such a policy follows at the end of this post-mortem).

    At this time, we were not advertising any routes to ZET, so we had to:

    1. gain access to q10k8 first
    2. restore connectivity via ZET

    However, we thought that the active routing engine might have had some kind of OOM (out-of-memory) issue at that time, so we rebooted it.

    This reboot took up most of the 33 minutes. As soon as we realized that something was wrong with that one particular session, we restored connectivity via ZET and, shortly after, via aurologic again.

    As soon as everything was back up, we fixed the configuration issue.

    Will this incident happen again? Very unlikely. Such an event can only happen when multiple factors come together, and unfortunately, that was the case here.
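
    To illustrate the class of mistake, here is a minimal, hypothetical Junos-style policy sketch. The policy names, community, and AS numbers are illustrative assumptions, not our actual configuration. The point it shows: if the "from community" match in the first term is lost (for example through a rollback), the terminating accept applies to far more routes than intended.

        policy-options {
            /* Hypothetical community marking our own/customer prefixes */
            community OWN-PREFIXES members 64500:100;
            policy-statement EXPORT-TO-UPSTREAM {
                term ADVERTISE-OWN {
                    /* If this "from community" line is missing, the term
                       matches (almost) everything instead of only the
                       tagged prefixes. */
                    from community OWN-PREFIXES;
                    then accept;
                }
                term REJECT-REST {
                    then reject;
                }
            }
        }

    With the match present, only the tagged prefixes are exported to the upstream; without it, far too many routes can leak until the neighbour's prefix limit tears the session down.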

31st May 2024

No incidents reported

30th May 2024

No incidents reported

29th May 2024

No incidents reported

28th May 2024

No incidents reported

27th May 2024

No incidents reported

26th May 2024

No incidents reported