Previous incidents
Degraded Network Performance NYC & AMS
Resolved Oct 07 at 01:25am CEST
The issue appears to be resolved; we are continuing to monitor the situation.
Network Issues AMS
Resolved Sep 20 at 11:50pm CEST
Full Status Report:
At 21:10, our Amsterdam (AMS) site was hit by multiple unusually large DDoS attacks, peaking at 3 Gpps (3 billion packets per second). This temporarily degraded mitigation, affecting customers using the “Always On” protection filter.
Between 21:10 and 22:03, we adjusted filtering configurations to reduce load on the filtering servers.
At 22:03, we deployed a final change. During a subsequent large attack, an unforeseen configuration issue caused packet filtering to revert to less efficient mechan...
NYC high packet loss
Resolved Sep 19 at 10:25pm CEST
We've implemented a solution.
Packet Loss AMS09
Resolved Sep 16 at 06:42pm CEST
A connection limit appears to have been reached; we resolved the issue by increasing the limit.
Degraded Network Performance
Resolved Sep 09 at 09:41pm CEST
We've implemented a solution and are already upgrading our protection capacity to prevent a recurrence.
Partial Transit Reroute
Resolved Sep 01 at 07:51pm CEST
Between 18:18 and 18:20 CEST, we detected a partial outage affecting two transit ports on Edge Router 1 (AMS17). Our network automatically rerouted traffic around the affected ports. This caused a brief reroute for a small portion of traffic; the majority of traffic was unaffected.
Degraded Network Performance
Resolved Aug 28 at 11:29pm CEST
We've implemented a solution and are already upgrading our protection capacity to prevent a recurrence.
VPS outage
Resolved Aug 21 at 09:15pm CEST
Post-Incident Report: AMS VPS Cluster Outage
Date: 21st of August, 2025
Impact Period: ~3:00 PM – 4:08 PM CEST
At approximately 3:00 PM CEST, our AMS VPS cluster experienced an outage due to an unexpected loss of power on the A feed. Under normal conditions, the B feed should have taken over automatically, but this failover did not occur. Upon investigation by our on-site technician, we discovered that the B feed was overloaded. This issue was traced back to an error in the PDU reporting so...
AMS 01 downtime
Resolved Aug 08 at 02:05pm CEST
The node is back online; we are scheduling maintenance to replace the faulty stick.