Network Status

News & Information

URGENT SECURITY PATCH FOR 'ZENBLEED' VULNERABILITY: IMPORTANT SYSTEM UPDATES (Resolved)

Affecting Other - Cloud infrastructure | Priority - Critical


In response to the recent discovery of a critical hardware vulnerability known as 'Zenbleed', our dedicated security and engineering teams developed and deployed an urgent security fix. Applying this patch required a temporary shutdown of systems to ensure the robustness of our security response.

Despite our best efforts to implement a solution that would not necessitate downtime, initial attempts using the provided mitigation methods failed to address the issue adequately. Our penetration testing team confirmed that the vulnerability was exploitable within a virtual machine environment. We used internally developed tools to verify that customer data was safe and had not been exfiltrated. Faced with the paramount necessity to protect our systems and clients, we made the decision to proceed with the CPU microcode update, which protects against the vulnerability.
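For reference, the public Zenbleed advisory describes an interim software workaround for affected Zen 2 CPUs: setting bit 9 of the DE_CFG MSR (0xC0011029). The sketch below only computes the new MSR value; actually applying it requires root and msr-tools (rdmsr/wrmsr) on an affected host, and the deployed microcode update makes the workaround unnecessary.

```shell
# Compute the DE_CFG MSR value with the Zenbleed workaround bit (bit 9) set.
# Pure arithmetic only; writing MSR 0xC0011029 would require msr-tools and root.
set_chicken_bit() {
    printf '0x%x\n' $(( $1 | (1 << 9) ))
}

set_chicken_bit 0x0      # -> 0x200
```

This is shown only to illustrate the published interim mitigation, not the microcode fix we applied.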

Zenbleed is a critical vulnerability affecting the CPU hardware of our systems. Its potential impact includes unauthorized access, data breaches, and disruption of services. Despite the severity of this threat, our team was able to react promptly and take effective measures to mitigate the issue.

Please be assured that our company prioritizes the security of our systems and the protection of our clients' data above all else. During the system downtime, we made every effort to minimize disruptions to services and to ensure that the system updates were applied efficiently and effectively.

We apologize for any inconvenience caused by this unplanned system downtime. However, these decisive steps were necessary to protect the integrity of our services and our commitment to customer data security.

Our teams will continue to monitor the situation closely, making further adjustments as necessary to assure system stability and security. Regular updates on the situation will be provided as they become available.

We highly appreciate your understanding and cooperation during these critical times. We are confident that these actions will further strengthen our security infrastructure and maintain the trust you have placed in us.

For further information and any queries, please do not hesitate to contact us.

Useful links:

https://www.amd.com/en/resources/product-security/bulletin/amd-sb-7008.html

Date - 01-08-2023 17:30 - 01-08-2023 17:47

Last Updated - 01-08-2023 19:05

Upstream Disturbance (Resolved)

Affecting System - Networking | Priority - Low


One of our providers is experiencing issues with fiber optics in several destinations.

We have rerouted the network to an alternative path.

 

Following the massive fiber incident we experienced between Munich and Vienna, our customers connected in Eastern Europe, or in transit from Eastern Europe to other locations, may have experienced periods of higher latency or packet loss. The reason was that, while this massive outage was active, we experienced several other fiber incidents that reduced the number of backbone paths available to handle the traffic. We took actions to mitigate the problem while it was ongoing, and we also tried to bring up temporary capacity on the available paths.

As a learning organization, we see this as a good opportunity to learn from the incident. Even though the number and combination of fiber outages in the same region was completely unusual, we will evolve the topology of the network in this region to improve its resiliency.

At a high level, the plan consists of:

-	Deploying an additional backbone route between Belgrade and Milano.
-	Upgrading the capacity of the existing backbone paths so they can handle the traffic when multiple fiber outages impact several backbone paths.

Our goal is to have these projects completed by the end of Q3 2023.

We would like to provide some additional details for each of the fiber outages we experienced:

Fiber outage Munich-Bratislava:
o	Start: 2023-07-08 03:46 UTC+2
o	End: 2023-07-11 22:04 UTC+2
o	RFO: Vandalism. Fire in a bridge in the Munich area.
Police and structural engineers blocked access to the bridge due to safety and health concerns. Access was only granted on Monday, 10th July. Our provider had to lay a new 400-meter duct on both ends of the bridge. The solution is permanent.

Fiber cut Hamburg-Bremen
o	Start: 2023-07-10 10:04 UTC+2
o	End: 2023-07-10 14:45 UTC+2
o	RFO: Excavation works on the A1 highway

Fiber cut Bratislava-Bucharest
o	Start: 2023-07-10 18:05 UTC+2
o	End: 2023-07-10 22:24 UTC+2
o	RFO: Agricultural Machine

Fiber cut Warsaw-Bratislava
o	Start: 2023-07-11 16:36 UTC+2
o	End: 2023-07-11 21:20 UTC+2
o	RFO: Vandalism. Infrastructure damaged between Katowice and Zory (Poland).

All the above outages are now resolved, and our backbone capacity is fully operational.

 

Case 1 Bratislava-Budapest

We experienced a high-attenuation issue between Bratislava and Budapest. This caused our backbone links to accumulate errors, resulting in packet loss for traffic traversing this segment.

We removed traffic from the affected segment, adjusted power values in our DWDM systems and restored the traffic.

The attenuation will be corrected by our vendor during a planned maintenance window.

Case 2 Eastern Germany

UPDATE:

Fiber issues located in the area of Eastern Germany (Emstek-Bramsche) were resolved, and we also performed emergency works to obtain additional capacity.

Your service should therefore be working fine, without any packet loss or extra latency.

We apologize for this inconvenience.

Thanks

============================================================
============================================================
Regarding the outage between Munich and Bratislava, our vendor's fiber team has been given clearance to begin work at the damaged bridge site. They are en route to begin the repair. ETA and ETR will follow soon. The vendor has prioritized a tube/fiber restoration plan to speed up restoration after splicing.
============================================================
An excavator is on-site and has started the earthworks. The splicing team is waiting for the earthworks to be completed. No ETR is currently available.
============================================================
Our fiber provider in Eastern Germany has located the fiber cut and is travelling to the point of the cut to start the repair works. No ETR yet.

The outage between Munich and Bratislava remains active. It was caused by a fire in the Munich area. The police department has not released the area as of yet. We are working to divert traffic on an emergency basis to a third backbone path. We expect to have the packet loss, if not solved, at least mitigated within the next 3 hours.

Best regards
************************************************************
Customers with traffic in transit to, or connected in, Eastern Europe may be experiencing latency or packet loss due to the combination of 3 different fiber outages in Europe that are impacting 3 major backbone routes:

The first outage is between Munich and Bratislava and was caused by a fire. Police suspect that the fire was provoked and are investigating root causes, working together with infrastructure engineers who are assessing the area to ensure that repair works can be conducted safely.

The other outages started earlier this morning, located in the area of Eastern Germany (Emstek-Bramsche). They are impacting 2 major backbone routes at the same time. Our fiber provider has started to investigate.

Apologies for any inconvenience caused

Case 3 

Our provider confirms that the repair work has been completed.

Apologies for any inconvenience caused.
************************

The fiber cut in Munich is also resolved but works on site are continuing so the fibers will be at risk until all work is completed. We will inform you as soon as we have confirmation that all works are completed.
************************
The fiber cut in Poland is resolved, we are still awaiting confirmation from our provider that works are completed.

The capacity issues to some Eastern Europe hubs were resolved and your service should be working without packet loss.

For the Munich issue, our fiber provider informed that repairing works are still progressing but no ETR was provided.

We will inform you as soon as we have more information.

************************

Our fiber provider in Poland provided an ETR: 21:00 CEST. We will inform you as soon as we have more information. Thanks
***********************************
Our fiber provider in Munich informs us that works are continuing to lay the new duct. The east side of the duct is prepared; they are working to locate the west side of the duct. Works are being slowed down by the presence of a 110 kV cable that requires special safety and health precautions. We will inform you as soon as splicing has started.
************************************************************
Due to a new outage in Poland, we are again having some capacity issues at some Eastern Europe hubs.

The fiber outage between Munich and Vienna is still active. It was caused by a massive fire under a road bridge. The bridge structure was damaged, so telecom operators were not able to access the bridge until yesterday. Our fiber provider is currently deploying a new duct and, once done, will splice the fibers at both ends of the bridge. The initial estimate is that this outage will be restored this evening.

The second outage contributing to the issue is located in Poland, impacting an alternative backbone path we were using to mitigate the packet loss. We will update you as soon as we have an ETR for any of the fiber outages.

Thank you for your patience
************************************************************
Our IP engineering team performed traffic engineering to balance traffic and mitigate the saturation. Your service(s) should be working fine now; you may detect some extra latency, but the packet loss should be resolved.
******************
Due to an ongoing massive outage between Bratislava and Vienna, we are seeing some packet loss on the alternative path. You may therefore experience some packet loss and/or extra latency. Our IP Engineering team is currently working on traffic engineering to mitigate the saturation until our fiber provider resolves the fiber cut.

Our fiber provider informed us that they are working to restore the fibers as soon as possible, but no ETR was provided. Apologies for any inconvenience caused.

Date - 08-07-2023 08:10 - 12-07-2023 01:00

Last Updated - 12-07-2023 16:18

Network maintenance (Resolved)

Affecting Other - Network | Priority - High


Our technicians will upgrade our firewall cluster to improve throughput of the network.

Expected downtime 0-5min.

Date - 07-02-2023 00:00 - 12-07-2023 16:18

Last Updated - 06-02-2023 16:08

Network reconfiguration (Resolved)

Affecting Other - Network | Priority - Medium


On 2022.10.06 at 22:00 CET a scheduled network reconfiguration will take place, which may result in brief downtime.

 

Date - 06-10-2022 10:00 - 06-02-2023 16:06

Last Updated - 06-10-2022 12:20

Node migration (Resolved)

Affecting Server - Virtualizor Cloud | Priority - High


Customers on node 1 (vz1.serverastra.com) will be migrated to other nodes.

Node vz1.serverastra.com will be taken down for maintenance.

Date - 16-07-2022 22:50 - 06-10-2022 12:35

Last Updated - 15-07-2022 23:51

Cloud issue (Resolved)

Affecting System - One of cloud nodes | Priority - Critical


A minor DDoS attack decreased the accessibility of one of the nodes; we are mitigating the issue ASAP.

The system is currently under heavy stress, so we are working out ways to decrease the load.

Date - 15-07-2022 22:43 - 15-07-2022 23:47

Last Updated - 15-07-2022 23:46

Issues with cloud bandwidth (Resolved)

Affecting Other - Cloud bandwidth calculation | Priority - Critical


Issues have been detected in Stellar Cloud bandwidth monitoring.

We have resolved the issue by resetting traffic counters for users until the end of the month.

Date - 27-04-2022 18:04 - 27-04-2022 22:00

Last Updated - 27-04-2022 22:05

IRP system switch of providers (Resolved)

Affecting Other - Networking | Priority - Low


During this period our IRP system misfired during maintenance and disconnected one of the providers, which resulted in packet loss and latency issues for at least 1 minute and 9 seconds.

As usual you can check our status, details of our uptime and subscribe for updates at https://status.serverastra.com/

Date - 11-04-2022 23:00 - 11-04-2022 23:10

Last Updated - 12-04-2022 11:06

Issues with local traffic (Resolved)

Affecting Other - Network | Priority - Low


Due to major changes to the network structure, latency has been introduced when accessing local providers.

Resolution is scheduled for March 10th with restoration of BIX connectivity.

Postponed until April 30

Date - 12-12-2021 20:22 - 27-04-2022 22:06

Last Updated - 11-04-2022 11:52

DDoS attack (Resolved)

Affecting Other - Network | Priority - High


We're experiencing a DDoS attack, for now some services are heavily filtered to protect our network.

Date - 27-03-2022 11:48 - 27-03-2022 13:37

Last Updated - 27-03-2022 13:37

Datacenter transfer (Resolved)

Affecting Other - Network | Priority - High


Dear Customers,

We had to postpone the transfer from November to December due to technical issues.

December 09, 2021, at 22:00 CET point of presence Hungary, Budapest - we will start moving our equipment to our new datacenter space.

Because the systems must be physically moved, we will have to disconnect them in a particular order, with a downtime of at least 15 minutes per system.

Our chat system will be offline for the duration of the transfer. You can use the support ticket system for escalation.

This transfer includes cloud systems, dedicated servers and networking equipment.

1. Network move - 15 minutes downtime for the propagation of new settings.

2. Systems move - 15-minute downtime per rack.

3. 1 hour - maximum acceptable downtime.

If your system remains offline for more than 15 minutes after the shutdown, please escalate the issue in the ticket system.

Estimated time of completion - December 10, 2021, at 03:00 CET

Thank you for your understanding and patience

Date - 09-12-2021 22:00 - 22-02-2022 20:24

Last Updated - 09-12-2021 19:21

Renumbering of IPv6 to ServerAstra subnets (Resolved)

Affecting Other - IPv6 | Priority - High


Dear Customers,

Please be informed that due to future significant improvements to our services we are applying IPv6 renumbering from subnet 2001:4c48:2:8400::/56 to 2a10:c800:1::/48

We are performing this operation to provide customers with /64 end-user subnet deployment per the RIPE-recommended IPv6 partitioning procedure.

This will also allow us to transfer our services to our own datacenter space in the near future which will significantly increase reliability and security of the deployment.

With this partitioning scheme, we announce the availability of expanded IPv6 space. You will be able to order /56 and /48 IPv6 subnets for medium deployments (e.g. VPS/VPN servers) and large deployments (corporate networks).

The renumbering will commence within this month. Every affected customer will be informed personally via ticket and provided email with the new IPv6 subnet assignment.

Each customer will receive a subnet in the form 2a10:c800:1:0-ffff::/64, out of which they can assign any IPv6 address. The designated static gateway for routing is 2a10:c800:1::1.

Stellar Cloud users will get the first IP of the subnet automatically added to their configuration and subnet assignment will appear in their cloud unit configuration.

To apply the Stellar Cloud unit configuration, you need to power off and start your system so that the external security settings are updated.

The IPv6 address of our local resolver has been changed to 2a10:c800::100. Please make sure to update your resolver to the new one. The previous one, 2001:4c48:2:8400::2, will no longer respond.
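As a minimal sketch of the resolver change (assuming a Linux system with a classic /etc/resolv.conf; the file path and tooling on your system may differ), the retired address can be swapped for the new one like this:

```shell
# Hypothetical helper: replace the retired resolver with the new one in
# resolv.conf-style text. Against the real file, run as root with sed -i.
update_resolver() {
    sed 's/2001:4c48:2:8400::2/2a10:c800::100/'
}

printf 'nameserver 2001:4c48:2:8400::2\n' | update_resolver
# -> nameserver 2a10:c800::100
```

If your system generates resolv.conf automatically (e.g. via DHCP or a network manager), update the resolver there instead so the change persists.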

After 28-08-2021 the subnet 2001:4c48:2:8400::/56 will lose support, and after 10-09-2021 it will be disconnected.

If you have any questions in connection with the above, please do not hesitate to contact us.

Thank you for your patience and understanding!

Date - 28-07-2021 00:00 - 13-10-2021 18:50

Last Updated - 29-07-2021 15:47

System migration (Resolved)

Affecting Other - VDS node cloud node 5 | Priority - High


Due to hardware issues on the platform, VDS cloud node 5 is being migrated to other systems.

The VDS systems will be shut down and restarted on a new system after migration.

This does not affect Stellar Cloud or any other services of ServerAstra

Date - 08-03-2021 00:00 - 10-03-2021 13:32

Last Updated - 07-03-2021 11:37

Website issues (Resolved)

Affecting Other - Knowledgebase unavailable | Priority - Low


Currently the knowledgebase articles are not being rendered. We are trying to resolve the issue ASAP

Date - 17-09-2020 00:00 - 17-03-2023 02:12

Last Updated - 17-09-2020 17:27

Stellar Cloud issue with setup of new systems and rebuilding (Resolved)

Affecting Server - Virtualizor Cloud | Priority - Medium


During creation and any rebuild of a system, disk access was broken due to differences in sector size. This issue has been mitigated.

Customers can use our 512e disk support (512-byte logical and 4096-byte physical sector size) or, for faster speeds, 4096-byte native sectors, which map directly to our NVMe drives.
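On Linux, the logical and physical sector sizes of a disk can be read with e.g. `lsblk -o NAME,LOG-SEC,PHY-SEC`. A small helper (hypothetical, for illustration only) maps the pair to the common label:

```shell
# Hypothetical helper: label a disk's sector layout from its logical and
# physical sector sizes (values obtainable with: lsblk -o NAME,LOG-SEC,PHY-SEC).
sector_mode() {
    if [ "$1" -eq 512 ] && [ "$2" -eq 4096 ]; then echo 512e
    elif [ "$1" -eq 4096 ] && [ "$2" -eq 4096 ]; then echo 4Kn
    elif [ "$1" -eq 512 ] && [ "$2" -eq 512 ]; then echo 512n
    else echo unknown
    fi
}

sector_mode 512 4096     # -> 512e
```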

Date - 10-03-2020 10:49 - 11-03-2020 00:08

Last Updated - 11-03-2020 04:53

Stellar Cloud master node critical (Resolved)

Affecting Server - Virtualizor Cloud | Priority - Critical


After applying the best-known configuration on the Stellar Cloud master node, the kernel block module produced a segfault and caused the collapse of a few mission-critical recovery procedures.

We are investigating this issue, please stand by.

12:33 mq_deadline enforced for additional parallelisation of nvme disk access.

12:40 Segfault (segmentation fault) of blk_mq_deadline reported in logs, and a meltdown of segment migration processes began. This caused a slowdown of the system and the eventual halting of all disk processes.

12:45 Sysops proceeded with manual recovery commands (reverting the configuration and restoring access), but unfortunately the deadlock was not resolved.

12:55 System sent to gracefully reboot, waiting for VMs to shut down.

13:40 System rebooted; we are proceeding with preparations for disk recovery. This requires turning off all virtual machines to prevent writes, so we can take a consistent image and move it to another drive.

17:30 No known safe solution found so far. Before disaster recovery (restoring backups), we are going to try a soft migration, shutting down instances one by one and restarting them on new drives. Please stand by.

20:30 We are starting the recovery program. Each VPS will be inaccessible for a period of up to 5 minutes (depending on the size of its LV). The control panel is disabled in the meantime so the migration can be performed seamlessly.

21:55 Due to consistency checking and hashing, migration takes on average 65 seconds of downtime per 10 GiB. That leaves 22 minutes for Port systems, 11 minutes for Station, 6 minutes for Outpost, 3 minutes for Base and 1 minute for Camp.
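The per-plan figures above are simple proportional arithmetic on the 65 s per 10 GiB rate. A sketch of the calculation (the plan disk sizes are not stated in this notice, so the 200 GiB figure below is hypothetical):

```shell
# Estimate migration downtime in whole minutes at ~65 seconds per 10 GiB,
# rounded up. $1 = volume size in GiB.
est_downtime_min() {
    echo $(( ($1 * 65 / 10 + 59) / 60 ))
}

est_downtime_min 200     # hypothetical 200 GiB volume -> 22
```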

02:20 All systems moved and control panel recovered. Performing final cleanup and closing the outage ticket

Customers have been provided with compensation of 1 week of free additional time.

Date - 08-03-2020 12:33 - 11-03-2020 04:49

Last Updated - 11-03-2020 04:49

System maintenance (Resolved)

Affecting Server - Virtualizor Cloud | Priority - High


To improve the Stellar Cloud service, we will perform hardware upgrades on the related node systems.
EPYC cloud servers, together with their instances, will have to be shut down for the period of maintenance.
This upgrade is required to migrate customer data safely to newer NVMe drives, which will provide double the performance of the current system.

Please note that during upgrade maintenance any access to the virtualized instances and the control panel will be unavailable.

The expected downtime for this maintenance is 1 hour.

 

All time is CET (Central European Time)

07:40 Shutdown scheduled at 08:26 (it will take approximately 4 minutes to shut down the VMs on each node).

08:26 More time required for the upgrade; extended by 1 hour.

10:30 Extended by a further 20 minutes.

10:50 Awaiting VM boot.

11:30 VMs booted.

Date - 08-03-2020 08:30 - 08-03-2020 12:13

Last Updated - 08-03-2020 12:13

Stellar Cloud outage (Resolved)

Affecting Server - Virtualizor Cloud | Priority - Critical


Investigating memory problems on Stellar Cloud node.

Date - 18-12-2019 03:01 - 18-12-2019 03:23

Last Updated - 18-12-2019 03:31

Planned Stellar Cloud upgrade (Resolved)

Affecting Server - Virtualizor Cloud | Priority - Critical


We will perform planned upgrades of the cloud system on 16.12.2019 at 22:00 CET. Due to the nature of SecureVM, we will have to shut down the systems during the upgrade, as the procedure requires a reboot of the nodes.

Please accept our apologies for the inconvenience. Thank you for your understanding and patience.

EDIT:

Upgrade completed successfully.

Date - 16-12-2019 22:00 - 16-12-2019 23:23

Last Updated - 16-12-2019 23:23

10 minute downtime (Resolved)

Affecting System - Networking | Priority - Critical


A border switch reboot caused an uplink failure; a simultaneous backup uplink failure resulted in a 10-minute outage until paths were resolved correctly. We have fixed this issue permanently by changing recovery strategies and replacing the faulty equipment.

Date - 13-12-2019 00:35 - 13-12-2019 22:27

Last Updated - 13-12-2019 22:27

Intermittent network issues (Resolved)

Affecting System - Networking | Priority - Critical


One of the switches serving some customer connections failed. The switch was replaced with a hot-swapped spare; re-cabling and issue identification took most of the time, but by 19:40 most connections had been recovered.

Date - 13-12-2019 17:15 - 13-12-2019 21:14

Last Updated - 13-12-2019 22:25

Security compromise (Resolved)

Affecting Other - KVM VPS Xeon | Priority - Critical


The VDS node vds-5 has unfortunately been compromised (via IPMI). We are shutting down the node and will move all clients to a new node restored from the last daily backup.
No other node or cluster has been compromised. As a precaution we are renewing all crypto keys and shutting down the node for good (we have the capacity to run the VPS on other systems) to wipe it and reset it clean.

An internal audit has commenced. There is no clear explanation of how the hack happened (the only one is that someone found a way to circumvent IPMI security). We are working with the authorities on a resolution.
Cluster logging unfortunately got overwhelmed with data at that moment, and the reboot event was lost. We will put the highest priority on preventing such problems in the future.

On 28/08/2019 systems were exposed to a hacking entity via the compromised VPS node vds-5.eu.azar-a.net.
The exposure happened at the hardware level via access to an accidentally available IPMI management interface of the host node.
An internal IPMI upgrade script left the IPMI exposed to the web, circumventing VLAN security.
The circumstances of the breach are still under investigation, however your system has been restored to last known good backup configuration.
IPMI had the latest upgrades installed and is not known to be vulnerable.
The IPMI password has been checked for leak exposure and had enough entropy to protect access.
As per our policy we are resetting and changing all static passwords and keys.
Due to the nature of the occurrence we strongly suggest you do the same or even reinstall/redeploy completely.
Even if your system is encrypted, if you entered the password between 28/08/2019 07:00 and 09:30 (until we turned off the system), the password is no longer safe and the data may be compromised.

Technical Investigation:
The perpetrator accessed the system twice. The first time, on 27/08/2019 at around 17:50, they saw a locked system and decided not to get involved.
The second time, on 28/08/2019 at around 07:00, the criminal accessed the system and hard-reset it to boot into an ISO/IMG/init. Within roughly 15 minutes the hacker most probably had full (root-level) access to the system.
At around 07:20 the hacker booted the system back up. At 08:20 our administrators started investigating a suspicious VPS node reporting no logs; this coincided with customer tickets reporting reboots.
Unfortunately our internal logging system, being overwhelmed during this period, failed to send active notifications, and our external logging system erroneously reported the node in question as up and running.

To prevent such occurrence in the future the following has been done:
- Immediately: All passwords have been reset.
- Immediately: All cluster keys have been reset.
- Immediately: Old node keys have been removed from all systems.
- Immediately: All IPMI devices manually checked for compliance and security.
- Immediately: Reported to Authorities.
- Scheduled 1d: External logging partners are going to be expanded to 2N+1 with our shadow nodes testing connectivity to our systems.
- Scheduled 1d: Internal logging system rework for extra notification capabilities.
- Scheduled 2d: All IPMI devices (including customers' to exclude possible new vulnerability exploit) will be immediately moved behind VPN connections only.
- Scheduled 7d: All internal systems pentested by our security team.

Data Protection Note: According to our Privacy Policy and General Terms and Conditions (II.VI./ Data Processing) we have reported this incident to NAIH (National Data Protection Authority). The incident scope is wide and potentially harmful.
Please contact dpo@serverastra.com for further information.

We apologize for the inconvenience and thank you for your patience and understanding!
If you have further questions, please do not hesitate to contact our support team.

Date - 27-08-2019 09:42 - 29-08-2019 00:24

Last Updated - 29-08-2019 07:12

Stellar Cloud network issue (Resolved)

Affecting Server - Virtualizor Cloud | Priority - Critical


Problems detected with Stellar systems. We are investigating the source.

Update: One cloud system has been recovered from an NMI generated by an undocumented bug in the PCIe subsystem. We strongly suggest customers check their file systems. On the bright side, the recovery put the newest performance and security patches to work, which makes our services more secure and fast.

Date - 02-07-2019 18:36 - 02-07-2019 20:37

Last Updated - 02-07-2019 20:37

VPS Cluster Maintenance (Resolved)

Affecting System - KVM VPS | Priority - High


We expect a scheduled maintenance of at most 1 hour, to improve performance, between 20:00 and 23:00 on Tuesday the 4th of June, on a VPS cluster running KVM VPS servers.

During the maintenance period, some nodes and VPS may be shut down and unavailable.

Date - 04-06-2019 20:00 - 04-06-2019 22:05

Last Updated - 03-06-2019 20:18

IPv6 subnet malfunction (Resolved)

Affecting Other - Stellar Cloud | Priority - High


We detected an issue with IPv6 subnet assignment. Specifically, assigning a subnet breaks the IPv6 connectivity of the Cloud instance.

A resolution is in the works.

Date - 15-03-2019 19:59 - 03-06-2019 20:14

Last Updated - 03-06-2019 20:13

Network upgrade (Resolved)

Affecting Other - Network and Cloud | Priority - High


We inform our customers that, in order to improve the quality and safety of the ServerAstra Stellar Cloud and Dedicated servers platform (Budapest, Asztalos S. út 13. DC2), our technicians will upgrade the network subsystem.

The maintenance is scheduled to happen during:

2019-02-01 20:00 CET - 23:59 CET

Due to the redundant architecture of the network, the maintenance will result in only a minimal outage of some systems (a maximum of 5 minutes) due to reconfiguration of some of the protection systems.
The following types of systems are expected to have outages: EX91, EX101, EX111, IPMI, Stellar Cloud.
Other systems may experience intermittent downtime of 15 seconds during the reconfiguration events.
During the maintenance period, the support service will pay special attention to managing any outages that occur and fault reports received.

If despite the safety measures, you experience any problem during the period of these works, please report it using the client's area ticket system on our website https://serverastra.com/billing/

Should the website be inaccessible you can contact us:
 Using our Social network profiles:
  - Twitter https://twitter.com/ServerAstra
  - Facebook https://www.facebook.com/serverastra
  - Google+ https://plus.google.com/+ServerAstracomBudapest

 Using phone:
  - Get a callback by sending the word "OUTAGE" by SMS to telephone number +36 30 502-0221

We kindly ask you to review your contact data in the client's area so you can be sure we will be able to identify you.
Thank you for your patience and understanding

Date - 01-02-2019 20:00 - 02-02-2019 17:14

Last Updated - 31-01-2019 19:56

Problems with the cloud storage (Resolved)

Affecting System - Stellar Cloud | Priority - Critical


03:00 Backup software tried to snapshot the wrong volume, leading to full disk space usage.

11:22 At the moment the cloud is experiencing problems due to an error in the backup software; we are performing emergency recovery of the system and apologize for the inconvenience.

12:53 Performing urgent shutdown to recover volumes.

13:33 Extensive recovery performed.

15:05 We continue monitoring and restoring systems.

16:05 All systems restored, with fsck run on servers that do not have a high security level. If your system has encryption or a high-security level, you have to perform the fsck yourself (fsck -y /dev/vda1 usually suffices). If you are running a non-Linux OS like FreeBSD or Windows, you need to perform the filesystem check appropriate to your filesystem.

For Windows:

chkdsk C: /f /r /x

For FreeBSD you have to boot into single-user mode, or set the background filesystem check at boot to forced mode

For UFS:

fsck -y /dev/vtbd0pX

For ZFS:

zpool scrub poolname

16:23 We have opened a ticket with the backup software vendor to resolve the issue. This thread is now closed, as restoration was marked as finished. If you have problems, issues or complaints regarding this incident, please report them to our support team. We wish you a Happy New Year and great holidays.

Date - 01-01-2019 03:00 - 01-01-2019 16:26

Last Updated - 01-01-2019 16:26

Power maintenance (Resolved)

Affecting Other - Power supplies | Priority - Low


We inform our customers that in order to improve the quality and safety of ServerAstra Adatpark platform (Budapest, Asztalos S. út 13. DC2) our technicians will perform power supply maintenance tasks.

The scheduling of maintenance periods is as follows:
2019-01-22 06:00 CET - 18:00 CET

Due to the redundant architecture of the network, the maintenance efforts will not result in any planned outage for either single-supply or multi-supply devices.
During the maintenance period, the support service will pay special attention to managing any outages that occur and fault reports received in connection with the maintenance of the power supply facilities.

If despite the safety measures, you experience any problem during the period of these works, please report it using the client's area ticket system on our website https://serverastra.com/billing/
Should the website be inaccessible you can contact us:
 Using our Social network profiles:
  - Twitter https://twitter.com/ServerAstra
  - Facebook https://www.facebook.com/serverastra
  - Google+ https://plus.google.com/+ServerAstracomBudapest
 Using phone:
  - Get a callback by sending the word "OUTAGE" by SMS to telephone number +36 30 502-0221

We kindly ask you to review your contact data in the client area so we can be sure we will be able to identify you.
Thank you for your patience and understanding.

Date - 22-01-2019 06:00 - 31-01-2019 19:57

Last Updated - 29-12-2018 12:46

Power maintenance (Resolved)

Affecting Other - Power mains | Priority - Low


We inform our customers that in order to improve the quality and safety of ServerAstra Adatpark platform (Budapest, Asztalos S. út 13. DC2) our technicians will perform power supply maintenance tasks.

The scheduling of maintenance periods is as follows:
2019-01-23 06:00 CET - 18:00 CET

Due to the redundant architecture of our power infrastructure, the maintenance will not result in any planned outage for either single-supply or multi-supply devices.
During the maintenance period our support service will pay special attention to managing any outages and fault reports connected with the power supply maintenance.

If, despite the safety measures, you experience any problems during these works, please report them using the client area ticket system on our website https://serverastra.com/billing/
Should the website be inaccessible you can contact us:
 Using our Social network profiles:
  - Twitter https://twitter.com/ServerAstra
  - Facebook https://www.facebook.com/serverastra
  - Google+ https://plus.google.com/+ServerAstracomBudapest
 Using phone:
  - Get a callback by sending the word "OUTAGE" by SMS to telephone number +36 30 502-0221

We kindly ask you to review your contact data in the client area so we can be sure we will be able to identify you.
Thank you for your patience and understanding.

Date - 23-01-2019 06:00 - 31-01-2019 19:57

Last Updated - 29-12-2018 12:46

IPv6 intermittent problem (Resolved)

Affecting Other - Network / Core | Priority - High


We have received multiple reports of issues accessing IPv6 resources. Our team is investigating the problem.

Date - 19-10-2018 07:22 - 19-10-2018 11:10

Last Updated - 19-10-2018 11:11

Maintenance works (Resolved)

Affecting Other - Network | Priority - Medium


We hereby inform you and your staff that, in order to improve the quality and operational safety of the ServerAstra Network Dataplex, our technicians will perform IP-network maintenance tasks
on October 18, 2018 from 00:00 to 05:00.
Due to the redundant structure of the network, the maintenance works will not result in a scheduled service outage, but IP connections may be interrupted for a few seconds while routes change.
Non-redundant connections will be briefly interrupted during the maintenance.

Date - 18-10-2018 00:00 - 19-10-2018 11:12

Last Updated - 17-10-2018 00:09

Variable window security fix (Resolved)

Affecting Other - Linux, Windows | Priority - Critical


Following the recent discovery of a critical security flaw in Intel CPUs, we are preparing an urgent KPTI (also known as KAISER or Meltdown) patch for our Intel-based VPS systems.
Our VPS systems will be upgraded as soon as the patches are published.
This means a precise downtime schedule is not possible, but we will do our best to keep customers informed through social media.
We advise customers running on Intel hardware to update the Linux and Windows systems under their control as soon as possible to prevent possible privilege escalation.

There is no fixed window for this patch sequence. The upgrade is scheduled for the next 6 days but will be prolonged if necessary, so that we can match the kernel as closely as possible to the current release and avoid cloud stability issues.
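Once patched kernels land, Linux guests can confirm whether KPTI is active; a quick check (the sysfs path is standard on kernels shipping the Meltdown mitigation; the function name is our own illustration):

```shell
# Print the kernel's reported Meltdown mitigation status.
# Patched kernels expose it under /sys and report e.g. "Mitigation: PTI".
meltdown_status() {
    f="${1:-/sys/devices/system/cpu/vulnerabilities/meltdown}"
    if [ -r "$f" ]; then
        cat "$f"
    else
        # Older patched kernels: look for the KPTI banner in the boot log.
        echo "status file not found; try: dmesg | grep -i 'page table isolation'"
    fi
}

meltdown_status
```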

Date - 04-01-2018 01:07 - 10-01-2018 20:23

Last Updated - 04-01-2018 01:19

Maintenance on the new cloud system (Resolved)

Affecting Other - VPS | Priority - High


To perform maintenance on the system we will reboot node-1.cloud.budapest.hungary.beta.serverastra.com.
This will restart the VPSes hosted on that server. You can check whether your system runs on the affected node via the VPS control panel.

Date - 17-12-2017 22:00 - 17-12-2017 22:10

Last Updated - 04-01-2018 01:07

vds.1.eu.azar-a.net IO issues (Resolved)

Affecting Other - vds.1.eu.azar-a.net | Priority - Critical


We have experienced IO issues on vds.1.eu.azar-a.net.
The VPS 1 server is currently down and an investigation is ongoing.
Please stand by; we will keep you updated.

UPDATE:
The system has been recovered; a hardware upgrade will be scheduled within 6 hours.
Please stand by; we will keep you updated.

Affected services:

VPS1 hosted VPS servers, VPS panel

Date - 14-03-2017 10:40 - 14-03-2017 11:13

Last Updated - 14-03-2017 11:00

vds.1.eu.azar-a.net IO issues (Resolved)

Affecting Other - vds.1.eu.azar-a.net | Priority - Critical


We have experienced IO issues on vds.1.eu.azar-a.net.
The VPS 1 server is currently down and an investigation is ongoing.
Please stand by; we will keep you updated.

UPDATE:
The system has been recovered; a hardware upgrade will be scheduled within 6 hours.
Please stand by; we will keep you updated.

Affected services:

VPS1 hosted VPS servers, VPS panel

Date - 07-03-2017 19:00 - 07-03-2017 20:19

Last Updated - 07-03-2017 20:20

Instance servers upgrade (Resolved)

Affecting Other - Cloud maintenance | Priority - Critical


All VPS nodes in our cluster are going to be upgraded to ensure proper security, performance and quality of service.
Because of this, instances will be shut down for at least 30 minutes and up to 1 hour.
We are currently performing the necessary backups to ensure restoration after the upgrade.
If you have any other questions, please don't hesitate to contact our Support Team.

Date - 01-03-2017 01:00 - 01-03-2017 05:13

Last Updated - 28-02-2017 20:37

System maintenance (Resolved)

Affecting Other - vds.1 | Priority - High


VDS node 1 has to be replaced ASAP.
Work starts at 23-12-2015 10:00

Date - 23-12-2015 10:00 - 22-09-2016 14:43

Last Updated - 23-12-2015 07:24

VPS node 1 disruption (Resolved)

Affecting Other - vds.1 | Priority - Critical


System malfunction detected at 05:17 23/12/2015.
By 07:05 system was fixed.

Date - 23-12-2015 05:17 - 23-12-2015 07:05

Last Updated - 23-12-2015 07:22

DDoS (Resolved)

Affecting Other - Network | Priority - Critical


A DDoS attack occurred, limiting our network capabilities. The attack was mitigated within 15 minutes.

Date - 17-01-2014 00:00 - 17-01-2014 00:00

Last Updated - 27-04-2014 12:03

Power failure Circuit A (Resolved)

Affecting Other - Power | Priority - Critical


Our administrators reported a failure on power circuit A; some systems and part of the network were affected. Reconnection was completed as soon as the check of the second circuit finished.

Date - 27-04-2014 06:25 - 27-04-2014 10:25

Last Updated - 27-04-2014 12:01

DDoS (Resolved)

Affecting System - Network | Priority - Critical


32-34 Gbit/s UDP zero-length packet attack, currently mitigated.
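For reference, zero-length UDP floods can be filtered at the network edge by matching the minimal datagram size; a generic iptables sketch assuming plain IPv4 (illustrative only, not necessarily the rules our engineers deployed):

```shell
# A UDP datagram with no payload over IPv4 is exactly 28 bytes:
# 20-byte IPv4 header + 8-byte UDP header.
ZERO_LEN_UDP=$((20 + 8))
echo "$ZERO_LEN_UDP"

# Drop inbound UDP packets of exactly that length (run as root,
# and adapt the chain/interface to your own topology):
#   iptables -A INPUT -p udp -m length --length "$ZERO_LEN_UDP" -j DROP
```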

 

Date - 17-10-2013 00:44 - 17-10-2013 01:27

Last Updated - 27-04-2014 12:01

DDoS (Resolved)

Affecting Other - Network | Priority - Critical


A multi-gigabit NDP DDoS attack was reported on several subnets at our disposal. The offending IPs and rogue ASNs were filtered.

Date - 02-03-2014 00:00 - 02-03-2014 00:00

Last Updated - 27-04-2014 10:46

DDoS (Resolved)

Affecting Other - Network | Priority - Critical


Multiple UDP DDoS attacks occurred, starting at 04:00 and ending at 16:00. All targeted the internal network IPs of our infrastructure, affecting overall network performance.

 

The problem has been resolved at the borders.

Date - 19-04-2014 00:00 - 19-04-2014 00:00

Last Updated - 27-04-2014 10:40

Power maintenance (Resolved)

Affecting Other - Power | Priority - Medium


Power maintenance is scheduled due to recent power failure, the downtime will be 15 minutes per system with 1 PSU. Systems with more than 1 PSU won't be affected.

Date - 04-05-2014 06:00 - 19-06-2015 02:17

Last Updated - 27-04-2014 10:33

Network outage (Resolved)

Affecting Other - Network | Priority - Critical


A network reconfiguration inbound from our upstream broke the routing of our network. Our administrators have manually set up correct routes for the time being, until the night of 11.02.2014 when the permanent fix will be applied.

Date - 10-02-2014 16:59 - 10-02-2014 17:10

Last Updated - 10-02-2014 17:16

Network outage (Resolved)

Affecting System - Networking | Priority - Critical


Network outage detected.

A 12 Gbit/s DDoS attack took our network routers down for 27 minutes. Our network has been protected with the Azar-A Firewall system to withstand the attack. Some network degradation may be observed for another 6 hours.

Date - 16-06-2013 20:13 - 16-06-2013 21:59

Last Updated - 16-06-2013 21:59

Network maintenance (Resolved)

Affecting System - Internal Networking, Backup system | Priority - High


On June 15th, 2013 at 05:00, maintenance works will be performed on the power connectors in our datacenter.

During this maintenance, servers with a single PSU will be briefly disconnected, for up to 10 minutes, while being rewired to another PDU line.

There will also be a short (1 minute) disruption of network and backup services in our system.

If your system is unreachable after the maintenance, please contact our support team as soon as possible.

We apologize for the inconvenience.

Date - 15-06-2013 05:00 - 27-04-2014 10:33

Last Updated - 12-06-2013 09:40

Network maintenance (Resolved)

Affecting System - Network | Priority - High


Dear Customers,

 

We've scheduled an urgent update on 2013.04.15 at 00.00 UTC

The downtime will last 30 minutes while we upgrade our router to new firmware.

This scheduled upgrade will bring new features to our systems and harden existing ones.

 

Kind Regards,

Azar-A Team

Date - 15-04-2013 00:00 - 15-04-2013 02:55

Last Updated - 14-04-2013 19:19

Data corruption prevention (Resolved)

Affecting Other - node1.azar-a.net | Priority - Critical


To prevent data corruption indicated by kernel error messages, KVM guests on vps1.azar-a.net were restarted.

We apologize for the service interruption. A graceful shutdown was performed on the machines that were still running.

Date - 25-01-2013 00:15 - 25-01-2013 00:19

Last Updated - 25-01-2013 00:19

Reported vps system downtime (Resolved)

Affecting Other - vps1 node problem | Priority - Critical


We apologize for the vps1 node downtime.

Our system administrators fixed the problem in time.

Currently no issues are reported. Please check your virtual hard drives, as the system had to be shut down on an emergency basis.

Date - 24-08-2011 20:58 - 24-08-2011 21:10

Last Updated - 24-08-2011 22:13