Our upstream provider has scheduled a critical maintenance window for Wednesday, July 1st. The purpose of this maintenance window is to update the servers to the latest firmware and BIOS on our main boards and RAID controllers. These updates are required to address various known issues with the previous versions.
They expect the actual downtime of each server to be approximately 30 minutes, and this may occur at any time during the maintenance window. Our upstream provider has reviewed and organized the technical procedures to minimize the impact on our service, and senior technical staff will be overseeing this work for the duration of the maintenance window.
Our upstream provider has scheduled an emergency maintenance window for Wednesday, June 10th. This maintenance is required in order to correct an issue with a malfunctioning network appliance. They intend to correct this issue by moving the affected network segment to a different device.
This may affect connectivity for up to 30 minutes. We will be performing some upgrades to the billing system on Monday. This will not affect your websites, only access to the billing system.
Intermittent Network Issues - resolved
Planned Network Maintenance - resolved
Discontinuation of server pl3 - resolved
Sending Mail Problem - resolved
Discontinuation of server pl2 - resolved
Server Discontinuation - resolved
Company Website Move - resolved
Control Panel Upgrades - resolved
DNS Configuration Problem - resolved
Mail will not send to Yahoo or AOL addresses - resolved
Server Problems - resolved
Hardware Upgrades - resolved
Support Email Issues - resolved
Server Migration - resolved
Server Migration to New Location - resolved
Short Connectivity Loss - resolved
Server Maintenance - resolved
Non-Standard Maintenance - resolved
Email server spam blocked accidentally - resolved
Server Relocation - resolved
Processor Upgrade - resolved
Manual update of name servers required - resolved
Cloud Maintenance - Minor Downtime - resolved
Hard Drive Upgrades - resolved
Scheduled Maintenance - resolved
Mail issues with Yahoo and Shaw - resolved
Mail Sending Issues - resolved
Standard Network Maintenance - resolved
Server Connectivity Issues - resolved
Billing Maintenance - resolved
Scheduled maintenance - resolved
The server has been the target of a DDoS attack. For this reason it was null-routed at the datacenter. We have temporarily enabled DDoS protection. Because of DNS caching, it can take a few hours for the service to become available to all website visitors. Due to a failure of the caching device, the file system is currently operating in degraded mode. We are working on the issue. Everything is done; resuming normal operations. We are currently experiencing a fault with one of our core routers. This may lead to brief connectivity issues.
Please bear with us. The router encountered an error caused by a firmware bug.
The MySQL server is restarting now; it will take a few minutes to resume normal operations. The MySQL server is down for emergency maintenance. Estimated downtime is 1 hour. The MySQL server is down for a software upgrade. Estimated downtime is within 15 minutes. We are performing emergency file system maintenance. Service interruptions may occur. Estimated completion time is within 2 hours.
The server is down for emergency filesystem maintenance. We expect to resume normal operations within one hour. The server is down due to a DDoS attack. We are experiencing technical difficulties on the MySQL server.
It should be working properly again within 30 minutes. The services are down due to emergency file system maintenance. Estimated downtime is within two hours. The server is down for a hard drive replacement. We expect maintenance to complete within 1 hour. The web server is down for a cPanel upgrade. We expect the upgrade process to complete in 1 hour. The MySQL server is down for maintenance. It will be back online within 2 hours.
We are recovering the filesystem after a crash. The web server has been taken offline to reduce server load and allow the RAID resync to complete faster. Mail, FTP and cPanel are online and accessible.
Anticipated downtime is 5 hours. A DDoS attack is running against the server customers. A hard drive failure occurred on the server customers2. The server is experiencing hard drive problems. One of the hard disks has crashed; we are currently restoring its contents from backup. Due to a DDoS attack we have set up temporary routing through a firewall on another server. This will affect FTP and email availability. Everything will be reverted back to normal as soon as the attack is mitigated.
Maintenance is being performed on the server; ETA is 2 hours. The server is down due to a hardware crash. We are working to resolve the issue. We are performing a cPanel update on the server. Access to the control panel may be limited during the update. File system maintenance is being performed on the server. Websites will be unavailable during this process. Please accept our apologies for the inconvenience. Historic updates below are for information purposes only; please refer to the updates above for the most recent info.
Over the past 24 hours there has been a widespread DDoS campaign targeting the popular CMS WordPress, in particular the file used to log into the admin area of the script: wp-login.php. Large numbers of IP addresses from across the world have been attacking any such files they can find, resulting in massive issues for web hosts and the stability of web servers. Whilst we have done our best to manage the situation, a number of our servers have been hit hard, and some have even been taken offline by the attack.
To get everything back online on the servers that have been hit the hardest (at the time of writing these are the servers Hermes, Kipling and Wilde), we have had to take the only option left available to us: disabling access to all wp-login.php requests.
We understand that this will mean a small number of genuine users will be locked out of the server for a short period (60 mins), but to put it into context, this measure has also allowed us to block the malicious IP addresses that were causing the issues, at a rate of around 1, per minute. That should give you an idea of the sort of numbers we are having to deal with across multiple servers. Your IP address - this can be found at www. Also, if you want more details on the nature and scale of the attack, you can see how the rest of the web hosting community is dealing with the issue in this forum thread.
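The blocking described above boils down to counting hits per IP inside a sliding time window. Here is a minimal sketch of the idea; the threshold and window values are made-up numbers for illustration, not our actual firewall settings:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Flag IPs that hit wp-login.php more than `limit` times in `window` seconds."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests
        self.blocked = set()

    def request(self, ip, now=None):
        """Record one request; return False if the IP is (now) blocked."""
        now = time.time() if now is None else now
        if ip in self.blocked:
            return False
        q = self.hits[ip]
        q.append(now)
        # drop timestamps that have fallen outside the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) > self.limit:
            self.blocked.add(ip)
            return False
        return True
```

In a real deployment the blocked set would be pushed into the server firewall rather than held in memory, but the counting logic is the same.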
The Hermes server has just been hit by the wp-login.php attack. Please see the "Wilde" network issue further down the page for full details on the issue. The httpd service on this server has gone down, causing websites on the server to become inaccessible.
We are aware of the issue and are working on it. We are now seeing the wp-login.php attack. More details can be found here. We are still working towards a solution on Wilde and really appreciate your patience on this matter. We are aware of this issue and are currently working to get the service back online. We have brought httpd back online and the server is stable. We will continue to monitor the server for the next few hours.
The purpose of this maintenance is to optimize the network configuration. The servers which will be involved in the scheduled maintenance are: You should not experience downtime for longer than 5 minutes.
The Nash server is down due to a network config error and we are working on getting it back up. Update: Update 3: The server kraken. We are now facing a network issue with the server. Our network engineers are working on this, and it should be fixed as soon as possible. We will keep you updated. A hacked account on the server has flooded the mail queue and rendered it unresponsive.
Thank you for your patience. The server is back up after a reboot and an FSCK; apologies for any inconvenience caused. We do not anticipate that you will experience any downtime, and there should be no interruptions to your service. The server has gone unresponsive; we are checking it and will get it back up ASAP. The server is back online. We have temporarily stopped the httpd and mysql services to keep the load down whilst we investigate what caused the server to go down.
We have re-enabled httpd and mysql on the server. We found an abusive user on the server which we believe caused it to become overloaded; the user has been suspended and load has gone back to normal. Unfortunately we are facing a network issue with our shared server hermes. We hope that the issue will be resolved as soon as possible.
We really do apologize for the inconvenience this might have caused you. Further updates will be posted here. We are currently facing a network issue with our shared server shaw. Our network engineers are working on this and it should be fixed as soon as possible. We have contacted the data center and will update when we hear more details. Total downtime should be around 2 hours. Thanks for your continued support. The network is experiencing issues again; onsite techs are already working on it.
The VPS node kraken. The server wilde. We will shortly be rebooting the server to complete a kernel upgrade on kipling. An abusive account on the server was sending large amounts of spam, overloading the server. The account has been suspended and the server load is back to normal. The server hermes. We have issued a reboot request.
Server is back up after a reboot. We will keep a close eye on it for the next few hours. We have discovered a corrupt memory module in the server, so we will be taking it down at 1am EST on October 2nd to replace the faulty module. Downtime is expected to be between 60 - minutes. Total downtime was between 15 minutes and 1 hour, depending on the individual VPS.
The server is back up after a reboot; we will keep an eye on it for the next few hours. One of the drives in the RAID array is showing signs of deterioration, so we are going to replace it to be on the safe side. We have found 2 bad drives in the RAID array; we are replacing them one by one.
During this time you may experience sub-optimal performance on the server. The 1st drive has been replaced and the RAID rebuild completed; we will now proceed with the 2nd drive. One of the disks in the RAID array is failing, which is causing instability on the server. We have contacted the DC to get the bad drive replaced. To complete the kernel upgrades each server will need to be rebooted, which will cause sites and email to go offline until the server has booted up into the new kernel.
Downtime should be no more than a few minutes, unless we run into issues. The servers will be upgraded one by one and we will be starting work at 3. We are not able to access the server via IPMI either, so we are having to escalate the issue to the datacenter; we will update this page when we get more info. We have resolved the issue with BIND and have reverted back to it; please open a support ticket if you have any issues.
The server baracus. The nameservers on the server hermes. We are working on it. We will keep this page updated when we receive further information. We found that an automated change to the named configuration caused the problem. We have reverted to a backup of the named configuration. Servers in the Atlanta datacenter were unavailable for a few minutes earlier this evening when the datacenter was under attack from a massive DDoS.
This has been nullified and the network is accessible again; apologies for any inconvenience caused. This should now be resolved; please let us know if it persists.
The DC is checking on this now and we will update you when we get further information. The drive replacement and RAID rebuild are now complete.
This has been pushed back to the early hours of Sunday morning, 15th April. The server will be taken down to replace the drive, and when it is back online the RAID array will be rebuilt, which will cause sub-optimal performance until the rebuild has completed.
One of our upstream providers, Cogent, is doing some maintenance on their circuit on April 14th. Start time: The official cause of the outage we received from the DC was: The datacenter has provided us with the following information regarding the Atlanta network outage experienced on May 5th:
Thus we had to close the layer 2 link to Cogent. They anticipated completion of the maintenance by 7: Update, Thurs May 5th: The network issue reappeared for around 45 minutes this morning, meaning we were unable to access our client area to answer tickets.
Updates were posted on our Facebook and Twitter pages. Apologies for the unacceptable Atlanta network performance of the past few days; unfortunately it is out of our control and we are at the mercy of the datacentre, and we will be taking it up with their management to see if we can get any further updates for you. This is mainly related to routes that have not fully propagated across the service providers around the world. Thus, if a packet goes through a network which does not have updated records, it loops until the packet's TTL has expired.
We are working on this issue along with the DC techs and will try to get it resolved as quickly as possible. The updates will follow on our forums. The network loop has been observed through nLayer and Tinet. That means that if packets from your end devices are routed through these carriers, the packets will loop until their TTL expires. We are still working with the DC techs to get this resolved, but everything is fine inside our network.
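The looping behaviour described above can be illustrated with a toy forwarding model (hypothetical router names, not our real topology): each hop decrements the packet's TTL, and a packet caught between two routers holding stale routes is dropped when the TTL reaches zero:

```python
def forward(routes, src, dst, ttl=64):
    """Follow next-hop entries from src toward dst, decrementing TTL per hop.

    routes maps router -> {destination: next_hop}.
    Returns (hops taken, delivered?).
    """
    hops, current = [], src
    while ttl > 0:
        if current == dst:
            return hops, True           # packet delivered
        current = routes[current][dst]  # next hop for this destination
        hops.append(current)
        ttl -= 1
    return hops, False                  # TTL expired: packet dropped

# Stale routes: r1 and r2 each think the other is the way to "web"
looped = {"r1": {"web": "r2"}, "r2": {"web": "r1"}}
# Fully propagated routes: r1 hands off to r2, which delivers
fixed = {"r1": {"web": "r2"}, "r2": {"web": "web"}}
```

Once the updated routes propagate (the `fixed` table), the same packet reaches its destination in two hops instead of bouncing until its TTL runs out.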
We just need the routes to be fully propagated to the other service providers around the world. We are experiencing sporadic outages due to a network issue at the Atlanta datacenter. This appears to be affecting a handful of servers and is causing intermittent connections.
The status updates provided by the DC indicate the issue is caused by a routing loop affecting multiple IP ranges; they are working on it at present. We will post updates here as and when we get them.
This is only affecting servers located in Atlanta, US. The following servers will have their kernels updated in the early hours of Wednesday 21st March. Hyrule came back online earlier today after a hard reboot; apologies for any inconvenience caused. The servers will be rebooted to complete the update, so will be down for 10 - 20 minutes whilst they boot back up.
We are currently upgrading the kernels on the following servers; each server will need to be rebooted to complete the upgrade. Downtime will be 10 - 20 minutes per server: We will be migrating data on the server nash.
The migration has now been completed; please let us know if you have any issues. The server is currently running sluggishly due to a large number of MySQL queries in "locked" status, causing high memory usage. We are working on this now. Load has been brought under control and sites are loading normally. Please let us know if you have any further issues. MySQL will be upgraded to version 5. The PHP rebuild will take no longer than 1 hour.
Load on this server is very high, causing it to be very sluggish; we are checking it now. We have temporarily stopped httpd on the server to enable us to investigate what is causing the load. We are keeping httpd offline for the moment as it allows us to work faster on the server, without experiencing the sluggishness. The server is now back online; please open a support ticket if you have any issues. We will shortly be taking the server down to replace a failing drive in the RAID array.
Once the drive is replaced, the RAID array will be rebuilt. This is a resource-intensive process and may result in a bit of sluggishness until it has completed. We have network maintenance scheduled. Maintenance will be performed on hardware carrying some Atlanta upstream connections on Friday, December 30th. Downtime is not anticipated, but some sub-optimal routing may be experienced as traffic is shifted between providers.
Updates to follow. The server is now back online; please open a ticket if you are still facing issues. The server will be rebooted at 2. This update is needed to resolve the performance issues you have been experiencing lately - one of the drives is being recognized as an IDE drive rather than SATA, which is causing a large IO bottleneck; the kernel upgrade will fix this issue. The server is up but running very slowly; we are working on it.
The cPanel backup script was causing the load issues; we are manually killing off the processes. Load is now coming down; if you give it 10 minutes or so everything should be back to normal. We will be rebooting the server in the early hours of Thursday, November 3rd to boot it into the CloudLinux kernel. Downtime will be 10 - 30 minutes. We will try to get them to provide us with KVM access so we can work on the server ourselves.
There was a problem with grub on the server. Reinstalling grub resolved part of the problem. Grub could not find the kernel files, which was the second part of the problem and had to be fixed as well. As this was being done remotely through KVM, it took a bit longer to rectify the issue. The server is now up, but we will need to change the network settings to bring it back onto the network, as it was connected to the PXE VLAN to load the rescue image.
The server is now back online and sites are accessible; please let us know if you continue to have any issues. Final update: all sites should now be back online. If you are still having problems, please open a support ticket and we will check it out for you.
Apologies for the extended downtime; we will send an incident report to all affected clients in the next couple of days. We will update this report with more info as we get it. We have found an issue with one of the drives, so we are rebooting the server and forcing an FSCK.
During this time the server will be offline. The FSCK can take anything from 30 minutes to a number of hours, depending on how much data there is to check. We will update you with more info when available. The RAID card is failing to be detected; we are replacing it with a new one.
The main drive is completely dead and cannot be seen by the server. We have replaced it and we are installing the OS on it. Once that is done we will install cPanel and restore the data from the backup drive. The server is back online; we are now proceeding to install cPanel on the server.
Please hold on. As there are around accounts to be copied over and the size of the data to be copied is large, it may take a few more hours. We estimate there is around GB left to be copied.
The sync is almost complete; accounts are being synced in order and we are currently on the letter "Y". Once the sync is complete we will repair any permission errors, and that should then see sites come back online. The sync has been completed and some sites are back online. Others showing either an Internal Server Error or a Forbidden message will need their file permissions fixed. We are currently running a script that will do this; like the sync, it will run through the accounts in alphabetical order.
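For those curious, the permission-repair pass amounts to something like the following sketch. This assumes the usual cPanel defaults of 0755 for directories and 0644 for files; the path layout is illustrative, not our actual script:

```python
import os

def fix_permissions(home_root):
    """Walk each account's files, resetting dirs to 0755 and files to 0644.

    Accounts are processed in alphabetical order, mirroring the sync.
    """
    for account in sorted(os.listdir(home_root)):
        top = os.path.join(home_root, account)
        if not os.path.isdir(top):
            continue
        for dirpath, dirnames, filenames in os.walk(top):
            os.chmod(dirpath, 0o755)       # directories need the execute bit
            for name in filenames:
                os.chmod(os.path.join(dirpath, name), 0o644)
```

Files synced with permissions tighter than 0644 produce the Forbidden errors mentioned above; resetting them to the defaults lets Apache read the files again.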
All accounts with a cPanel username starting with A - D should now be back online. We are still having issues with the permissions; we are working on this and will update you soon. We have been notified by the datacenter of scheduled network maintenance that will be carried out. Further details follow: Maintenance Notification: Atlanta Time: Upstream scheduled maintenance Impacted services: We have been informed of scheduled maintenance that will briefly impact our Cogent upstream connection on the Silver network in Atlanta on Sunday, October 30th.
During this time period, customers on the Silver network may experience some sub-optimal routing as traffic shifts to other providers. However, changes need to be made to the network switch, and the engineers will be carrying this out shortly. This should resolve the issues users have been facing over the past few days; downtime is expected to be 2 - 4 hours. We will get the server back online and then do more checks at off-peak hours.
We are aware that there are issues on the Wilde server and have reported this to the data center. We will update when we have more information. We are aware of performance issues on the Wilde server, causing websites to load very slowly. We are aware of the issue and are checking it now. We are aware that there is an issue on the Wilde server; we are in touch with the data center to fix this.
We will update when we have more information. We are aware that there is an issue on the Nash server; we have contacted the data center for more information and will update when we hear back. We will be rebooting all of our shared servers onto a new kernel in the coming days, so websites will be unavailable during each reboot.
Maintenance will be performed on some of our core hardware on Thursday, July 21st, requiring a restart of this hardware. This will impact traffic passing through several providers, causing traffic to reroute through other upstream connections.
It will also impact traffic passing through the nLayer network, causing rerouting on that network as well. Other upstreams will not be impacted. Some disruption may be experienced during this maintenance as rerouting takes effect. Work on this issue is scheduled. We are aware that there are currently problems on the UK server, Nash. We have contacted the data center about this for more information and will update as soon as we receive the info.
Thank you for your patience. This appears to be due to an NFS mount being down, and as such cPanel hangs when trying to connect to it. We have opened a ticket with the datacenter about this and should have it resolved soon. If you are still unable to access it, please clear your cache or try accessing from a different browser.
If you continue to have issues, please open a support ticket. We need to recalculate the quota on the servers Bullseye, Goron, Hyrule and Shaw, as they are showing incorrect disk usage data. We are going to be doing this during off-peak hours, at 1am EST on Saturday 26th Feb, and downtime for each server should be no more than 30 minutes.
It would seem that there is a problem with the server Nash currently. We are in touch with the data centre to try to ascertain the problem and get an estimated time of resolution. The data centre became aware of power issues affecting UK servers. The power has now been restored and the servers are coming back online. We will keep you updated here when Nash is fully powered and restored.
The quota on the Goron server has become damaged and needs to be manually rebuilt. During this time websites and email will be down. The quota on the Hermes server has become damaged and needs to be manually rebuilt. We are currently working on fixing the issue. To stop this from happening to us, and you! The servers usually only take a couple of minutes to boot back up, but can take up to 1 hour in some cases.
Some users have opened tickets reporting websites being unavailable and connections dropping out. This is only affecting certain users, so if it is affecting you, could we please ask you to do the following on your computer, and then send the results to our technical support department so we can track down the issue: To perform a tracert on a Windows machine: During the reboot sites will be unavailable; it should not be for more than 20 - 30 minutes. We are going to be moving the server wilde.
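When reading the tracert results users send in, the main thing we look for is a repeated hop, which indicates a routing loop like the one described earlier. A small sketch of that check (the parsing and the sample addresses in the usage below are illustrative, and it assumes the hop address is the last field of each hop line):

```python
def find_loop(tracert_output):
    """Return the first hop address that repeats in tracert output, or None.

    Expects the plain-text output of Windows `tracert` (or Unix traceroute),
    taking the last whitespace-separated field of each hop line as the address.
    """
    seen = set()
    for line in tracert_output.splitlines():
        fields = line.split()
        # hop lines start with the hop number; skip headers and blank lines
        if not fields or not fields[0].isdigit():
            continue
        addr = fields[-1]
        if addr == "*":        # a timed-out hop, not a loop
            continue
        if addr in seen:
            return addr
        seen.add(addr)
    return None
```

If the same address shows up at two different hop numbers, the packet is bouncing between routers, which is exactly the symptom of the stale-route issue above.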
There is expected to be around 1 hour of downtime for sites and email on the server wilde. This has now been completed; we are now going to tweak the firewall settings on the server.
We are currently having a few issues with our credit card processing system, so we are unable to process card payments. We are aware of the issue and are working with the gateway to get it resolved. We are aware that since around 7pm GMT, clients have been experiencing loading issues when logging into their cPanel: only half of the page will load, if that. We are aware of the problem and are currently working to get this fixed. We will update this page when we have more information. This has now been resolved; please contact us if you face any further issues.
Sites are currently not resolving correctly; we are aware of this and are currently working on the server. There was a firewall misconfiguration on the server. Once all the zones have been reloaded, the sites will be back online. All sites are now back online; please let us know if you have any further issues. Status Update 1: Status Update 2: We have moved all sites back to the old servers whilst we sort out the routing issues on the IP blocks. If you still see a "Site is being moved" page, you can either edit your.
Status Update 3: The IP routing issue has been resolved and we are now continuing with the migration; apologies for any inconvenience this has caused you so far. Status Update 4: A small number of scripts are having problems with blank pages and deprecated functions. This is due to the PHP version used on the new servers; we are currently manually compiling PHP and downgrading it back to the version we had on the old servers. During this time you may encounter a few "internal server error" messages when visiting your sites; this is temporary and will be fixed once the recompile is complete.
Status Update 5: The migration is now complete for all servers except Bullseye, which is still transferring. If you find you are missing any files or websites, please let us know ASAP and we will re-sync everything for you. We are currently installing RVSitebuilder on all the servers and will then proceed with the Fantastico installation, so both of the addons should be installed today. During the next few days, the following servers will be migrated to new hardware in a new data center:
We are also utilizing an external backup system that will provide us not only with account level backups, but also full file system backups so we can get back up and running much quicker if any disaster occurs!
The geographical location of the new data center is in Florida, so any of you that are based in the Florida area may even be able to see the new machines from your bedroom window. We will send you all an email when the migration is complete, but until then, please let us know if you have any problems. This server migration does not affect clients hosted on our UK servers or dedicated server clients.
We have suspended the user and freed up some space. The MySQL problems were due to databases becoming corrupt when there was no more disk space for them to work in; if you still face problems, please let us know. A number of users are reporting "Cannot connect to Database" errors on their database-driven websites; we are aware of the problem and are currently investigating.
After setting the PTR record for the server and tweaking the hostname, Apache is refusing to restart, and as such sites are not resolving. We are working on the problem and will have everything back up ASAP.
The Hyrule server has become unresponsive; we are currently working on the server and will post status updates on this page. Sites on the server nash. This has been resolved; please let us know if you have any further issues.
The server hyrule. We are going to reboot the server and will update this thread when we have more information. The server is now back online after a reboot; we are investigating what caused the issue. We are going to be taking the server down to replace a drive; during this time the D9 Hosting websites will be unavailable. The server is used for the D9 client area, so no customer websites will be affected.