Driven by high-profile cyber attacks and data losses, disaster recovery has become common parlance among businesses all over the world. But this hasn’t always been the case. Disaster recovery has undergone decades of evolution to reach its current state, and there is plenty of development still to come.

1970s
The rise of digital technologies also led to the rise of technological failures. Prior to this, the majority of businesses held paper records, which, although susceptible to fire and theft, didn’t depend on reliable IT infrastructure. As businesses began to embrace the mobility and storage benefits of digital tech, they became more aware of the potential disruption caused by technology downtime. The 1970s saw the emergence of the first dedicated disaster recovery firms.
These early firms offered three types of recovery site: hot, warm and cold. Hot sites duplicate a company’s entire infrastructure, allowing work to continue immediately when disaster strikes; understandably, they are also extremely expensive. Warm sites, on the other hand, allow only some core processes to be resumed immediately. Cold sites do not allow the immediate resumption of any services, but they do provide an alternative workspace in the event of a disaster striking the main office.
1980s
Regulations were introduced in the US in 1983 stipulating that national banks must have a testable backup plan. Other industry verticals soon followed suit, driving further growth in the disaster recovery industry.
1990s
The development of three-tier architecture separated data from the application layer and the user interface, making data maintenance and backup far easier.
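To see why, consider a minimal sketch of the idea in Python, using the built-in sqlite3 module to stand in for the data tier. The CustomerStore class and file names here are illustrative assumptions, not any particular product of the era:

```python
import sqlite3

# Data tier: persistence lives behind a single interface, so it can be
# backed up without touching the application or presentation tiers.
class CustomerStore:
    def __init__(self, path="customers.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add(self, name):
        self.conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
        self.conn.commit()

    def backup(self, dest_path):
        # The entire data tier is copied in one step, independently of
        # the code running above it.
        dest = sqlite3.connect(dest_path)
        self.conn.backup(dest)
        dest.close()

# Application tier: business logic knows nothing about storage details.
def register_customer(store, name):
    store.add(name.strip().title())

# Presentation tier (here, just the console) drives the tiers below it.
if __name__ == "__main__":
    store = CustomerStore()
    register_customer(store, "ada lovelace")
    store.backup("customers_backup.db")
```

Because the data sits behind its own boundary, a backup becomes a single, well-defined operation on one tier, which is the property that made three-tier systems easier to protect than the monolithic applications that preceded them.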
2000s
The 11th September attacks on the World Trade Center had a profound impact on disaster recovery strategy, both in the US and abroad. Following the

View Entire Article on ComparetheCloud.com