The Royal Bank of Scotland (RBS) is a global financial player with a balance sheet roughly the size of the UK’s entire economy. When you’re that big, all eyes are on you.
Usually, the attention isn’t cause for alarm. In June 2012, however, that spotlight turned unwelcome when the bank suffered a massive IT failure.
The failure caused a service disruption that left 6.5 million customers unable to access their accounts and ultimately led to a staggering £56m in fines.
What events led up to this colossal collapse, and what lessons can we learn from it? Today, we’re diving into the details you need to know.
Behind the Failure of RBS: What Happened?
In June 2012, RBS initiated a software upgrade on its banking systems. Specifically, the bank planned to upgrade its CA-7 batch scheduling software, which controlled its payment processing platform.
By all accounts, it should have been a simple and straightforward process. However, what happened next was catastrophic.
Just hours after the upgrade took place, RBS’s systems lost the ability to process payments for both individual and business customers. The central issue occurred in the computer system responsible for transferring money between accounts overnight.
The failure affected not only RBS but also the other banks in the greater RBS Group, including NatWest and Ulster Bank.
Customers Lose Access to Critical Banking Data
The disruption continued for several weeks, affecting the personal and professional lives of the bank’s millions of customers. At the time, the Financial Conduct Authority (FCA) reported that customers lost the ability to:
- Log in to online banking platforms for account access
- Use the ATM to obtain their account balance
- Make their mortgage payments
- Access their cash held in foreign countries
- Maintain their payroll commitments
- Finalize audited accounts
As the bank tried to correct the issue, more problems and setbacks occurred. Customers began to notice incorrect debit and credit interest amounts, as well as inaccurate bank statements.
Two years later, in November 2014, the FCA fined RBS Group £42m, and the Prudential Regulation Authority imposed a further £14m fine, bringing the total to £56m.
5 Lessons Learned From This Software Upgrade Failure
The RBS failure underscores the importance of approaching each IT project with proper planning and precaution. Here are a few lessons organizations can learn from these mistakes as they plan their own software upgrade.
1. Ensure Clear Processes and Procedures
The failure at RBS happened on a Tuesday. However, the technicians who performed the software upgrade weren’t able to perform a successful batch run until the following Friday.
By the time this happened, millions of customers were already frustrated, unable to process their payments. The backlog had started, and they were playing catch-up from there.
We don’t know whether RBS had clear procedures governing the upgrade steps to follow, but it’s apparent that either those procedures lacked clarity or the team deviated from them.
We always recommend outlining clear procedures and holding teams accountable to them because this ensures that you’re able to spot problems much sooner.
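To make this concrete, here is a minimal sketch, assuming Python-based tooling, of how an upgrade runbook can be encoded as an ordered checklist with explicit verification at each step. The step names and checks are illustrative placeholders, not RBS’s actual procedures; the point is that a written, executable sequence makes deviations and failures visible immediately.

```python
# Minimal sketch: an upgrade runbook as an explicit, ordered checklist.
# Step names and checks are illustrative, not RBS's actual procedures.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RunbookStep:
    name: str
    check: Callable[[], bool]            # verification that must pass before moving on
    rollback_hint: str = "re-run previous step"

def execute_runbook(steps: List[RunbookStep]) -> bool:
    """Run each step's verification in order; stop at the first failure."""
    for i, step in enumerate(steps, start=1):
        passed = step.check()
        print(f"Step {i}: {step.name} -> {'OK' if passed else 'FAILED'}")
        if not passed:
            print(f"  Halt upgrade. Suggested action: {step.rollback_hint}")
            return False
    return True

# Illustrative checks only -- real ones would query the scheduler and databases.
steps = [
    RunbookStep("Back up current scheduler configuration", lambda: True),
    RunbookStep("Confirm rollback package is staged", lambda: True),
    RunbookStep("Smoke-test batch run in staging", lambda: True),
    RunbookStep("Obtain go/no-go sign-off", lambda: True),
]

if execute_runbook(steps):
    print("All checks passed -- proceed with the production upgrade.")
```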
2. Remember that Testing is Key
Any time you’re planning a major IT upgrade, it’s best to test the new solution in a non-production environment that mirrors your live setup. This lets you gauge its behavior behind the scenes without affecting your live system.
Had the upgraded CA-7 system been tested in such an environment, the issues would likely have surfaced before go-live. Instead, the system went live, and the problems appeared almost immediately.
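As an illustration, here is a minimal sketch of the kind of reconciliation-style smoke test you might run against a staging copy of an overnight payment batch. The process_overnight_batch function is a toy stand-in for whatever your real scheduler drives; it is not CA-7’s API. The idea is simply to verify, before go-live, that a sample batch still produces balanced postings.

```python
# Minimal sketch of a staging smoke test for an overnight payment batch.
# process_overnight_batch() is a toy stand-in for the real system, NOT CA-7's API.

def process_overnight_batch(payments):
    """Toy stand-in: returns the ledger postings the batch would produce."""
    postings = []
    for p in payments:
        postings.append({"account": p["from"], "amount": -p["amount"]})
        postings.append({"account": p["to"], "amount": p["amount"]})
    return postings

def test_batch_balances():
    payments = [
        {"from": "ACME Payroll", "to": "Employee 001", "amount": 1500},
        {"from": "ACME Payroll", "to": "Employee 002", "amount": 1750},
    ]
    postings = process_overnight_batch(payments)

    # Every payment should produce exactly one debit and one credit...
    assert len(postings) == 2 * len(payments)
    # ...and the ledger should net to zero once the batch completes.
    assert sum(entry["amount"] for entry in postings) == 0

if __name__ == "__main__":
    test_batch_balances()
    print("Staging batch smoke test passed.")
```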
3. Work With Experts
The partner you choose to implement a software upgrade matters. They should have a deep understanding of the software suite, as well as your company as a whole.
When the failure first occurred, reports speculated that RBS’s recent outsourcing efforts might be to blame. Before the upgrade, the bank had outsourced some of its CA-7 support staff to India.
However, RBS consistently denied any connection, and the speculation was never confirmed. In any case, the installer at the helm of your upgrade should possess the knowledge and skill set required to carry it out successfully.
4. Use Risk Assessments to Uncover Vulnerabilities
Upon deployment, the software upgrade created a litany of issues. Sufficient system testing and a risk assessment could have helped the team prevent these issues before they occurred.
A formal risk assessment is a critical step for any IT change, even ones that seem routine. This process can help your company better understand the problems that might occur and how to respond if they do.
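One lightweight way to make a risk assessment actionable is a simple likelihood-times-impact score. The sketch below uses an illustrative risk register (not anything taken from the RBS incident) and flags any item whose score crosses a threshold for a documented mitigation.

```python
# Minimal sketch: scoring upgrade risks by likelihood x impact (1-5 scales).
# The risk register entries are illustrative, not taken from the RBS incident.

RISK_THRESHOLD = 12  # anything at or above this score needs a documented mitigation

risks = [
    {"risk": "Batch scheduler config corrupted during upgrade", "likelihood": 2, "impact": 5},
    {"risk": "Rollback package untested",                       "likelihood": 3, "impact": 5},
    {"risk": "Support team unfamiliar with new version",        "likelihood": 4, "impact": 3},
]

for r in risks:
    score = r["likelihood"] * r["impact"]
    flag = "MITIGATE" if score >= RISK_THRESHOLD else "monitor"
    print(f"{score:>2}  {flag:<8}  {r['risk']}")
```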
5. Create Continuity and Recovery Plans
The RBS failure was shocking on its own, but the weeks-long recovery effort was equally staggering. What should have been a quick fix dragged on far too long, adding to the overall damage.
Especially for a business of this size, the delay was inexcusable.
Business continuity and recovery plans establish the key steps to follow if your project suddenly goes south. Following them can help you get back on your feet more quickly.
A continuity plan outlines the strategy you’ll use to maintain operations in the midst of a failure, along with the processes you’ll rely on to minimize service outages and downtime while the issue is resolved.
Similarly, a recovery plan details your strategy for restoring data and critical applications if a failure or disaster affects your systems.
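As a rough illustration of how a recovery plan can be wired into an upgrade, the sketch below implements a post-upgrade health gate with a hard time budget: if payments aren’t flowing normally within the agreed window, the rollback is triggered rather than letting a backlog build for weeks. The check_payments_healthy and roll_back_upgrade functions are hypothetical placeholders for your own monitoring and rollback tooling.

```python
# Minimal sketch: a post-upgrade health gate with a hard time budget.
# check_payments_healthy() and roll_back_upgrade() are hypothetical placeholders
# for whatever monitoring and rollback tooling your environment provides.

import time

def check_payments_healthy() -> bool:
    """Placeholder: in practice, query monitoring for failed or backlogged payments."""
    return False

def roll_back_upgrade() -> None:
    """Placeholder: in practice, restore the pre-upgrade software and configuration."""
    print("Health gate failed -- rolling back to the pre-upgrade version.")

def post_upgrade_gate(time_budget_s: float, poll_interval_s: float) -> None:
    """Accept the upgrade only if payments recover within the agreed time budget."""
    deadline = time.monotonic() + time_budget_s
    while time.monotonic() < deadline:
        if check_payments_healthy():
            print("Payments processing normally -- upgrade accepted.")
            return
        time.sleep(poll_interval_s)
    roll_back_upgrade()  # the recovery plan kicks in before a backlog builds up

if __name__ == "__main__":
    # Short intervals for demonstration; a real gate might allow 30-60 minutes.
    post_upgrade_gate(time_budget_s=5, poll_interval_s=1)
```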
The Path to a Successful IT Upgrade
Software upgrades are essential to keeping your ERP system running at top capacity. Such upgrades should be approached with the same level of care and planning as an ERP implementation.
In the case of the RBS failure, it’s clear that an airtight plan should have been in place before the upgrade began. In other words, the bank should have defined its upgrade approach as well as the steps it would take to roll back the changes if something went awry.
As you move forward with your own ERP upgrade, we’re here to help the journey go much more smoothly. Contact our ERP consultants below today for a free consultation.