
Visa blames ‘rare’ networking glitch for payments borkage

VISA HAS BLAMED a “rare” networking switch failure for the prolonged outage it suffered at the beginning of June. 

The admission was made in a letter to Nicky Morgan MP, chair of the House of Commons Treasury Committee, in response to a series of questions about the meltdown, which caused Visa credit and debit card payments to be rejected at retailers across the UK throughout the afternoon of Friday 1 June.

Visa Europe now admits that the disruption affected payments for more than 10 hours, from 2.35pm on the Friday until 12.45am in the early hours of Saturday 2 June, although most of the problems were cleared up by 8.15pm.

“Visa connects the financial institutions that issue cards to their customers (issuers) with other financial companies who ensure that merchants are able to safely connect to the network (acquirers),” explained the organisation in its letter.

The company’s data centre operations team became aware of what it describes as a “partial degradation” in Visa’s processing system at 2.35pm.

“We immediately… initiated a response based on protocols we have in place for addressing any type of critical incident; the first step was a Technical Response Team assessment meeting. 

“Soon thereafter, we escalated the matter in alignment with our crisis management protocol. Ninety minutes after our first indication of a systems issue, and having confirmed the underlying facts as part of our crisis management protocols, we provided a public statement to the media,” the letter added.

It continued: “We operate two redundant data centres in the UK, meaning that either one can independently handle 100 per cent of the transactions for Visa in Europe.

“In normal circumstances, the systems are synchronised, and either centre can take over from the other immediately. The centres communicate with each other through messages regarding the system status, in order to remain synchronised.

“Each centre has built into it multiple forms of backup in equipment and controls. Specifically relevant to this incident, each data centre includes two core switches… a primary switch and a secondary switch. If the primary switch fails, in normal operation the backup switch would take over.

“In this instance, a component within a switch in our primary data centre suffered a very rare partial failure which prevented the backup switch from activating.

“As a result, it took far longer than it normally would to isolate the system at the primary data centre; in the interim, the malfunctioning system at the primary data centre continued to try to synchronise messages with the secondary site. This created a backlog of messages at the secondary data centre, which, in turn, slowed down that site’s ability to process incoming transactions.”
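Visa's letter describes the failure only at a high level and gives no detail of its software or monitoring. Purely to illustrate the failure mode it describes, the hypothetical Python sketch below shows how a "partial" fault, one that still answers a simple liveness check, can stop a naive failover rule from ever promoting the backup, while undelivered synchronisation messages pile up as a backlog for the secondary site. Every name, number and rule here is an assumption for illustration, not a description of Visa's systems.

    import random

    # Hypothetical sketch only: Visa's letter gives no implementation detail.
    # A "partial failure" here means the primary switch still answers liveness
    # checks, so a naive failover rule never promotes the backup.

    class Switch:
        def __init__(self, name, degraded=False):
            self.name = name
            self.degraded = degraded  # drops a share of traffic but stays "up"

        def is_alive(self):
            # A coarse liveness probe cannot see the partial degradation.
            return True

        def forward(self, message):
            """Return True if the message was delivered."""
            if self.degraded:
                return random.random() < 0.6  # illustrative: ~40% of messages lost
            return True

    def route(primary, backup, message):
        """Naive failover: only use the backup when the primary looks dead."""
        if primary.is_alive():  # a partial failure never trips this check
            return primary.forward(message)
        return backup.forward(message)

    # Every synchronisation message the degraded primary fails to deliver is
    # queued for retry, building a backlog the secondary site must work through.
    primary = Switch("primary-core-switch", degraded=True)
    backup = Switch("backup-core-switch")
    backlog = []

    for i in range(1000):
        if not route(primary, backup, f"sync-{i}"):
            backlog.append(i)

    print(f"sync messages awaiting redelivery: {len(backlog)}")

In Visa's own account the remedy was to isolate the faulty system at the primary data centre, which took far longer than usual; the sketch only illustrates the general point that a component can fail partially in a way a simple health check does not detect.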

The glitch affected a minority of payment attempts, Visa claims. Failure rates fluctuated throughout the ten-hour period, but on average nine per cent of transactions failed to process on the cardholder's first attempt, the organisation said in its letter.

Disruption peaked, it added, between 3.05pm and 3.15pm, and again between 5.40pm and 6.30pm. At those times, an average of 35 per cent of transactions failed.

Since the failure, Visa asserted, it had updated its incident response processes “by applying any lessons learned”, following a post-mortem conducted to identify “all necessary steps to prevent a reoccurrence”.

The organisation adds that no cardholder should be charged for a transaction that did not complete, including instances where the transaction failed to process, but a ‘hold’ for a pending transaction was placed on the cardholder’s account by the issuer. µ


Source: The Inquirer

