These days we are all looking for faster routes to market for our software solutions, but for those of us working in payment systems the overriding factors are risk management and business continuity. Whether you need to add channels to your existing business to become an omni-channel retailer, or, as an issuer, add support for new payment functionality such as Host Card Emulation, it is vital that your core business continues to perform well. There are many approaches to managing the change you want to implement, but we’ll specifically address the main three in this article:
The ideal is that everyone in the chain required to implement a new feature works together. In practice this means getting teams of architects from the mainframe, middleware, network and infrastructure groups (and so on) to agree a solution and deliver it in a timely fashion, which becomes increasingly complex: all of these parties must plan the implementation of the new feature and confirm that their services can absorb the extra load. This leads to extremely long project timelines, and often a lot of slippage, even for a minor change (for example, supporting extra fuel data for customer verification after automatic number plate recognition), as the teams try to agree how the additional information should transit the system, be stored and be viewed, and often disagree on the way information is interchanged.
So whilst this puts the project timelines at risk, it will often produce a polished, professional end product in which the systems are easily supportable into the future and remain compatible with each other for some time to come.
When trying to implement new functionality, rather than have all the teams work simultaneously to pull off a change, with each piece formally documented, you can instead work at it piece by piece (e.g. change the authorisation platform, then the switch, then the network, then the PoS). In this way the risk to the full solution is minimised: rolling back any single change is much lower risk, and testing can be performed in isolation. Working in this way means that as each piece of functionality comes on-line you can verify that the existing functionality continues to process correctly; however, you won’t see whether it all works together to do what it should for a considerable amount of time. This means the project is again at risk of slippage, because the resources that made the changes will have been returned to the development pool some time ago, making it hard to schedule the fixes required.
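The piece-by-piece approach only works if each upgraded component stays backward compatible with the components not yet changed — for instance, by carrying the new data in an optional field that legacy components simply ignore. A minimal sketch of that idea, assuming a hypothetical JSON message shape (`build_auth_request`, the field names and the toy approval rule are illustrations, not a real payment format):

```python
import json


def build_auth_request(pan, amount, plate=None):
    """Build an authorisation request; the number-plate field is optional,
    so components that have not yet been upgraded can simply ignore it."""
    msg = {"pan": pan, "amount": amount}
    if plate is not None:
        msg["plate"] = plate  # new, optional extension field
    return json.dumps(msg)


def legacy_authorise(raw):
    """A not-yet-upgraded component: it parses only the fields it knows
    about and processes correctly whether or not the extension is present."""
    msg = json.loads(raw)
    return msg["amount"] <= 5000  # toy approval rule


# An upgraded PoS sends the extra field; the legacy authoriser still works.
ok = legacy_authorise(build_auth_request("4111111111111111", 4200, plate="AB12 CDE"))
```

Because the extension is additive, either end of the conversation can be upgraded first and rolled back independently.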
An interesting technique is to sit something in front of your core payments architecture to pull off the “new” traffic and process it in the way required. The major drawback is that monitoring the solution becomes harder to centralise, since you may well have added an additional payment switch; the benefit is that you can pilot newer payment methods more quickly, since the risk is much reduced. Say, as an issuer, we wanted to accept the fuel-card validation for a number plate: with an additional system sitting side by side, we can fire the standard authorisation requests into the legacy platform and simply perform the number-plate validation in a second, separate system in parallel. What this means is that, with little to no slow-down, both pieces of validation can be performed and the core payments infrastructure hasn’t been touched. This drastically reduces the risk to the core platforms and to the project plan; however, it increases the risk around monitoring the overall solution unless significant effort has been put into staff training.
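The side-by-side arrangement can be sketched as a small fan-out: the legacy platform authorises exactly as before, while a separate pilot service validates the number plate in parallel, and the front component combines the two answers. A minimal sketch, assuming hypothetical stand-ins (`legacy_authorise`, `validate_plate` and their toy rules are not real interfaces):

```python
from concurrent.futures import ThreadPoolExecutor


def legacy_authorise(request):
    # Stand-in for the untouched core platform's authorisation call.
    return {"approved": request["amount"] <= 5000}


def validate_plate(request):
    # Stand-in for the new side-by-side number-plate validation service.
    return {"plate_ok": request.get("plate", "").replace(" ", "").isalnum()}


def authorise(request):
    """Fan the request out to the core and the pilot system in parallel,
    then merge both results; neither path touches the other."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        core = pool.submit(legacy_authorise, request)
        pilot = pool.submit(validate_plate, request)
        return {**core.result(), **pilot.result()}


decision = authorise({"amount": 4200, "plate": "AB12 CDE"})
```

Because both calls run concurrently, the added validation costs roughly the latency of the slower path rather than the sum of the two.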
This is what we’ve seen happen most often: a switch (middleware of some sort) filters off the transactions whilst the new functionality is piloted, perhaps on a per-merchant basis or similar. Transactions are then processed with the extensions on a “pilot infrastructure” for some time whilst the network loads and message formats are proved. After that, either a project is run to upgrade the core platform whilst the pilot platform continues to process the load, or the pilot architecture is deemed “good enough” to be rolled into the core processing. Which happens is normally down to the risk of making a change to the core; often a few new extensions will be piloted together and then grouped for changes to be made on the mainframe or elsewhere.
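The per-merchant filtering a switch performs in this pattern amounts to a simple routing table consulted before a transaction is forwarded. A minimal sketch, assuming hypothetical merchant IDs and a hypothetical `route` function:

```python
# Merchants enrolled in the pilot; everyone else stays on the core platform.
PILOT_MERCHANTS = {"M-FUEL-001", "M-FUEL-002"}


def route(txn):
    """The switch filters transactions: pilot merchants are sent to the
    pilot infrastructure, all other traffic continues to the core."""
    if txn["merchant_id"] in PILOT_MERCHANTS:
        return "pilot"
    return "core"


# A pilot merchant's transaction is filtered off; others are untouched.
pilot_dest = route({"merchant_id": "M-FUEL-001"})
core_dest = route({"merchant_id": "M-GROCERY-042"})
```

Widening the pilot is then just a configuration change — adding merchant IDs to the set — rather than a change to the core platform.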