
The Continuing Evolution of Your IT Infrastructure

Strategically Speaking
Sep 26, 2014

Author: Ray Kline

I started my career in banking IT and operations in the late 80s. Over that time I’ve seen a quantum change in IT infrastructure and computing in general. One of my first jobs in IT was numbering punch cards as a proctor in my college’s mainframe operations department. In that job, the biggest fear was dropping a box of program cards that hadn’t yet been numbered. One of the most significant changes since those days is the shift in IT processing and infrastructure from in-house to outsourcing (now called cloud computing). This change started with core processing in the 90s, then progressed to item processing, and has now moved to the Windows environment. Jack Henry & Associates has been at the forefront of this shift, starting with its Broadway & Seymour acquisition, which marked the start of the OutLink Processing Services division we know today.

There are two main hurdles that must be overcome for a robust cloud computing model: communications and a centralized, shared processing platform. Both are imperative for the cloud to be accepted as a viable option. Core processing was a natural fit because it has historically had low bandwidth requirements. Shared computing is actually how core systems have always been designed, using mini-computing platforms such as the IBM Power 7 (iSeries) and RISC UNIX processors. Users access a central platform with a “dumb” terminal, so all of the processing happens on the central platform rather than locally on the workstation. In this case, data and applications do not traverse the WAN; the only thing sent over the WAN is the screen display of the data.

Windows, on the other hand, started out as a client/server processing model, which was historically much more bandwidth intensive. In this case, a full client PC logs into a server remotely, but the majority of the processing occurs on the local client. In this model, the data and application traverse the WAN back and forth to the PC client, which then processes the data locally. This is also referred to as a thick-client processing model, and by the very nature of its architecture it requires much more bandwidth – especially in the case of item or document imaging applications. However, with the advent of server virtualization and thin-client technology, we are now able to provide a shared, centralized Windows computing platform with a predictable, relatively low bandwidth footprint. This first took shape in software as a service (SaaS) solutions, where a single application is provided.

The modern IT environment is moving one step further to infrastructure as a service (IaaS), where most, if not all, of a financial institution’s Windows servers are provided in a cloud environment. This can include domain controllers, application servers, email servers, SQL servers, and more. As we continue to define the many benefits associated with this type of delivery model, our customers are starting to see the value of moving their Windows server infrastructure into the cloud. In fact, this is almost the same conversation we’ve been having with our customers for several years now concerning core and item processing; in that arena, outsourcing or cloud is now accepted as the best way to implement those systems. Twenty years ago this was definitely not the case, with most customers running their core systems in-house. For a different way to think of it, consider companies in the early 1900s converting from generating their own power to buying it from an electric utility.

Many financial institutions find it challenging to keep pace with the latest technology, regulatory compliance, and customer service demands. If you decide to shop for an IaaS provider, look for one that enables you to seamlessly move Windows- and Linux-based IT infrastructures to a private cloud as an on-demand service, leveraging its many benefits. This will move you away from the in-house revolving-door hardware model and significantly reduce the risk associated with internal IT regulatory compliance and disaster avoidance.

There are many benefits to an IaaS model, including:

  • Helps FIs keep up with and manage technological advancements, regulatory compliance directives, and customer service demands. 
  • Allows your FI to stop chasing hardware and frees internal IT resources to focus on more strategic initiatives. 
  • Delivers infrastructure resources as a fully outsourced on-demand service. 
  • Provides extended expertise beyond your own IT staff. 
  • Piggybacks on the success of your existing outsourcing/cloud initiatives, e.g., core or item processing. 
  • Enhances business recovery by mitigating the impact of any local disaster. 
  • Controls costs, since you typically only pay for what you use.
  • Offers greater security because your IT infrastructure is presumably being managed by an organization with vast experience.

Maybe you shouldn't keep your head in the clouds, but you may want to consider keeping your IT infrastructure there.
