In this episode Enrico Signoretti talks with Piergiorgio Spagnolatti about the interaction of the storage backend with front-end applications, focusing on banking industry systems in particular.
Guest
Piergiorgio Spagnolatti has been working for Banca Popolare di Sondrio since 1995. Coming from a dev background, he worked his way into the system administration space and embraced a growing number of topics spanning the whole infrastructure and security world. He now leads the infrastructure teams for the bank, and while his passion is still rooted in infrastructure, he is keen to explore and dive into every component of the full stack. He also has a deep passion for technology in general, and for tech people, which led him to co-found the Italian chapter of the VMware User Group (VMUG) back in 2010, then to join the VMUG Board of Directors in 2013, and ultimately to become VMUG Vice President in 2016. He has received the vExpert award for several years, and loves public speaking at IT events. He also has a private life, in which he humbly tries to grow his knowledge of particle physics and quantum mechanics, while trying to finish writing a novel that has been in the works for the past 20 years.
Transcript
Enrico Signoretti: Welcome everybody. This is Voices in Data Storage, brought to you by GigaOm. I’m your host, Enrico Signoretti, and my guest for this episode is Piergiorgio Spagnolatti. He is the Head of Infrastructure at Banca Popolare di Sondrio, a prominent bank in the north of Italy with more than 500 agencies and branch offices located in major cities as well as smaller towns and villages. Piergiorgio, ‘PG’ to his friends, has been promoting and leading the evolution of IT processes and infrastructure in his organization for more than 20 years now. A real fan of technologies and modern computing models, of which he is a strong promoter, and the only [PhD] in his organization. And last but not least, he is Vice President of the VMware User Group (VMUG). Hi, PG. How are you?
Piergiorgio Spagnolatti: Hi, Rico. Thanks for having me today.
Thank you for your time. And by the way, did I miss anything about your professional profile? Because we are always together on social media talking about everything, but this is actually the first time I’ve checked your LinkedIn profile.
No, you didn't miss anything, and thank you for the great introduction as well.
Okay, really good. I had the idea of interviewing you after seeing a beautiful picture you shared with me a few days ago. It was of the first data center of Banca Popolare di Sondrio, in the ’60s if I remember correctly. Not only a nice picture, but it’s always amazing to take a look at the past, especially considering the pace of innovation in this space and how we interact with our banks today, right?
Yeah. It is a picture from a long time ago, and you can imagine punch cards and these fridge-like machines that we don't even know what they were used for. But it's been a long walk, and there has been a great evolution in terms of IT infrastructure and the bank’s services offered to our customers and internal users, of course.
Yeah. I know that you have everything. The mainframe is still there, right?
The mainframe is still there, and it still plays a big part in the overall service and application architecture of the bank. It's tied to many, many constraints that we have in the software arena in which we play, the Italian banking software space in particular. There is this kind of brake when it comes to deciding to get rid of legacy infrastructures and legacy applications, because the bank relies on them so much; it's not easy to just take a clean slate and start from scratch with new platforms and new application architectures.
Yes, but at the same time you have this nice web application. Most of your customers interact with you [using] a web application today, or even a mobile app.
Yeah, absolutely. Starting from, let me say, the late ’90s, we established an internet presence, in the early stages for the Italian market at least, and we went on from there, driving the innovation from our side and also listening to the needs of our customers. That ultimately ended in developing a number of internet-facing services that allow our customers to interact with the bank without going to the branch offices, while the branch offices are still there and still play a major role for our traditional customer base.
How do you manage the interaction that is still necessary between this front-end layer, on the web and with containers, and the back [end], where you still have the mainframe?
This is definitely one of the biggest challenges that we have, mainly because of the new kinds of architectures that tend to break down barriers and boundaries and allow for extreme scalability and extreme freedom of action in terms of, let's say, the number of environments that you have available for your web app developers, to give an example. The flip side of that is the legacy architecture, which is built upon another kind of bank, with schedules and service hours that don't match the 24/7 ‘always on’ approach of web applications. And you definitely have to have a predictable scalability model for the mainframe, which isn't the case for a web application in the first place.
Our approach is to mix and match a number of techniques to seamlessly integrate the traditional transactions that have to be integrated, in order to maximize the protection of the investment that we have made in the legacy applications, and at the same time to find out whether other application architectural models are available so that we can [provide] the scalability, security and ease of use for our customers. It's a constant challenge, a constant search in the market for specific architectural solutions that can allow us to bring the two worlds together.
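PG doesn't name a specific integration mechanism here, but a common way to bring the two worlds together is a thin facade that exposes a legacy transaction as a modern HTTP endpoint. The sketch below assumes Flask for the web layer; the route, the data shape and the stubbed mainframe call are hypothetical, not the bank's actual interfaces.

```python
# Minimal sketch: a REST facade in front of a legacy transaction.
# The mainframe call is a hypothetical stub, not a real interface.
from flask import Flask, jsonify

app = Flask(__name__)

def call_legacy_transaction(account_id: str) -> dict:
    """Hypothetical stand-in for a mainframe transaction, which a real
    system might invoke over MQ or a TCP gateway."""
    return {"account": account_id, "balance": "1234.56", "currency": "EUR"}

@app.route("/api/accounts/<account_id>/balance")
def balance(account_id: str):
    # Translate a modern HTTP request into a legacy transaction and return
    # JSON that web and mobile front ends can consume.
    return jsonify(call_legacy_transaction(account_id))

if __name__ == "__main__":
    app.run(port=8080)
```

A facade like this also gives you a natural place to add caching or throttling, which matters when the back end has fixed service hours and a predictable scalability model, as PG notes above.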
You mentioned the mainframe on one side and the leading edge, which is containers, on the other, but there are actually two other technologies that you didn't mention. I don't even know if we can define it as legacy, but there is all the rest of your infrastructure, which is probably made up of several different machines now. And on the other hand, I'm sure you are looking at cloud computing, right?
Yeah, absolutely. The funny thing is that this used to be the bleeding edge of our infrastructure; I mean virtual machines, for example. Over time they became the traditional workloads, not the legacy ones, because it sounds better, but that's pretty much what they came to be. We have, say, 1,500 VMs now, running the majority of internet and intranet services for our internal and external customers, and we are definitely exploring the chance of expanding into a hybrid cloud scenario, to take advantage of the value that the cloud can give us in terms of flexibility, scalability, geographic dispersion of your application architecture, and so on and so forth.
So, you have everything: the mainframe, then the VMware infrastructure, let's call it the more traditional on-premises server infrastructure, okay? And then containers, and now the cloud. But everything depends on how you manage data on top of it, and you have a few challenges, I think, not only with what everybody else is dealing with today, GDPR for example in Europe, but also banking regulations and such. So how do you manage this data layer on top, all this data moving from one silo to the other, and all the challenges that come with it?
There are a number of challenges in managing all the aspects that are tied to data management in general. Some of them are on the technical side, but over time I can safely say that those specific challenges can be solved by new platforms, new protocols and new paradigms that you can use for specific purposes. For example, we don't rely on all-flash storage on the mainframe because we just don't need it, but we rely massively on flash storage for the traditional workloads with VMs, because we need fast performance and low latency for online workloads. This is just an example of how you have to keep in mind the exact requirements that you have in your application layers.
But the really complex challenges come, as you said, from the regulatory environment and the laws that the bank has to respect in order to do its job. So it's not only GDPR; there are also EC rules and a number of regulations that come from the central authorities, generally speaking the ECB [European Central Bank]. There is strong attention from these central authorities to the way you deal with this specific topic. They have a strong focus on how you manage data and security, how you manage privacy, and how you manage the externalization of your internal services to cloud providers or traditional outsourcers.
On one hand, you have these regulators that know the market and know that they have to define boundaries to allow the banks, in my case, to operate safely. The flip side of that is that many of my colleagues, even in other banks, have a problem with the perception of security and privacy and how it goes along with the technology. The traditional general rule is that the bank cannot go to the cloud because of privacy and security concerns, but they end up being only concerns, because if you look at the contracts, if you look at the actual structure of the typical massive cloud provider, they go way beyond the departments that you might have in-house. It all boils down to really getting into the details of the contracts and the best practices that you can share with these actors, and to employing some kind of flexibility in your internal regulations and the external ones, to figure out if you are really compliant with the general regulations of the ECB, in our case.
You’re telling us something really, really interesting here. In practice, you're saying that the cloud provider has better security and better data governance than you can provide on your own, right?
Yes. I can claim that because I think you can safely say it just by looking at the numbers. Generally speaking, except for huge banks maybe, the scale that you have to deal with on a daily basis doesn't even compare to that of the global cloud providers. So you cannot industrialize these kinds of approaches to, say, privacy or security, and they have to, because they need to deal not only with banks but also with healthcare or oil and gas.
These are different kinds of markets, each with its own specific requirements, so the providers have to raise the bar when it comes to security and privacy, generally speaking. What the customer, the actual end user, has to do, in my opinion, is the appropriate due diligence, because you have to be aware of their practices; you cannot just rely on and trust the provider to do its job by default. You have to do proper due diligence and be a perfect auditor when it comes to running and controlling the services, so that they run as you expect. At the end of the day, in my opinion, the responsibility for the availability and performance of the services that you put in the cloud is still yours. You have a vested interest in making sure that the cloud provider is acting accordingly.
Right. So you have to be… it’s a two-phase thing. At the beginning, you have to design services in a way that doesn't rely too much on the infrastructure that the service provider has, so you design for failure. Okay.
Yeah, absolutely.
On the other side, you say, okay, but I want to check constantly if they are delivering on their promises.
Exactly. That's the point.
But who manages the integration between the cloud and the rest of the infrastructure? I know that you have a great capacity for infrastructure, so moving from the mainframe to a VMware kind of infrastructure, and then continuing the move to the cloud, is easy… actually, relatively easy, I would say. But providers like Amazon or Microsoft give you a lot of services, like databases and load balancers and such, so you don't have to build everything on your own, right? Do you have somebody who helps you on this journey?
Well, actually, no. And that's because the bank has traditionally leaned on this specific legacy infrastructure, which has a peculiar trait: the applications are tightly coupled to the underlying infrastructure. So you are used to having this kind of coupling, and then it comes to decoupling, starting with virtualization, then containers, then function as a service and serverless computing. The more you move towards these kinds of evolutions, the more you have to have your own understanding of your applications' nature and architecture, right? So this is something that you have to keep in your own know-how.
And ultimately that's the new kind of job that you have to do, because it doesn't matter that much where you put your workloads. What matters is whether you know exactly why you're making these specific choices: using that specific cloud provider, that specific application architecture, that specific middleware or development approach. I think this is a process that we have to go through as traditional IT managers.
It also helps to interact with the cloud, because you learn a lot of things from people who have to deal with infrastructure and services and middleware at scale. The ultimate result that you get from that is that you can probably scale down this approach and adopt at least some of the governance models they have, back in your own on-prem infrastructure and the architecture that you're using for your applications.
So you’re saying that you build the knowledge over time, and the idea is that now you understand what the next step could be, okay? It’s common sense, but it raises another question. Does the banking community somehow share this kind of common knowledge, or is it seen as a competitive advantage, so nobody wants to talk with the others?
Well, it's a mix of the two approaches. Traditionally, we have been participating in national, local and even international communities in this specific market, and there is a general consensus on the way you can share your issues, your requirements, your needs. Ultimately, this leads to a kind of standardization of the general needs that the typical bank has, at least in the EU. But there are also these new kinds of approaches, which tend to blend together the technology you're using and the services you provide, and so on, and this really becomes a competitive advantage.
So if you're using a platform, for example, to develop your mobile applications, and it allows you to go into production in weeks instead of months, which is the classic example, that is a competitive advantage, because your arena is not limited to the traditional customers that have accounts with your bank; you're facing the internet. That's the challenge, from the technical standpoint and the governance standpoint, and that's the opportunity. Definitely, nowadays we see a number of areas where my colleagues from other banks tend to hide their strategy because they want to keep their competitive advantage.
That sounds fair, I understand it. So you share the general knowledge, but keep the tip of the spear to yourselves, so you are sure you stay more competitive? Now, you have a massively distributed organization. At the beginning, we said that you have 500 branch offices and local agencies, and some of them are probably in the center of Milan, well connected and everything, but some of them are in the mountains, right? [In] very small villages, not very well connected. So when you talk about banking infrastructure with all these new services and fancy applications, what are the challenges of bringing the same kind of services to all your customers in the same way?
Absolutely, yes. You can easily guess that I never get bored, because of these challenges. But again, I see a two-phase approach. If you judge the reachability of a specific user base, whether internal or external, in rural areas for example, just by the bandwidth that it can rely on, you're not expanding and making your services evolve over time. Ultimately, what we need is to think backwards.
What if we redesigned this kind of service to take advantage of all the value that comes from Web 2.0, containerized architectures, lambda functions or whatever, and brought it back to a consumption model that mimics what the average user does at home with mobile interfaces, tablets and things like that? Ultimately, you end up relying on a number of technologies that are ready for internet consumers and might be ready for the internal customers as well.
And it comes down to one specific topic: how good you are with the technology you're using at figuring out how to make the service available with narrow bandwidth, with high latency, and in remote locations that don't have local support or anything like that. It teaches you a lot about how to optimize the resources you can rely on in locations where you cannot just buy more bandwidth or reduce the latency. Ultimately, it's a great lesson that you can re-apply to internet services as well.
To give you an example, we've been using HTTP compression for our web applications to the branches since the early 2000s. Then we decided to bring that approach to the internet-facing applications as well, even though we hadn't noticed any kind of slowdown or need in terms of compression and bandwidth reduction there. This is just an example to give you the idea that you go back and forth between the internet services that drive the innovation and the intranet services that drive the business; the one is constrained by technology, the other is driven by it. But ultimately you can get a lot out of this...
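As a concrete illustration of the compression technique PG mentions, here is a minimal sketch that negotiates gzip via the Accept-Encoding header, using only Python's standard library; the payload, route and port are illustrative, not the bank's setup.

```python
# Minimal sketch: HTTP response compression negotiated with the client.
import gzip
from http.server import BaseHTTPRequestHandler, HTTPServer

class CompressingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"message": "branch office report"}' * 100  # stand-in payload
        headers = {"Content-Type": "application/json"}
        # Compress only when the client advertises gzip support.
        if "gzip" in (self.headers.get("Accept-Encoding") or ""):
            body = gzip.compress(body)
            headers["Content-Encoding"] = "gzip"
        self.send_response(200)
        for name, value in headers.items():
            self.send_header(name, value)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), CompressingHandler).serve_forever()
```

On a narrow branch-office link, shrinking a text payload this way is often the cheapest optimization available, which is exactly the lesson PG describes carrying back to the internet-facing services.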
So in the end, it's all about continuous optimization. So you design a model...
Absolutely. Yes.
You design for it and want to get the best features, the best everything and then you optimize to make it available to the wider audience?
That's correct.
In this evolution of your architecture, what is the next step for your organization?
Well, the next step, at the edge of what we are doing at the infrastructural and architectural level, is very close to the boundary of the dev teams. We are really in this DevOps kind of phase, where we are trying to mix and match the knowledge that we have inside the bank to bring the major applications we have to another application environment. Basically, we are trying to define what's a good fit for a serverless architecture and what's a good fit for a container-based architecture, so that we can leverage the kind of packaging that comes with containers in general, which could give us free choice when it comes to deciding whether we are deploying a workload on-prem, deploying it to the cloud, or doing some kind of hybrid scenario.
Generally speaking, this requires a lot of work in refactoring the application architectures, and really having a good grasp of the points where your applications have to be stateful and where they can be stateless, which is ultimately what drives the speed at which you can move towards the containerized arena, and the containerized economy generally speaking.
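To make the statefulness point concrete, here is a minimal sketch of the refactoring PG describes: session state moves out of process memory into a shared store, so any container replica can serve any request. Redis, the key layout and the timeout are assumptions for illustration, not the bank's actual design.

```python
# Minimal sketch: externalizing session state so the web tier is stateless.
import json
import uuid

import redis  # assumed dependency: pip install redis

store = redis.Redis(host="localhost", port=6379)  # illustrative address

def create_session(user_id: str) -> str:
    """Persist session state in the shared store instead of a process-local
    dict, so the service can be scaled out or rescheduled freely."""
    session_id = str(uuid.uuid4())
    store.setex(f"session:{session_id}", 3600, json.dumps({"user": user_id}))
    return session_id

def load_session(session_id: str):
    """Any replica can resolve the session, which is what makes a
    containerized, horizontally scaled deployment possible."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```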
So the refactoring part is fundamental to repackage the application for the new technology, and then it gives you more freedom in where to deploy it?
Yeah, you're right. Freedom is the key word here, because you're not doing this kind of refactoring for the technology's sake; you're actually doing it to decouple all the elements that make up your traditional infrastructure and application architecture. You're going to gain a greater and deeper knowledge of your own application stacks, and even look at them with a more critical eye, redefining what's needed, what's scalable and what is not, and so on and so forth.
By adding more freedom, as we said, you will be able to move more applications to the cloud. Do you see your virtual infrastructure shrinking in the future? Or do the applications and services that you are planning for the future mean that you will have a growing virtual infrastructure, along with growing, maybe exponential, growth in the container and cloud part of your infrastructure?
I think that the virtualized… the traditional workloads will probably shrink in number eventually. And that's because the whole container, Kubernetes, cloud-native and serverless thing is not only good technology; it really allows the people involved in the application life cycle to work better. Even if you think of software houses, which traditionally don't care too much about the packaging of their artifacts, it really changes the way they interact with their customers and the quality of what they're providing. So I think that these actors, the developers in the first place, are going to drive this move towards containers, and ultimately that is going to move a lot of workloads towards containerized applications.
So that means you're planning to make containers your atomic unit, somehow, and to rely on this orchestration layer, which could be delivered on premises or in the cloud, in either fashion. Does it mean that you don't want to push too hard on the services provided by major players like Amazon or Microsoft, for example, so that you are not trading the lock-in of the mainframe for the lock-in of the cloud providers?
That's a very good point. The general rule when it comes to externalizing services in a regulated market like finance, like the banks, is that you have to have a strategy to bring back what you put out. It really helps to have an application architecture that allows you to decouple from the specific implementation of a hypervisor or a container runtime or whatever. But if you're relying on a specific managed service from Amazon, for example, you have to know that you won't find it if you're going to Azure, or if you're going back on-prem, unless you can rely on some kind of abstraction framework that allows you to decouple from the specific services; but then you're not taking all the advantages that those services provide. So it's really challenging to find the sweet spot between going full cloud, leveraging the services that are optimized for the specific cloud provider in order to shorten the application lifecycle and make better use of the actual cloud resources…
…and, on the flip side, being able to keep your knowledge and move your workloads, if needed, to another cloud provider. Again, as of today, I don't see an easy way out of that. You really have to balance, case by case, project by project, whether you're going towards a better fit with the cloud provider, or staying with an on-prem scenario, which might cost way more in the first phases but gives you a lot more freedom in the evolution of your application.
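To illustrate the decoupling PG describes, here is a minimal sketch of a provider-neutral adapter layer: the application codes against an interface, and each deployment target gets its own adapter. The interface, the names and the boto3-backed variant are illustrative assumptions, not the bank's actual design.

```python
# Minimal sketch: a provider-neutral object-store port with two adapters.
from abc import ABC, abstractmethod
from pathlib import Path

class ObjectStore(ABC):
    """The neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(ObjectStore):
    """On-prem adapter backed by the filesystem."""
    def __init__(self, root: str):
        self.root = Path(root)
    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)
    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3Store(ObjectStore):
    """Cloud adapter; moving provider means writing another adapter,
    not rewriting the application."""
    def __init__(self, bucket: str):
        import boto3  # assumed dependency
        self.bucket = bucket
        self.client = boto3.client("s3")
    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()
```

The trade-off PG points out applies directly: the abstraction keeps the workload portable, but it also hides the provider-specific features that would make the cloud version richer or cheaper to run.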
I totally agree with you. So you have to evaluate it case by case and understand what you really need. Sometimes you can afford to buy a specific service from a provider because it can help you deliver your application more quickly, but that becomes a lock-in that you will pay for later, and that's quite challenging. Well, this was a very nice conversation, PG. Thank you very much for your insights on the banking industry, on the infrastructure, and on the fact that data governance is now very tough for you guys. So, where can our listeners find you, if they want to dig a little deeper into what you do every day?
I think the easiest way to find me is through my LinkedIn profile, so I invite you to contact me there and we'll chat.
Very nice, and the website for your bank?
Yeah. It’s www.popso.it.
Thank you very much, PG and bye bye.
Thank you, Rico. It was a great pleasure. Bye bye.