Podcast Episode

Voices in Data Storage - Episode 32: A Conversation with Veeam

In this episode, Enrico speaks to Anthony Spiteri and Michael Cade of Veeam about data protection across physical and virtual data storage in the age of ransomware.

Today's leading minds talk Data Storage with Host Enrico Signoretti

Guests

Michael Cade is a Senior Global Technologist at Veeam Software. Based in the UK, Michael is a dynamic speaker and experienced IT professional who meets with customers and partners around the world. An active blogger and social media persona, Michael is influential throughout the industry, and his expertise and advice are sought after on data center technologies, including virtualization and storage. Michael is a leading member of multiple technology communities, including VMware vExpert, the NetApp A-Team, the Veeam Vanguards, and Cisco Champions.

Anthony Spiteri is a Senior Global Technologist, vExpert, VCIX-NV and VCAP-DCV working in the Product Strategy group at Veeam. He currently focuses on Veeam’s Service Provider products and partners. He previously held Architectural Lead roles at some of Australia's leading Cloud Providers. He is responsible for generating content, evangelism, collecting product feedback, and presenting at events.

Transcript

Enrico: Welcome, everyone. This is Voices in Data Storage brought to you by GigaOm. I am your host, Enrico Signoretti. Today we will talk about data protection. Of course, data protection is something that evolved tremendously in the last few years. We started with a physical system, moved to virtualized infrastructure, and we saw a big change there. Actually, the cloud really changed everything.

Now we have an environment that spans public cloud, hybrid cloud, private cloud, SaaS applications, and mobile devices. It’s crazy, right? We need to protect it all, especially now with ransomware and malware of every sort. The solutions on the market today are changing as well to cover all of this and maybe more. To talk about this, I invited Anthony Spiteri and Michael Cade. Both of them are senior technologists in the product strategy group at Veeam. Hi, guys. How are you?

Anthony Spiteri: Hello, how are you?

Enrico: Before getting into the details, maybe we can start with a short introduction about yourselves and what you do in the company.

Anthony: Yeah, no worries. I think I’ll start, Michael. I’ve been with the company just over three years now. I actually started almost at the same time as Michael. We both work in the product strategy group in Veeam. What that means effectively is that we’re the conduits between our research and development team and the field.

When we talk about the field, we mean internally with our sales organization and our SE organization, but also, more importantly, customers and partners. The team is responsible for product feedback flowing directly between those two lines. It’s a pretty cool job because not only do we get to interact with our customers and partners, but we also attend most of the major conferences around the world and present at them; and locally in the region we attend lots of functions and conventions as well. We’re lucky to be able to create the content and present in front of people. That’s effectively what we do. It’s a pretty good job, and it’s good to be working with Michael most of the time.

As for me, I’ve been working in IT all my career, the majority of it in the service provider space. That’s my area of expertise. I’ve been lucky to work at ISPs, ASPs, hosting providers, and infrastructure providers pretty much my whole career before I started at Veeam. My focus is certainly around the cloud, working with our Veeam cloud and service providers, and seeing what’s happening in that area as well. That’s me. Michael, do you want to explain what you do?

Michael Cade: Yeah. I’m almost exactly the same as Anthony. Another focus that has really grown as we’ve grown in our IT careers is community. We’re members of the vExperts, Cisco Champions, all of the awesome community programs out there, and we’re really vocal on Twitter with the wider community as well. As for my personal background, I’ve also been in IT since day one, when I started building computers for Cambridge University in a small village out there, and then moved up into more of an infrastructure role: virtualization and storage, storage probably being my biggest background. As you move into these new worlds of hybrid cloud and where that data resides, that’s a big shift and a big change for me and for the whole community.

Enrico: I didn’t ask you about Veeam itself. I think everybody in the IT industry knows about Veeam, but maybe a short update could also be of help.

Michael: Like you said, Enrico, everybody has probably heard of Veeam. We’ve just ticked over 13 years, and we’ve got 365,000 customers across the globe. We started with a focus on virtualization backup, specifically around VMware. Then over the last 18 months we’ve really brought in that data protection angle and built out a platform that allows us to protect SaaS-based workloads within Office 365, agents for both Windows and Linux, and some older legacy-type infrastructure. We’re still focused on that backup piece, but broadening into the other aspects of where that data can reside.

Enrico: What I wanted to discuss with you guys today is that, as I said, the world is now hybrid; at least, this is the direction that many companies are taking. It’s a little bit more difficult than in the past to take care of an environment that, in the end, is made up of different silos. It’s not a single huge data center anymore. Actually, data is now created and consumed everywhere.

What do you see in the field? What are your customers asking for in terms of data protection? Are they still looking for point solutions? Do they want to protect their Office 365 in a different way than they do their virtual machines, or are they looking for something that is more integrated into a single platform?

Anthony: I’ll take this one. I think it’s interesting. You talked about this whole new world where people are consuming different types of platforms for their applications and for where they store their data. That means not only is there data sprawl, but there’s tremendous data growth as well, and both pose particular challenges.

In terms of the sprawl and the ability of organizations to back up their critical data across all platforms, what we’re seeing is that people are only really reaching an inflection point now in understanding that they need to look at this from a holistic point of view. Before, they understood they had some VMware virtual machines and had to back those up. Then if they had a little bit of information somewhere on Google Drive, they might think about backing that up. What normally happened is that the data that was offsite, not on-premises but in cloud-based platforms, wasn’t really considered data that needed to be backed up, because a lot of people thought that because it was in the cloud, it was backed up natively.

What we’ve seen specifically in the industry in the past 12 to 18 months is that organizations are becoming more aware that even if your data is sitting in a SaaS-based platform, in a drive somewhere that’s cloud-based, you still need to consider backup. Just because it’s in a cloud doesn’t mean it’s always going to be available. That’s one of the big things we’ve seen in the past 12 to 18 months: the realization that these different workloads still need to be backed up, in a similar way to what you would have considered for your on-premises workloads.

That’s why we’ve seen tremendous growth in our Backup for Office 365 product. It’s actually become our fastest-growing product of all time. The growth is tremendous, but it’s only been possible because people are more aware that they do need to back up their Office 365 data. It’s definitely a different world, Enrico, than it was even two to three years ago.

Enrico: What I am trying to understand here is more about their strategy. Do they think about the cloud, for example Office 365, as a separate silo or as part of their entire infrastructure? I ask because sometimes the people that manage Office 365 are part of a totally different team in the organization.

Anthony: Yeah, that’s an interesting one. My personal view is that at the moment it’s still relatively siloed. That’s more from an operational perspective, like you say, because there are typically different teams. The infrastructure team might not be the one looking after the SaaS-based Exchange, as an example. To a certain extent they are siloed, but what they do want, and what we’re finding, is that they want a particular vendor to give them the flexibility to back up all those siloed platforms.

Operationally they’re siloed; they’re all separate. But I think what organizations do want is a single platform with the ability to back up all of their data across multiple platforms. Michael, what do you think about that?

Michael: This is a trend that’s come to fruition now. People are understanding a little bit more about what that data set is and where it needs to reside to get more efficiency with less cost, whether that means performance or a better business outcome. People need to really understand what that data is. That then determines where the data needs to reside, in things like the SaaS-based products.

Really, the way forward in terms of removing the headache of looking after the underlying Exchange environment, the infrastructure behind it, the operating system that runs on it, the clustering, the HA, is that by just migrating into Exchange Online in Office 365, you take away that headache. Anthony, I know you were an Exchange admin back in the day as well. You know what that’s like.

Anthony: Yeah, for sure. That’s the change in the dynamic of what it is to operate something like Exchange. It’s effectively been offloaded to the cloud now, and that changes the way IT ops need to think about backup as well.

Enrico: You guys come from two different parts of the world: Michael is still in Europe, and Anthony is in Australia. What do you see in these different regions? Is the adoption of these tools and methodologies similar? Are there regions that are way ahead? We know that, from the cloud perspective, everything usually happens first in the US.

Anthony: It’s interesting, actually. If you talk to VMware, and us as well, they will tell you that the majority of innovation, a thirst to try new things, actually happens in New Zealand first. Ironically, New Zealand seems to be leading the world not only in the fact that it gets the day earlier than everyone else, because it’s first in the time zones, but in that they seem to adopt technology more quickly. That’s reflected in the whole of Australia and New Zealand, which was traditionally the most highly virtualized region of the world. By now everyone else has probably caught up.

Certainly in Australia and New Zealand, for some reason, virtualization took hold quickly, and the percentage of virtualized workloads grew even quicker. That was a similar pattern to the adoption of software as a service as well. It’s quite interesting here.

What’s really interesting, though, is that in the wider APJ region, which we look after a little bit as well, the uptake of virtualization, cloud, and software as a service is actually probably five years behind the rest of the world. It’s really interesting that ANZ has always been first to adopt, almost like a test bed, while the rest of Asia has been a little bit slower, almost last in the world. Michael, what do you think about England, the UK, and Europe?

Michael: Europe is a funny one. We can literally get across the whole of Europe in four or five hours’ worth of flight, and you hit X amount of countries along the route. One of the biggest things for us is the location where that data resides, especially over the last couple of years, the last 18 months: it’s been about regulation of data, GDPR compliance from an EU perspective.

We have lots of smaller countries where we potentially have to keep data locked within the four walls of that country for regulatory purposes. I think that’s a big challenge that I’ve seen especially this year. People need to understand what their data is, why they need to keep it, and then where they’re keeping it, and make sure that it’s also compliant from that perspective.

Enrico: From this point of view, we can say that the US comes first; most of the technology starts there. Then we have regions like APAC, where maybe there are some very advanced early adopters, but wide adoption mostly falls short, while maybe European regulations force us to look into new things and, in general, to look at data protection more closely. We have a lot of small countries, and everybody at the end of the day has to think about their local regulations, or sometimes also about the idea that they don’t want to keep data too far from their business. It’s a bit of a patchwork, something we can probably see in every segment of IT, not only data protection. And you were talking about regulations.

The advantage of being a data protection vendor is that you’re collecting data from a lot of sources in the company. What I’m seeing more and more often now is that, on top of the traditional data protection tools, we are adding more. Many vendors are working on mechanisms to dig into the data and try to find information useful for the business, for compliance. I know you launched something really cool around this in May at your corporate event. Maybe we can talk a little about that announcement and, in general, about what you can do when the source is backup.

Michael: I’ll take this one. I think one of our key focuses starts around backup and recovery, but now it needs to span further. We’ve just spoken about how clouds are moving into these various different pockets of infrastructure platforms that we can leverage as a production point for those workloads. Obviously, we need to be able to back that data up. Regardless of where it is, it needs to be backed up.

It needs to be made available so that if anything does happen, if there was a problem or a failure scenario, we can recover it. That’s potentially the first bullet point in what we believe is the focus around cloud data management. Just because those workloads sit in Azure, AWS, or on-premises in vSphere today doesn’t mean that in 90 days’ time that is still the most relevant or most efficient place for that workload to reside.

We’ve then got to look after the mobility of that data: the function of being able to take workloads from on-premises and push them into the cloud, or back again, or into a different cloud, based on a level of reasoning about why, whether it’s efficiency, like I said before, speed, cost, or just being able to provide more agile infrastructure because of Black Friday. We’re actually recording this on Black Friday.

You can burst that workload into the public cloud, where we have ultimately infinite resources to use, and then on Monday or Tuesday, because we’ve got Cyber Monday coming up as well, we can pull that workload back and put it back on-premises, where we know we’ve got constrained infrastructure that we’ve already paid for, without that burstable cost to consider.

Then we get to the areas that you mentioned, Enrico. How do we provide some sort of analytics on that data? How do we provide the ability for our end users to go into that data in an isolated fashion, actually pull some insight out of it, and determine, say, that it’s costing X amount here or it’s going to be more efficient if we run it on-premises?

In that same vein are the governance and compliance aspects we’ve also mentioned. How do we push that data into an isolated environment? How can we enable people to do more with that data to make better business choices around it as well? All of that really ties into orchestration and automation: being able to make and act upon those decisions based on the insight we’ve gathered from leveraging that data.

Enrico: Also, it’s interesting: if I look at the market now, with several vendors working on leveraging the data that you actually have, on understanding the data better and understanding the value of all the data in your company, it comes down to two different approaches. One is to build an application that does data management, data analytics, or whatever, directly on top of the data protection platform. The other is to give others the mechanisms, the APIs, to access this data, so that specialized applications are built by third parties or by the community, and it’s not the data protection vendor that has full control. By giving it to others, maybe you can find solutions that are more focused on a market segment, because if you’re an SME, you don’t have the same needs, even for analytics, as a larger enterprise.

On the other side, by having everything integrated, maybe it’s easier to do other things. I know your stance on this, because you announced that the product is in preview. I’m talking about the Data Integration API, which you announced first in May and then again last week at Tech Field Day. What is the status of this product, what are your expectations, and what are your customers asking from it?

Anthony: This is interesting, because it goes to the whole question of, once you have backup data, what do you do with it? How do you make it more valuable for the customer? The Data Integration API is the first step for us. To be fair, we’ve made that data work before: we’ve had things like DataLabs, which lets us instantly mount backup data for validation, just to make sure the backups have gone through. We’ve done things in the past to actually make the backed-up data valuable.

Where this is going is more about activating the data. You have these massive data lakes of backup files; what do you then do with them? The Data Integration API works by effectively exposing the backup data, mounting it to some system or platform, which can then be accessed by a third-party application to run analytics over. It’s interesting because traditionally we’re very agnostic. That’s one of our core pillars as a company.

We want to be agnostic for our customers. We don’t want to dictate which particular piece of software or hardware our customers use; we give them flexibility of choice. The Data Integration API is very much built with that in mind, because what we’re doing is mounting the data and then letting our customers choose their own tool to run checks against it, whether to try to find ransomware or to find credit card information within that data. That’s what this has all been laid out for.

That’s going to be part of our v10 release, which is due very early in the new year. It’s going to be a feature of that release, so effectively you’ll get it when v10 comes out. We’re quite excited about it because we’re keen to see what our customers are going to do with this functionality and what it’s going to offer them.
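To make the workflow concrete, here is a minimal sketch of driving that mount step from Python by shelling out to PowerShell on the backup server. The cmdlet names (`Get-VBRBackup`, `Get-VBRRestorePoint`, `Publish-VBRBackupContent`) follow the v10 Data Integration API as publicly described; the job name `DailyVMs` and the idea of running this from Python rather than native PowerShell are assumptions for illustration.

```python
import subprocess

def build_publish_command(backup_name: str, index: int = 0) -> str:
    """Build a PowerShell one-liner that publishes (mounts) the content of a
    restore point so a third-party tool can scan it. Cmdlet names are from
    the v10 Data Integration API; adjust for your version."""
    return (
        f'$rp = Get-VBRBackup -Name "{backup_name}" | '
        f"Get-VBRRestorePoint | Select-Object -Index {index}; "
        "Publish-VBRBackupContent -RestorePoint $rp"
    )

def publish_restore_point(backup_name: str, index: int = 0) -> None:
    # Run on the Veeam backup server itself (or via PowerShell remoting).
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         build_publish_command(backup_name, index)],
        check=True,
    )
```

Once the content is published, the mounted file system can be handed to whatever analytics or scanning tool the customer prefers, which is exactly the agnostic model Anthony describes.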

Enrico: As far as I know, there are already a few integrations coming from the community that are pretty cool. You have this very large community, across the world and built over time. These little pieces of code that just save your day are sometimes pretty good, and they come for free because usually they’re open source.

Anthony: That’s right. If you look, we’ve got what we call VeeamHub, which is a GitHub page where community members can put their code for everything. It’s actually been around for a couple of years, it’s very mature, and it’s got lots of different code examples in there already. The Data Integration API is certainly one area where we hope our customers will create new bits of code and new solutions, which hopefully we can share with the community. Like Michael said, that’s kind of where we’re at and what we’re thinking.

Enrico: While we are talking about the Data Integration API, you mentioned ransomware. This is one of the hottest topics in the industry. It looks like data protection is becoming the tool, not to prevent, but to fight ransomware. Usually when we talk about security, we think about firewalls, we think about the attack surface of the infrastructure. Ransomware is a very sneaky thing: you discover you’ve been attacked when it’s usually too late. Everything is encrypted and everything stops. It’s like a major disaster.

Anthony: It’s a huge issue, and it’s got lots of public attention. Where the Data Integration API can specifically help is that typically, as you know, Enrico, ransomware basically lies dormant on systems for months, potentially even years, before it actually gets activated by some trigger in the system.

One of the good things about the Data Integration API is that you can mount your backup data from yesterday, two weeks ago, three weeks ago, and effectively run it through antivirus software or a ransomware checker to try to detect ransomware. What you’re actually doing is detecting this dormant ransomware before it impacts you. That’s just a really good example of being able to activate the backup data before you get hit by ransomware. I think Michael even showed this a couple of months ago at an event we had.

Michael: Yeah, it was actually at Cloud Field Day back in April. Whether you’ve been attacked, or you think you’ve been attacked, or you’re just going to restore some data back into the live production system, we’ve got the ability to do what Anthony just described and trigger an antivirus scan. What we’ve also done with that same API is expose it so that when we do things like Direct Restore to an AWS EC2 instance or an Azure VM, or when we’re taking a Hyper-V VM or a physical machine and moving it to vSphere from a recovery point of view, we can also trigger that antivirus scan.

We’re just telling the antivirus software to perform the scan. We’re not the security guys; we don’t have our own antivirus definitions. We just expose this feature to any antivirus software that has command-line support, and we can trigger that antivirus to perform the scan and make sure the workload is not compromised before it goes back to wherever it needs to be.
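As a sketch of what that command-line hook can look like: the snippet below assumes ClamAV's `clamscan` as the scanner and a hypothetical mount path for the published restore point. It builds the scan command and maps the scanner's exit code (0 clean, 1 infected) to a verdict; any antivirus with a CLI could be swapped in, which is the point Michael is making.

```python
import subprocess
from typing import List

def build_scan_command(mount_path: str) -> List[str]:
    """Recursive ClamAV scan of the mounted backup content,
    reporting only infected files."""
    return ["clamscan", "--recursive", "--infected", mount_path]

def is_compromised(mount_path: str) -> bool:
    """clamscan exits 0 when clean and 1 when a virus is found;
    any other exit code indicates a scanner error."""
    result = subprocess.run(build_scan_command(mount_path), capture_output=True)
    if result.returncode not in (0, 1):
        raise RuntimeError(f"scan failed: {result.stderr.decode(errors='replace')}")
    return result.returncode == 1
```

A restore workflow would call `is_compromised()` on the mounted restore point and only push the workload back into production when it returns `False`.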

Enrico: Sometimes it’s not only about the data management piece, but about having some sort of air gap between the backup repository and the rest of the infrastructure. Even if you can’t analyze the data and something very bad happens, you can still restore your data.

One of the major problems is that ransomware is very smart in how it attacks the infrastructure. The first thing it tries to do is encrypt the backups so you can’t retrieve your data. It looks like this is changing: more and more vendors are trying to build immutable backup repositories to prevent malicious changes to the backup files, for example. How are you implementing these kinds of techniques?

Anthony: Enrico, it’s almost like you were there at our Tech Field Day seeing the presentation we gave. In version 9.5 Update 4 of Backup & Replication, which we released earlier this year on the 22nd of January, we shipped a feature called Cloud Tier. Cloud Tier effectively leverages object storage as an extension of our Scale-out Backup Repository, allowing you to offload data from a local repository into an object storage repository. That object storage repository can be Amazon S3, Azure Blob, or S3-compatible storage. Effectively, in the existing version that’s out, we move data from the local on-premises location to object storage. In v10 we’re enhancing that to add a couple of things.

The first is a copy mode. We’re going to have the ability to instantly copy the data from the local performance tier, as it’s created by the backup engine, into object storage. That creates a whole new copy of that data in object storage, so you’ve got two distinct copies, one being offsite in object storage.

In addition to that, we’re also introducing an immutability feature, compatible with Amazon S3 and with S3-compatible storage that supports Object Lock and versioning. That effectively puts an immutability lock on the most recent backup files. You set it as part of a policy when you create the object storage repository. If you set it for 30 days, it means that as soon as the backup files are created locally and copied into object storage, they’re immutable for that set period of time, which effectively means ransomware has no way to change those files up there. That’s one way to protect it.

It’s not ransomware prevention per se, in that it’s not going to stop something happening locally. You could still get hit locally, but you’ll still have the ability to recover. Fundamentally, what’s different is that once those files are in object storage, they’re locked. They cannot be altered, deleted, or changed.

You can basically remount them, re-import the backups into a backup server anywhere, and effectively restore from that point forward. The immutability is something that’s coming, and we’re really looking forward to it. We think it’s going to be a very big feature.
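The mechanism Anthony describes maps directly onto S3's Object Lock API. Below is a minimal sketch, not Veeam's implementation: a helper that builds the `put_object` arguments for a 30-day retention policy, with commented lines showing how they would be used with `boto3` against a bucket created with Object Lock enabled. The bucket name is hypothetical.

```python
from datetime import datetime, timedelta, timezone

def object_lock_put_args(bucket: str, key: str, retention_days: int) -> dict:
    """Arguments for s3.put_object that keep the uploaded backup file
    immutable until the retain-until date, mirroring the repository
    policy described above."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        # COMPLIANCE mode: nobody, not even the root account, can shorten
        # the retention or delete the object version before it expires.
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Against a real bucket it would look like this (not run here):
# import boto3
# s3 = boto3.client("s3")
# s3.create_bucket(Bucket="veeam-offload", ObjectLockEnabledForBucket=True)
# with open("job.vbk", "rb") as f:
#     s3.put_object(Body=f, **object_lock_put_args("veeam-offload", "job.vbk", 30))
```

Because Object Lock requires versioning, a ransomware actor with stolen credentials can at most write new versions; the locked versions stay recoverable until the retention date passes.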

Enrico: We are talking about a simple approach to problems like ransomware, without spending a huge amount of money. Creating complex infrastructure isn’t easy for everybody; then you have to manage it, you have to pay for licenses or hardware, and so on. With this kind of feature, you can get ransomware protection with just a handful of best practices and some cloud storage. From my point of view, it’s not the large enterprise that is at risk with ransomware as much as the small company. They don’t have the budget, they don’t have the tools to do sophisticated scanning on –

Anthony: Exactly, yeah. That’s correct, Enrico. That’s why the fact that we’re simple, reliable, flexible, and also agnostic, so not locked to any particular hardware, means we’re going to work with anything. We’ve made this feature very simple: effectively you just tick a checkbox, set the policy, and then your backups are protected and immutable. You’re right; it’s going to be a very simple solution for those smaller companies.

Enrico: We talked about a little bit of everything here: data protection, SaaS, data protection being at the core, and some data management being important for security and ransomware. It looks like the further we go, the more data protection becomes a critical component of every infrastructure, even more than in the past. What do you think about the next steps? Where are we going as an industry, and what are users asking for in the next step of their data protection strategy?

Michael: I’ll take this one. I think at the moment we’re in an adoption phase where we’re seeing a lot of customers really heading into that hybrid cloud mentality. They’ve made a huge investment in the systems, infrastructure, and platform they have on-premises, yet they can absolutely see the benefit of the public cloud, the hyperscalers. You can see from a lot of the conversations we’re having that they’re absolutely trying to leverage the hyperscaler platforms, whether it’s Azure or Google or AWS or any of the others out there. I think that has to be a focus from a data protection point of view.

Next week at AWS re:Invent, we’re announcing our ability to agentlessly protect AWS EC2 instances in a very easy, simple fashion; there will be a lot of news around that. Two weeks ago at Microsoft Ignite, we announced exactly the same for Azure. If you look at both of the interfaces, they look exactly the same. They’re just leveraging the APIs underneath that perform the equivalent functions in the relevant public clouds. That’s really going to allow us to protect those workloads natively within AWS and Azure.

The biggest point is that, yes, this allows us to protect the workloads that customers have moved up, but the format of those backups is still going to be the Veeam portable data format, the VBK format, and that format can still be read by Veeam Backup & Replication on-premises. All of the recovery options we have still apply, whether it’s guest file-level restore, application item restore, or even the Cloud Mobility capability we have for taking a workload and converting it into an Azure VM or an AWS EC2 instance. That whole flexibility that Anthony briefly touched on is expanding the platform to let us protect those cloud-based workloads.

It also feels like there’s a toe in the water around platform as a service, with people migrating from on-premises databases and pushing them natively into things like RDS, or SQL as a PaaS solution. We’re looking at how we protect those workloads from our customers’ point of view and researching how that needs to look; just because we did it one way on-premises with SQL doesn’t mean the same approach applies. We can’t just ‘lift and shift’. To put that into perspective: traditional infrastructure guys like me and Anthony are moving into more of this cloud and cloud-native world with our workloads, and it’s very much like the time virtualization hit. A lot of the data protection vendors out there just took the agent they were using to protect physical machines and put it onto a virtual machine.

Veeam came along and changed the way we protect those virtual machines, in an agentless fashion. Then it became about how efficient and how performant we could be in those areas. I think we’re at an inflection point now where whatever we do next is not going to be the same as what we’ve done before; it has to be a different approach. As we move into those PaaS-type workloads, you’ve got more cloud native, changing the way the application itself even looks and feels, so more containerization. We’re absolutely looking into those areas as well, and into how people adopt them.

We speak quite closely to VMware’s engineers, and VMware’s show this year, and I know you were there, Enrico, was very much around Kubernetes and how they’re going to give their infrastructure and operations guys, who have known vSphere for the last 10 or 15 years, the easy button to start getting into Kubernetes and really focusing on that area. From a data protection point of view, it’s a very different function, a very different look and feel, and we have to be very mindful of how we protect the stateful data that comes with that.

Enrico: Something that you said is really interesting. When you start looking at protecting data in different types of environments and then converting virtual machines into, for example, AMIs or whatever, that’s pretty cool. Data protection platforms are also somehow becoming a source for data migration. In the long term I can see this going further, especially because of things like the Data Integration API: you can start backing up Microsoft SQL and then convert it into something else that is available in the cloud, that kind of thing.

Again, at first sight, if you think of backup only as a liability, as a way to protect your data and nothing more, it’s a little bit boring. Sorry, guys, I know you work in that field. But if you think about the other things data protection brings to the table, and we’ve talked about many of them, many potential use cases on top of the data you’re protecting, then it’s fantastic. There’s a lot of potential there that we can explore with a little bit of integration and some features that may eventually become one click and go. From my point of view, it’s amazing; the world really is changing.

Michael: Yeah, it really is, and I think you’re absolutely right, Enrico. Backing up is table stakes. We need to understand what the data is and how we can back it up as fast as possible.

All of our competition can do it; we can all back up. The first cool part is how quickly you can recover, and where you’re recovering from. Then I think you get into: what else can we do with that data?

Can we leverage that data to do something more? Can we use it to migrate, or to offer a self-service sandbox environment to our developers or our security team? Can we expose that data for classification-type compliance, or even just reporting, so you understand what the data is and have a bit more insight into it? Then you’ve got the whole monitoring and analytics side.
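To make the data-classification idea concrete, here is a deliberately toy sketch of scanning restored backup content for PII-like patterns. This is not Veeam’s classification engine; the categories and regexes are invented for illustration, and real engines use far richer rule sets.

```python
import re

# Hypothetical PII patterns -- illustrative only, not a production rule set.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> dict:
    """Count PII-like matches per category in a blob of restored backup data."""
    return {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}

sample = "contact: alice@example.com, ssn on file: 123-45-6789"
print(classify(sample))  # one email and one SSN-shaped string found
```

Pointing something like this at data restored from backups, rather than at live systems, is what makes the “reuse the backup copy” idea attractive: the scan never touches production.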

I think that’s really where the coolness is for me: knowing what’s coming from a roadmap point of view, because there are so many different angles we’re exploring. Cloud Mobility is exactly that: workload migration, workload mobility, being able to move data wherever it needs to be based on the insight we’ve got from a compliance or monitoring point of view. Then there’s being able to orchestrate that workload: where does it need to go because of cost, because of performance, because of whatever else? That’s the exciting part from a data management point of view. It’s not just about backup; it’s about all of those other areas, and all of these new areas now available to our customers as well.
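The cost-versus-performance placement decision described above can be sketched in a very simplified form. The targets, prices, and IOPS figures below are made up, and a real orchestration engine would weigh many more factors; this just shows the shape of the decision.

```python
def choose_target(targets, min_iops):
    """Pick the cheapest target that still meets the workload's performance need.

    targets: list of (name, monthly_cost, iops) tuples -- illustrative values only.
    Returns the chosen target's name, or None if nothing qualifies.
    """
    eligible = [t for t in targets if t[2] >= min_iops]
    if not eligible:
        return None
    return min(eligible, key=lambda t: t[1])[0]

targets = [
    ("on-prem", 900, 50_000),
    ("cloud-a", 400, 20_000),
    ("cloud-b", 250, 8_000),
]
print(choose_target(targets, min_iops=15_000))  # cheapest target meeting 15k IOPS
```

The point is that once backup data is portable, “where should this workload live?” becomes a policy question the platform can answer automatically.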

Anthony: Also, what makes me excited, and it’s funny, Enrico, is that if you had asked me three years ago whether I’d be willing to work for a backup company, I would have said no. The reality is that Michael and I keep very busy on a number of different fronts. When I look at our technology and what it enables, it’s quite amazing. It’s not just backup; backup is the start. That’s what Michael has been saying with the table stakes.

We obviously have a great service provider community that offers Cloud Connect backups. They provide a cloud repository that’s very easy to back up into, and they also offer replication services, so you can replicate Tier 1 workloads at the press of a button for DR purposes. I think that’s more than just backup.

I think one really good example of the coolness of the technology and the innovation we have is the cloud tier I talked about; you mentioned migration. When the new copy mode comes out in v10, that cloud tier can be used for migration purposes, and migration with really small RPOs as well. Like I said, as soon as you create the backup file, a copy of it goes into the object storage, and from there you can recover it anywhere. Then, using Instant VM Recovery, which is our patented technology, you can bring those workloads up on vSphere or Hyper-V within seconds.
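The copy-mode behaviour described here, where a backup file landing in the repository is copied straight out to the object tier, can be mimicked with a toy loop. Local directories stand in for the repository and the bucket, and none of this reflects Veeam’s actual implementation; only the `.vbk` extension (Veeam’s full-backup file format) is real.

```python
import shutil
from pathlib import Path

def copy_new_backups(repo: Path, bucket: Path) -> list:
    """Copy any backup file not yet present in the object tier; return what was copied."""
    bucket.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(repo.glob("*.vbk")):      # .vbk: Veeam full-backup files
        target = bucket / f.name
        if not target.exists():               # idempotent: skip files already copied
            shutil.copy2(f, target)
            copied.append(f.name)
    return copied
```

Run on a schedule or from a file-system event, a loop like this keeps the object tier at most one backup interval behind the repository, which is what drives the small RPOs mentioned above.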

Backup in and of itself, looked at in the traditional way, can seem a bit bland. Like you said, it’s about how you activate the data: how you take the different technologies a company like Veeam offers and is innovating around, and make them work in ways you probably wouldn’t have thought they could work for you before. That’s a really exciting part of being in data protection today, I think.

Enrico: That’s good. I think we’ve had a great conversation, but unfortunately it’s time to wrap up this episode. Maybe we can finish with a few links: where to find the Veeam community, you guys on Twitter, and Veeam itself.

Anthony: My Twitter is @AnthonySpiteri. I also blog at Virtualization is Life!, which is AnthonySpiteri.net. Like I said, a lot of the content we put up is also on the Veeam.com/blog website. From a community perspective, I’ve got GitHub going as well, and that all feeds into our Veeam GitHub organization, which is called VeeamHub. That’s where you want to be looking from a community perspective. Michael, do you want to talk about your particular –

Michael: From a Twitter point of view, you’ve got @MichaelCade1. We’re both pretty active on Twitter, and any questions about what you’ve heard today, we’d be more than happy to answer. I blog a bit over at vzilla.co.uk.

The only other resources I’d add are the Cloud Field Day and Tech Field Day sessions we did. The Cloud Field Day went deep into Cloud Mobility and those multiple data formats, and I’m really interested in how we do it differently than the others. More recently, at Tech Field Day 20, we got to go into a little more detail around Cloud Tier in particular and the new features coming there that we’ve already mentioned today, as well as our NAS backup coming with the v10 release and a few other things coming later down the line.

Enrico: Fantastic. I think we can call it an episode with this. Thank you again for your time today, guys. Bye-bye.
