In this episode Enrico Signoretti talks with Boyan Ivanov of StorPool about the state of data trends, strategies and new architectures in the ISP market.
Boyan Ivanov is CEO and co-founder at StorPool. He started programming at the age of 10 and, at the same age, started his first venture; the latter stuck. He has worked in the enterprise and SME worlds, in IT, banking and the financial sector, and was part of several startups prior to StorPool. Now he is focused on improving data storage for companies building public or private clouds.

Boyan works with Cloud Service Providers, Shared Hosters and MSP business leaders (CEOs/CTOs/Owners) to greatly improve business performance and profitability through designing, deploying and managing fit-for-purpose storage solutions.
Enrico Signoretti: Welcome everybody. This is Voices in Data Storage brought to you by GigaOm. I’m your host Enrico Signoretti, and today we will talk about trends, strategies and new architectures in the ISP market with a special focus on storage of course.
Major service providers like Amazon, Microsoft and Google have been growing like crazy for years now, but there are a lot of second- and third-tier providers that are doing very well. The reasons for their success are many: some of them work in particular niches, others are pretty strong in a specific region, and some are particularly cheap or have very high quality standards and SLAs. And all of this without mentioning other competitors like MSPs (Managed Service Providers), for example, that are doing pretty well too.
My guest for this episode is Boyan Ivanov, CEO and co-founder of StorPool. StorPool is a European startup with a software-defined storage solution that is becoming very popular among service providers. Hi Boyan, how are you today?
Boyan Ivanov: Hi Enrico, I'm quite well thanks.
Thanks for joining me today, why don't we start with a brief introduction of yourself and your company?
Okay. My name is Boyan Ivanov, and I'm one of the founders of the company and the so-called CEO, which is more like ‘Chief Everything Officer.’ We started this company at the very end of 2011, over seven years ago. We were building storage in the software stack back then, and still today: storage solutions that run on proprietary, storage-only boxes can easily be replaced with standard servers running storage software on them. And this is what we do.
We help companies build software-defined infrastructure, software-defined storage. We've grown the company over time, and the majority of our customer base is still service providers. These are Infrastructure as a Service (IaaS) companies, MSPs and hosting companies, but we also have a fair share of pure-play enterprise and telco customers.
Very good. As I mentioned at the beginning, your solution is doing pretty well with service providers, and you confirmed that. But how has the work of service providers changed over the last seven years… during the life of your company?
There have been a lot of changes. I think one of the significant ones is the shift to cloud. You have a lot of service providers that are either competing with the big guys like the Amazons and Azures of the world, or companies that are becoming more niche with better support, better local presence, etc.
But the overall market is growing quite well, regardless of the big players growing at a very fast pace. You still have local players growing quite a lot as well. So I think the market is not as sexy as it used to be in the beginning, seven or even more years ago, but there is still good growth, and a lot of the companies that are doing well are looking for quality and for solutions that give them a competitive edge. So from this point of view, it's still a healthy market.
So what do they look for in data infrastructure when it's time to expand or renew it?
There are, broadly speaking, two types of companies. One is the companies that are looking for the cheapest solution on the market, because they're playing in the low-margin segments and it's all about price. The other segment, broadly speaking, is about value, and that's value for money. They're looking for innovative solutions that can help them improve or add new services, but they're not necessarily looking for the lowest price. They're looking for very good price/performance or price-for-quality metrics. So these are basically the two segments you can distinguish in this market.
But if we categorize ISPs into, let's say three groups: small, medium and large. Do you see any difference in the way they operate and the way they think about their infrastructure?
Definitely. So the smaller guys are usually more competitive and usually compete on price. That puts them in a position where they can’t afford large investments or to change ahead of time, so they basically have a short-term focus and are very cost sensitive. The mid-sized players are usually well positioned, so they don't compete head-on with the very large enterprises. They are in a niche, be it a local market, or a particularly good service to their customers and the packaging of the product.
So these guys are looking at how they can improve their infrastructure and make it future-proof for the next three or four years. They're spending money in a different manner: they're looking to gain a competitive advantage or to keep the advantage they already have. The big guys, with some exceptions, are more like financial operations. They're thinking, ‘How can we get a larger market share? How can we acquire smaller companies?’
In many cases their investment focus is not even on the infrastructure as much as on how they can improve their financials and grow market share. In many cases they go with the traditional infrastructure model, so they continue to buy storage boxes from two or three vendors, without looking into efficiency, because their goal is capturing market share. In other cases these guys are looking at what the next-generation infrastructure would be. How can they replicate the inner workings of Amazon or Google? In that case they either try to build something themselves or develop the infrastructure internally.
I see. What is happening in the upper layers? Because if you look at the enterprise space, there are technologies like OpenStack that you can somehow think of as dead technologies. You don't see them developing anymore, you don't see that adoption. So what is happening in the ISP space?
That's a very interesting question. I think the cycle from when a new technology is cool and hyped, to when it actually gets adopted by the market, to the point at which it becomes a legacy technology, is shortening. So this cycle from interesting, hyped product to obsolete, or standard, or not-cool technology is very short now.
It's true on the OpenStack side; we've been exhibiting at and been part of the OpenStack community for quite some time now. And this year they rebranded it to the Open Infrastructure Summit, which shows that the OpenStack movement is not as strong as it used to be. You have new things that are much cooler, like containers and Kubernetes. In my view, the interesting things that are coming, Software as a Service (SaaS) and Function as a Service, are going to be the next wave, but maybe we're a year or two away.
So, to answer your real question: a lot of the projects… service providers that were running OpenStack, a lot of them failed in the end, so they had to replace it with something else. I see OpenStack as a good solution for large enterprises and some telcos. The very, very large cloud providers were running OpenStack, but for everybody in the middle there was some success here and there; most of these guys are still not running OpenStack or are keeping away from it.
You mentioned Function as a Service, serverless, but you didn't really mention Kubernetes. Do you think that Kubernetes can have a space in this market or not?
It does. But from my experience with service providers, many of these guys are trying to cater to their existing customer base. This base obviously changes over time, but the ‘bread and butter’ of this market is still companies or individuals running their websites, and these websites usually need a VM. Sometimes they have new-age applications that require containers and they cannot extend their platform, but still, the majority of these guys are adding containers as a new service.
But your traditional service stays pretty much unchanged; it has incremental improvements. So we also see Kubernetes, but in my experience mostly on the enterprise side, where people are putting microservices in containers or have applications developed from scratch to be cloud native. And we haven't seen a lot of service providers building huge container or Kubernetes farms. So there's a mismatch in where the use cases for containers are stronger: for service providers' customers they're ready to go to AWS, while you actually see containers used more in enterprise and telco environments.
Got it. Last week we also had a chat and you mentioned object storage. There is a rise in demand for this type of storage, and some interesting use cases, I would say, with small and sometimes fast object storage requested by the end users. Maybe we can dig a little deeper into this. What are the use cases these service providers see? What kind of infrastructure are they planning for object storage?
That's an interesting one, because we started to see object storage become of practical interest to companies this year, actually. We've seen a lot of object storage projects for companies that were doing web applications, or new-age applications that are cloud native, and everything that has video or photos, or applications that need to write data once and not rewrite the same data. Historically this has been the newest type of storage on the market, and therefore fewer applications were traditionally available for it. Now it's the fastest growing segment.
And if you look at traditional service providers, they used to run virtual machines, sometimes containers, but that's usually websites, email and web servers; that's still their typical application stack. It usually uses block storage, and these new applications were going directly to something that has an object interface, and S3 is the de facto standard there. So if you had an object storage application or use case, in the service provider domain you were, by default, usually going to AWS. In the enterprise, you usually go and get object storage software and run it on-prem.
So with service providers, I see customers coming to them, actually this year, asking, "Hey, I have this new application that I'm building, can you provide me an object store?" And I've had requests from our customer base, and from other prospects reaching out and asking, “How can we keep the traditional thing that we have, which is running virtual machines requiring block storage, and then add object storage to that?” So I think this is again twofold: you have the enterprise cloud-native applications on the one side, and object storage just coming to the traditional service providers this year or next year.
Very good. And what about files? Are these ISPs following the same path of large providers, meaning block came first, then object and then files?
Rarely. I think file was more applicable to enterprises, where you have Word and Excel and PDF files and you're working at that level of abstraction. For service providers, file is obviously important, but in my experience you have the underlying infrastructure in block, which is running virtual machines, and then it's a hop over to object storage for the new-age applications that are growing at such a fast pace. You have block and object, and you still have some file, but I don't think you're going to see huge growth in file among service providers in the coming years. It’s going to go the object way.
And back to block storage. What is the status of NVMe and flash storage in general? I mean are they adopting a NVMe? And if so, do they really take advantage of it?
The short answer is yes. Flash storage is now the de facto standard; that's usually SATA SSDs, ideally data center grade. We see some service providers and other companies trying to run mission-critical stuff on consumer-grade SSDs, which is very scary, very risky in our experience. So data center SATA SSDs are kind of the de facto standard. Last year and the year before we saw some demand for NVMe, but this year I expect something like 90% of the new storage systems we deploy to have NVMe drives in them. Why? Because the price difference compared to SATA SSDs is not that large, but the performance and, more importantly, the latency metrics are much better, and so is NVMe over Fabrics.
So I think the actual driver for NVMe is not the big headline numbers but latency, which in our experience is the number one metric: how low can the latency be. That's a huge driver of NVMe and also of NVMe-oF-type solutions.
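The point that latency, not throughput, is the metric to watch can be illustrated with a tiny measurement: time individual 4 KiB reads and look at the average and tail. A minimal sketch in stdlib Python; it reads a scratch file through the page cache, so the absolute numbers say nothing about a real NVMe device — it only shows how per-I/O latency (and its p99 tail) is measured, which is what tools like `fio` report properly.

```python
# Sketch: measure per-read latency (avg and p99) of 4 KiB reads.
# Numbers reflect the OS page cache, not a storage device; the
# shape of the measurement is the point, not the values.
import os
import statistics
import tempfile
import time

BLOCK = 4096   # 4 KiB, a typical small-I/O size
COUNT = 1000

# Create a scratch file of COUNT blocks.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * COUNT))
    path = f.name

latencies_us = []
with open(path, "rb") as f:
    for _ in range(COUNT):
        t0 = time.perf_counter()
        f.read(BLOCK)
        latencies_us.append((time.perf_counter() - t0) * 1e6)

os.unlink(path)

lat_sorted = sorted(latencies_us)
avg = statistics.mean(latencies_us)
p99 = lat_sorted[int(0.99 * COUNT)]  # 99th-percentile tail latency
print(f"avg read latency: {avg:.1f} us, p99: {p99:.1f} us")
```

A system can post huge aggregate throughput while its p99 latency is poor, which is why per-I/O percentiles, not MB/s, are what distinguish NVMe-class systems.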
So do they look at the NVMe over Fabric and NVMe-TCP as the next step for data storage infrastructure, or is it too early?
Oh, not necessarily. If you look at NVMe-oF, it's supposed to be a standard, but if you look at the solutions on the market, most vendors have their own implementations or drivers that emulate NVMe-oF but are not the standard. StorPool is included in that: we have a set of technologies that do kernel bypass and talk to the hardware directly, so they don't have to use CPU for that, and deliver extreme levels of low latency; we can do 50-microsecond and even 5-microsecond reads. So these are extremely fast systems with NVMe-oF-like technology.
So from this point of view, the customer is usually looking for a way to reduce latency to the minimum possible. The technology in play that they talk about is NVMe-oF, but what they actually mean is a very low-latency protocol to access NVMe devices. As for NVMe-TCP, I don't see it, at least in our market segment.
We're also not using the TCP stack at all, because it's too slow for high-performance storage applications. So the short answer is that I think NVMe-oF-type solutions are very interesting because of the very low latency and high performance, but you also want to deliver a scalable, feature-rich, intelligent storage system to the end user, one that has quality of service and can do snapshots. That is what would be ideal for any enterprise or service provider.
Earlier you mentioned that service providers are looking at two ways of thinking about their data infrastructure. I'm working with GigaOm in the States these days on a report about composable infrastructures. In my research I found that composability makes sense for large infrastructures, in the range of thousands or maybe tens of thousands of servers. Are these providers looking at solutions that change the way they design their infrastructure, thinking about composability as one of the options?
In my experience, not yet. It's an interesting idea, but my experience is that the larger players that have the scale to think of more custom things in their stack are, at this stage, still looking at how they can improve their building blocks: making custom servers or custom racks, or making system-on-chip things that offload the CPU or network cards. They're still at the level of ‘how can we optimize and specialize building blocks’ rather than doing composable infrastructure.
I think composable infrastructure is an interesting concept that might take hold in a couple of years, but at this point in time you see guys like Amazon building their own chips based on ARM technology, and I see more of that on the market. Guys are taking special components, or taking programmable processors and putting very custom logic in them for their large-scale infrastructure, and optimizing at this component level rather than at the whole-system design level. I think composable will make more sense over time, as the technology matures and the protocols, interconnects and everything start to work well together, and there are enough options that you can just go and buy it as a whole stack.
But do you think that CPUs like ARM for example would be adopted by medium and small service providers as well? Or it's just something for very large providers that can give more options to their customers because of the size?
I think it's early days. This trend will develop over maybe two to five years, but I do see the sort of applications that would benefit from ARM servers, and we actually see rising demand for them. I don't believe the future will be 100% cloud; we're not going 100% public cloud, you have some things that stay on-prem. It's more of a hybrid cloud, and you have things being pushed to the edge. For edge computing, which doesn't require huge processing power but does require power efficiency, ARM processors make a lot of sense.
So in a way I think there is a good fundamental case for ARM technology, also because it's finally catching up to some of the low-end x86 CPUs. From this point of view, ARM processors are more power efficient, but now powerful enough to run some applications. So I see ARM moving over from the mobile world to more data center compute and also to the edge, and I expect to see more ARM in the future.
Boyan, this was a very nice chat, but I'm sure that our listeners would like to continue this conversation online, so why don't you give us a few links about StorPool and your Twitter account and social media handles for you and your company.
Thanks Enrico for having me. If somebody wants to find us, we're easy to find at www.storpool.com, and we're @storpool on basically all the social media: Twitter, LinkedIn, as well as Facebook. The only difference is that we're StorPool Storage on YouTube, where we have a bunch of videos and interesting content, and people can check out our blog at www.storpool.com/blog.
We have an interesting comparison of storage-related technologies there, and we're coming up with an interesting comparison to different public cloud offerings that's going to be rather cool. We also have a very interesting presentation about why latency is the number one metric of any cloud, which has had a very good response among our readers. So that's an interesting piece to check out.
Great! So this is a wrap then, and thank you again, and bye bye.
Thank you Enrico.