Voices in Data Storage – Episode 6: A Conversation with Zachary Smith of Packet


In this episode Enrico Signoretti talks with Zachary Smith about bare metal cloud, best of breed services, cloud alliances, and the benefits of standardizing some of the hardware used in cloud implementations.

Guest

Zachary has spent the last 16 years building, running and fixing public cloud infrastructure platforms. As the CEO of Packet, Zachary is responsible for the company's strategic product roadmap and is most passionate about helping customers and partners take advantage of fundamental compute and avoid vendor lock-in. Prior to founding Packet, Zachary was an early member of the management team at Voxel, a NY-based cloud hosting company that built software to automate all aspects of hosting data centers and was sold to Internap in 2011. He lives in New York City with his wife and two young children.

Transcript

Enrico Signoretti: Welcome everybody. This is Voices in Data Storage brought to you by GigaOm. I’m your host Enrico Signoretti and my guest for this episode is Zachary Smith. Zachary is co-founder and CEO of Packet, an innovative and alternative cloud service provider headquartered in New York. On his LinkedIn profile you can read that Zachary is a serial entrepreneur and innovator. In fact he started a lawn mowing business with his brother and later moved to IT. He has been focused on building highly automated infrastructure platforms for more than 15 years, and in 2014 Zach co-founded Packet.

Packet is a cloud service provider focused on providing bare metal resources quickly and efficiently—leaving its customers complete freedom of choice over what runs on them, and maybe we will come back to this later. Today we will talk about two topics that are somewhat interlaced with each other. The first is what I mistakenly called a ‘federation of clouds’ in my last meeting with him. And then we will touch on infrastructure compatibility—a topic I'm researching for a report I'm writing for GigaOm. Hi Zach! Welcome to the show.

Zack Smith: Thanks Enrico. It's so nice to meet you.

Thank you for taking the time to record this episode. Let's start with the basics. I gave a very brief description of Packet, but maybe you want to give us a little more background about its roots and your vision.

Oh certainly, I'd love to. Well you're right, I did start first and foremost in the lawn mowing business. That was in Southern California when I was about 8 years old. Things have advanced a little bit more than that. I kind of took a segue through a career in music. I went to the Juilliard School to study double bass, and it turns out that my true love was actually with computers.

So at the age of 20 I joined a small infrastructure startup back in, I guess it was 2000 or 2001, that focused on providing high scalability Linux based hosting solutions. It was an early, early time in the cloud. I think it was called ASP at the time, or maybe it was just turning into dedicated hosting. But we went through this incredible change where the original hackers of the Internet went from IT users to developers, and so I got to kind of have a firsthand seat at the DevOps evolution or revolution, and sold that business back in 2011. It taught me a lot. I learned a lot and I literally grew up in the early days of the cloud.

Back in 2014 we started Packet for a few different reasons. The idea and goal was that the user of infrastructure in the future was almost exclusively a software developer and we needed to make a highly automated platform that allowed them to develop software on the whole stack. We had seen other alternatives in what would become the leading public clouds today that were very focused on providing infrastructure but with a high amount of opinion in the software. And we said, “Could we provide an experience equal to or better around highly automated infrastructure but without the opinion and abstraction layers of software?”

So we did that. We started in 2014 hacking on how we could provide this service and really questioned, “Did the market want or need an automated bare metal provider?” I think the answer to that was ‘yes.’ The cloud native movement—whether it was through Docker with Solomon or CoreOS with Alex Polvi—and all the work that had been done around highly portable workloads, which you now see through Kubernetes and all kinds of different portable schedulers, really just accelerated our business. We arrived on the scene at the right time—where we were providing very fundamental infrastructure with strong APIs to a world that was highly empowered with software.

Okay. So ‘bare metal cloud’ is the right term to define what you do today. Right? And as a step forward in the realization of your vision, you are now promoting a new approach to the cloud—a sort of a federated multicloud, to put it in a very simplistic way. But maybe you can develop a little bit on this and let our listeners know more about this vision.  

Absolutely. One of the major benefits of the hyperscale clouds today is the breadth of services. You have a whole suite of integrated software and infrastructure based services that generally tie together very well, whether it's through authentication or aligned legal terms or things that happen around billing. What we see is a world where we have highly opinionated software stacks that are coming and evolving. I'm going to call that ‘best of breed software,’ whether that's through proprietary software arms dealers or through open source. And you have this scenario where you have a whole collection of leading SaaS providers who are doing certain things like object storage or data analytics or machine learning, and then you have infrastructure providers like ourselves who provide best of breed compute resources, access to storage without abstraction, pretty much disks on demand, network services, etc.

The problem is that for customers, it's quite complex to deal with best of breed vendors. If you have three or four or five different vendors that you would buy equipment from and put it in your data center, at least you're working under a standard operating procedure, which is: you are putting them in your data center. But when you start to buy and consume services—whether those are SaaS services, PaaS platforms or infrastructure services via cloud—suddenly you're relying upon the provider to be your operator. And this is where the ongoing contractual, billing, and compliance things become really complicated.

So what we're trying to do now is say, “Could we create a new age of cloud alliance?” How could we build something where we could align and make it easier for enterprise customers to buy and consume diverse best of breed services from multiple cloud providers, but have the consistency and safety that they're expecting out of aligned business terms? So that's what we're starting with. We'd love to dive into details on where we think the most important ones are to begin, or some good analogies. But we really feel that as infrastructure and cloud become an integral part of the enterprise, these issues are going to bubble to the top.

Yeah. And you and some of your partners have already started to cut transfer rates, for example, and make services more accessible from your respective infrastructures. Right?

Exactly. Yeah. A couple of years ago we started what was called the Bandwidth Alliance. The original idea was ‘Why should we charge people for moving data between say CDNs and their origin providers like Packet or whatnot?’ And it moved beyond that—why would we want to charge them to move their data between two different clouds that are sitting literally a cross-connect away? And yet one of the biggest barriers right now to using best of breed services is the transfer charges and penalties—a tax that people pay for moving their data around.

And so we started this Bandwidth Alliance, and the question was ‘Could we zero rate, very similar to the early days of the Internet when we were doing peering agreements, where we would interconnect different networks, say content networks and eyeball networks, and say as long as we're interconnected over our routers and we don't have to pay to reach each other, we're not going to charge our customers, we're going to zero rate that?’ And we wanted to extend that to the cloud. So we said, “Hey, could we take common services?” We worked with Cloudflare and we worked with Fastly and we worked with other CDNs, and we said “Hey, can we zero rate? We're already connected in peering—can we offer that benefit to our users so that they aren't penalized?” And that just kind of built upon itself. More people wanted to join, especially as so much of the Internet these days is what we call East-West traffic.

We're no longer sending most of our traffic to eyeball networks like a cable modem provider. We're sending most of our traffic to other cloud providers. And so what we decided to do was expand this. So in the past few months we've worked with a few storage companies—most notably Wasabi—which is a leading object storage company that offers kind of high performance S3-compatible object storage.

This is a very bespoke industry. Only a few people in the world offer high scale, unlimited usage object storage farms, yet what we decided to do was say “Can we make that a fully integrated experience?” Number one, cut the egress charges between our respective clouds, and number two, start to work on aligning some of the more thorny business issues. So we went out looking for some examples in the market of successful complex industries that have done this. And we arrived at a few, which I'd love to dive into if you're interested in hearing it.

Oh yeah. So you talked about the APIs. So there would be a common API interface to provide storage from within Packet, for example, but on the other side we will have something similar from the other provider—you mentioned Wasabi for example. Okay. So at the end of the month I as an end user get a single bill that summarizes the different services that I buy from different providers. Right?

Yeah. We call it ‘integrated but transparent.’ So the idea here is not that we're basically re-selling and trying to become the object storage provider, right? The idea here is that we want to make the developer experience and the enterprise purchasing process really consistent. And so there are some technical sides of that which must meet the developer experience: integrating Wasabi’s API into our Packet customer facing API. What does that really mean? That means let's make sure that you have common authentication. Let's make sure that our API clients and developer integrations quote “just work.” So can we simplify the life of the developer by providing a best of breed endpoint for object storage within the Packet interface?
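To make the developer experience Zach describes a bit more concrete: an S3-compatible object store such as Wasabi can already be reached from code running on bare metal with any standard S3 client. The sketch below is a minimal illustration assuming a generic Wasabi-style endpoint and placeholder credentials; it is not Packet's integrated API, where authentication would ideally be shared across providers.

```python
# Minimal sketch of using an S3-compatible object store (e.g. Wasabi) from a
# compute instance. Endpoint, credentials, and bucket name are placeholders,
# not Packet's integrated customer-facing API.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",  # assumed S3-compatible endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",      # in an integrated setup these would
    aws_secret_access_key="YOUR_SECRET_KEY",  # come from a common auth layer
)

# Store and fetch an object exactly as you would against any S3 API.
s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello from bare metal")
obj = s3.get_object(Bucket="example-bucket", Key="hello.txt")
print(obj["Body"].read())
```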

And similarly on the other side, there's complexities to that of course. It's not simple. Developers are used to complex API changes and whatnot, but we really want to make that a first class experience. The second thing is around business terms. You're gonna be facing Packet. We want to give you a bill so that way you don't have to have two bills and two contracts and two relationships and two GDPR statements and everything else along the way, but that requires us to form business arrangements.

And for us, that's the more thorny issue where we're breaking new ground: ‘How can we be consistent in our business arrangements and our policies around, for example, customer privacy and our SOC 2 audits and our GDPR compliance?’ Those are a little bit more thorny, and that's what we’re working through right now. And that's where we really had to look outside the box and say “Well, what other industries have really attacked this issue in a scalable way for services?” And the one that we came up with was actually the airline industry.

You know, if you take a flight on say United and you're part of Star Alliance, and you land in London and your flight gets canceled for your connection, they have the ability to book you through a different airline. They have integrated systems and aligned terms that work through a very complex process. But they can give you that experience even while accessing somebody else's service. And we think that there might be a similar way that could happen within the cloud, which is: ‘How could we give a Packet customer an end to end experience even if they're relying upon a delivery model from, say, an object storage provider like Wasabi?’

Right. And do you see an expansion of this partnership to competitive services as well? Because Wasabi is storage and you are the compute part, but maybe a competing storage system or even different compute providers could be in this alliance in the future.

Absolutely. I mean, that's the good thing about an alliance: what we're talking about here is more like the Better Business Bureau of the Internet. I live in New York City and I've signed many commercial leases, and if you've never done a commercial lease for an office in New York City, well, thank goodness. But what it does is use a New York City or New York State standard lease form. Literally every lease that you do, whether it's for 500 square feet or 50,000, uses the same legal document, and then you simply adjust things here and there. So this is a model for how we could work, and it's good for all landlords, even competing landlords, because that time spent negotiating and working through and paying lawyers and assigning different terms is actually just ‘fat’ in the system. It's not good for the end user; it doesn't get you faster to what you want to do, which is: have office space.

I think it's the same thing in the cloud, and we hope that this is really open. One of Packet’s core values is being community driven. I grew up with and benefited from a community driven Internet. We think that's really important, whether it's through open software or open standards—and this is very similar: how can we take best of breed cloud platforms or SaaS services and align them, because it's going to be good for all of us. Will we have to compete? Of course! On the merits of our service, not on the alignment of our legal documents. That should be a foundation.

Yes, I totally agree with you. But let's change the topic a little bit. The other day I saw a post on LinkedIn where you mentioned new hardware and some things about infrastructure compatibility for your infrastructure. First of all, let me start from the beginning of the story. You mentioned the Open19 Foundation. What is it, and what does it do?

The Open19 Foundation is a community driven foundation for the advancement of infrastructure solutions in the data center hardware world. To put it bluntly, Yuval Bachar, who founded it, built infrastructure at Facebook and built infrastructure at LinkedIn, and when he went to LinkedIn he got inspired to create what could be called a subscale hyperscale solution.

When you're at something like Facebook or Amazon and you're buying millions of servers, you have all the benefits in the world. You can literally build your data center around your server. But for every other enterprise, end user, or service provider in the world who's not in the top 10, you don't have that benefit. So you have to work on standards. There had been some other projects that worked to address some of the needs around hardware innovation, but they did so really for that top tier buyer, really only for the largest scale. What Open19 is doing is creating what I like to call the ATX case—if you're familiar with PCs in the 1990s—the ATX case of the rack. Why is it that way? Because if you look back at our history of innovating on PCs in the ‘80s, it was just an IBM dominated world.

And then suddenly there were some standards around the motherboard size: ATX, micro ATX, etc. What this did was allow anybody to innovate. You had a standard case size, and anybody could innovate on a motherboard and it would fit in the case, and you had a whole supply chain around shipping them and using them and cables that would go into them and all kinds of stuff. We actually don't have that in the data center. Every single server—even if it fits in a 19 inch rack—is different.

And so when you look at a rack solution where you need cables and switches and interconnects and PDUs and everything, it's different everywhere. What Open19 is out there to solve is: can we create a common infrastructure platform and allow people to innovate on what really matters—which is the computer, not the case, not the rails, not the power cables. Those things should be standardized and open. And so the specifications for everything that's been innovated on in Open19 are open and belong to the Foundation. They're not owned by any specific vendor, and they're worked on by a group of constituents who are members from either a vendor standpoint, an end user standpoint or an enterprise standpoint. And I'd love to talk through more of the benefits that you get from it. But basically it's allowing somebody like Packet to invest heavily in our hardware delivery model and bring those benefits to a wider user base beyond ourselves.

In fact, in that post you mentioned a microserver. That was a very, very intriguing piece of hardware, you know, getting those small servers. Can you go a little deeper on the architecture that you chose for these microservers?

Absolutely. One of the main goals of Packet is to make it developer friendly to deploy your infrastructure opinion wherever it needs to be, and that might be in one of our big data centers. It might be in somebody else's data center. It might even be in your edge location at your office or your venue or anything else like that. And so what we've been really focused on is how do we give people a delivery model for compute.

We've invested in Open19, and I'm really proud to be on its board, because we feel it is an industry standard that we can all get behind. But along with that, what we found is that there's been a real gap in the hardware profile. Computers today, if you look at the major vendors—whether that's through big OEMs or ODMs or whatnot—are almost exclusively built around the idea of a hyperscale data center: big, centralized, power hungry, dense VMs, lots of cores, lots of RAM, lots of customization. I'm going to call them ‘vertical scaling machines.’ And the challenge with this is, when you go to more distributed locations, you have a very small power footprint, and today's user expects to have many diverse machines.

If you look at modern cloud users, they don't want one big machine, they want a hundred smaller machines. They want to spread their workload around, use a scheduler, have quorum. This is very important, whether it's storage or any kind of scale out web application. What we found is that there's been a need for a small independent microserver, something with a low cost, low power footprint and medium to low specs.

And so we found a gap in the marketplace. There are a lot of embedded servers, or let me call them ‘embedded solutions,’ where you can get low end things, let's call it Raspberry Pi style, but not in a server form factor with the management tools that we need to be able to operate. So we worked to basically put four independent microservers, built off of the AMD EPYC 3000 (the embedded version of the AMD EPYC 7000 series), on four independent boards in a half width server sled. And what we had to do is do a little work on the mix: how could we make the network connectivity both low cost and high performance?

So we take four independent servers, use PCIe to basically bring those all back to a single QSFP network interface, and then channelize that out of the chassis. And what's cool about this is now we get this very low cost, high performance—it's a four core, four thread chip running at about 2.4GHz, but a very independent server with excellent networking. And we see that for edge applications, that's kind of the biggest driver: people are doing more and more with their network, and then they can move back to a big central cloud and use their kind of beefy, scale up, dual socket machines to do their data crunching or whatnot. So we're really excited; I think it's definitely going to change our profile. We've seen microservers be very popular in our portfolio, but this really takes it to a whole new level.

That's great. So you mentioned the CPU and the network, but you didn't mention anything about storage. How do you provide storage resources to these microservers?

Yeah. So we're including a small amount of M.2 flash, about 120GB or 150GB of local SSD on every node. And then what we're exploring and working on... we certainly have disaggregated storage options. The vast majority of our customers today use storage inline and then use something like a cloud service like Wasabi for object [storage], but there is definitely a need for high performance storage in a local environment.

So most of our work there has actually been around NVMe over fabrics, specifically around the newer advancements in kernel adoption of NVMe over TCP. So we don't have a product yet, but [in] our product roadmap we see a lot of that happening in 2019. I'm sure as a storage junkie you're well aware of all the products that are coming to market around NVMe over TCP or different fabrics.

So we're really excited: if we have a strong microserver with really good network (we're talking 10-25Gb with offloading; we partnered with Netronome on this specific server, but also work with all the other NIC providers), well then we have a great network fabric, and we really have the opportunity over standard Ethernet to connect NVMe devices. We're really excited about what that could mean from an economics, utilization and performance [standpoint].
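For readers curious what "connecting NVMe devices over standard Ethernet" looks like in practice, here is a rough sketch of attaching a remote NVMe namespace from a Linux host with nvme-cli (the nvme-tcp host driver ships with kernel 5.0 and later). The target address, port and NQN are hypothetical placeholders; this is not a Packet product, which Zach notes is still on the roadmap.

```python
# Rough sketch: attach a remote NVMe/TCP namespace from a Linux host using
# nvme-cli (requires the nvme-tcp kernel module and root privileges).
# The target address, port, and NQN below are hypothetical placeholders.
import subprocess

TARGET_ADDR = "10.0.0.5"                       # hypothetical NVMe/TCP target
TARGET_PORT = "4420"                           # conventional NVMe/TCP port
TARGET_NQN = "nqn.2019-01.example:nvme-pool"   # hypothetical subsystem NQN

def run(cmd):
    # Echo and run a command, raising an error if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["modprobe", "nvme-tcp"])                                                 # load the host driver
run(["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT])  # list target subsystems
run(["nvme", "connect", "-t", "tcp", "-n", TARGET_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])                                   # attach the namespace
run(["nvme", "list"])                                                         # now visible as /dev/nvmeXnY
```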

When will these microservers be available?

Well, we're working as hard as we can. The goal would be to have it out in late Q1 or early Q2 in all of our global cloud locations—which is 18 places currently around the world—and our emerging network of edge locations. So you know, the idea would be end of Q1 or early Q2. But you know, it's hardware, right? So I'm sure something we don't know will go wrong and we'll have to go fix it. So that's the target. I've got a sample on my desk, if that helps.

Send me a picture maybe...

It's not pretty but I've got a sample…

And maybe you can share a few links about Open19 and Packet so our listeners can check for themselves what Open19 is doing.

Absolutely, I'd love to. It's a really exciting part of the market. We're working closely with a whole collection of hardware vendors: ODMs like Flextronics, OEMs like HP and Supermicro, and then the entire infrastructure world including cabling vendors like Molex. The idea is to remove cost and effort from the deployment. The big difference I'd say to your listeners is that this is the first time, with Open19 (except by using, say, a proprietary blade chassis), where you could deploy your infrastructure costs at a different time than your compute costs. What I mean by that is we can build all the infrastructure solutions in an Open19 standard rack. You remove the PDUs, you remove the rails, you put in a mechanical shelf, which basically looks like a cage with little bricks in it. There are no active electronics there, and on the back is a blind mate connector for a channelized network cable and a power cable.

The cool thing is you remove the power supplies from the servers. How much does a good high efficiency power supply cost these days? A couple hundred bucks, right? And you have to have one or two of them per server. So right away we're removing hundreds of dollars from every server by having an efficient power shelf and not having to put high efficiency power supplies in every server.

Number two, you can build all of that infrastructure: full rack, high density solutions, great optical cabling, independent power with full battery backup—all of that for thousands of dollars, without putting any compute in the rack. And that is really cool, because then you can pre-build your infrastructure, and when you have more capacity needs and you need to add that new compute unit, or 10 of them, or 50 or 100, you just slot it in. There's no cabling to do. And that's incredible, especially if you're working at remote locations or you have a wide data center footprint where you don't get to drive down to the data center yourself anymore. What this is doing, I think, is really changing the deployment model for hardware.

Yeah. This could be a solution for enterprises too, especially if you have remote locations and edge computing needs.

And this is what enterprises look like today. The standard large scale enterprise has dozens of data centers. They are not big, but they have lots of them. And so we think that this is incredibly relevant to bring hyperscale economics and shared power, but without going to, like, a DC bus bar where suddenly you have to hire very special technicians to be able to go near your servers, or changing your rack design. This is a standard rack and standard infrastructure. But you're basically giving yourself a much more efficient way to deploy hyperscale technology—even if you just have five data centers with five racks each on one of your core corporate campuses or something. So we think it's very, very relevant to enterprises and service providers as well.

All right. You are ‘communitizing’ the advantages of Facebook and Amazon.

Yes what we're doing is making all that hardware innovation that has been built specifically for a hyperscaler and bringing it down a notch to people who buy 500 to 10,000 servers a year.

Amazing. OK very good. So what is the Open19 website link? 

Yes. So it's www.open19.org. You can go there and register as an individual or end user member for free. You can see summit presentations and a collection of the products from the different vendors who are already supporting, making and shipping Open19 compatible infrastructure today.

And then it's worthwhile saying that any existing standard 19 inch server—whether it's half width, full width, 1U or 2U—can be modified to fit in an Open19 brick. We've built it so that the dimensions are exactly the same as your standard 19 inch pizza boxes or 2Us or anything else like that. So if you're interested, get in touch with your server manufacturer and see how they can modify their solution, or work with one of the manufacturers to modify their solution into an Open19 form factor.

OK very good. Thank you very much again for your time today Zach and where can we stalk you on the Internet? Do you have a Twitter account or something like that to continue that conversation? 

Absolutely. So you can find me on Twitter. I'm @ZSmithNYC, @Packethost for my company, and I would love to take any questions or ideas or thoughts about what we're doing around microservers, how we think around bare metal compute, and if we can help you adopt Open19 in your own enterprise.

Great. OK. Thank you very much. And bye bye. 

Thanks Enrico.
