Inside Analysis

Flexibility in the Cloud: Customer-Defined Computing

In an interview with Bloor Group CEO Eric Kavanagh in mid-November 2014, CloudSigma CEO and Co-Founder Robert Jenkins explains how the company originated when Jenkins and his co-founder, Patrick Baillie, were experimenting with different public cloud technologies in 2008. While they both recognized the benefits of the “as-a-service” model, they also noticed some major limitations. Specifically, they felt that this approach to computing required giving up a lot of infrastructure control. Because they recognized that most companies have unique technology requirements, they decided to start CloudSigma to address those shortcomings in the public cloud market.


Eric Kavanagh: Ladies and gentlemen, hello and welcome back once again to Inside Analysis. This is Eric Kavanagh, and we’re talking today with Robert Jenkins, CEO and Co-founder of CloudSigma. He’s calling in from Bulgaria. Robert, welcome to Inside Analysis.

Robert Jenkins: Hi, it is good to be here today.

Eric Kavanagh: CloudSigma is a very interesting company. I met Michael Higgins, who is a senior manager with your team, at Oracle OpenWorld a couple years ago and was very impressed with his knowledge and understanding of the technologies that are out there in the space today. And he told me about CloudSigma, and I found it very interesting and compelling. My understanding is it’s basically private cloud infrastructure. Can you talk about what it is that you do at CloudSigma and where the idea came from?

Robert Jenkins: Yes. Along with my co-founder, Patrick Baillie, back in 2008, we started using the public cloud and experimenting with it as a new technology. And while there were some very clear benefits to infrastructure delivered in that new model, the sort of “as-a-service” model where the infrastructure is essentially outsourced rather than run in-house, what we found very quickly was that there were some negatives. Specifically, we were losing some things that we very much appreciated about the private environments we were involved with and managing at that time. My co-founder was working for a bank in Switzerland at the time.

What we found was that we were losing a lot of control over our infrastructure. We weren’t able to size the resources as precisely as we wanted. We didn’t have the same level of performance. We couldn’t quite get the same level of networking sophistication that was normal for most companies of any decent size. We also didn’t have the same level of control over the kind of software that we might want to use. We were sort of given a fixed menu.

And fundamentally, we think that computing is very heterogeneous. If you walk around a data center, you will see that almost every rack of servers is different, and that’s because they’ve been purchased by their owners to meet specific requirements. Every company is a little bit different, and we didn’t feel that the cloud somehow fundamentally changed that.

What we wanted to do was offer something analogous to a virtual data center. Customers could come and get the benefits of the public cloud, such as elasticity, the ability to manage equipment in all different geographies of the world from your own location, and transparency of cost. But at the same time, they could keep the things they like about a private environment: being able to control it and configure it, and to do so very accurately. That was the genesis of the idea and vision behind CloudSigma: to bring this sort of virtual data center approach to the public cloud.

Eric Kavanagh: I think that’s a great idea, because as I look to the cloud, I enjoy its benefits every day in all kinds of different ways. But like you, I’m not a big fan of constraints. I don’t like to be put into a box or pigeon-holed. Now that you have identified this opportunity in the marketplace, one of the questions is: how do you manage the needs of such a heterogeneous set of clients and technologies?

Robert Jenkins: That’s absolutely a challenge for any public cloud, and the good news is that a lot of technologies have developed over the last two to three years that make it a much more realistic proposition. The three areas that contribute to customers’ performance are networking, storage and compute. On the networking side we now have software-defined networking, which we use in our cloud. That allows us to very easily separate traffic and dynamically control the way the cloud’s network is working.

As you can imagine, we don’t really know what the traffic flows will be from one hour to the next within our cloud. We can’t tell your engineers that a certain rack is going to receive this type of traffic, because our customers are bringing infrastructure up and down dynamically all the time. Software-defined networking has gone a long way toward addressing that issue and making the network organic, so it can react to the requirements.

On the storage side, we actually use all-SSD storage. It can deal with the more random access profiles that, again, a multi-tenant environment like a public cloud generates. By doing that, we’re able to guarantee performance and allow customers to dial in the level of storage performance that they want.

On the compute side, through things like live migration and more advanced resource allocation algorithms, we’re able to manage the load the cloud puts on the physical CPUs and shift loads around on our own infrastructure. That lets us give a straight-line, predictable performance level to customers.

Eric Kavanagh: That’s really amazing stuff. If I can kind of simplify what you’re talking about, it’s basically the next version of what virtualization brought us several years ago. Now it’s been applied to other areas of the stack and the environment so you can have a thoroughly fluid, dynamic and virtual data center in order to be able to accommodate a whole range of different needs and spikes and valleys, right?

Robert Jenkins: Yes, absolutely. And that’s a work in progress in terms of becoming more refined. But our ultimate goal is to achieve something that we call customer-defined computing. You can imagine any virtual machine (VM) having something like a graphic equalizer associated with it, with dials for your CPU, storage and networking performance, and the customer is able to dial in that virtual machine for its specific requirements on a VM-by-VM basis.

So on one virtual machine you might need very high storage performance. You could dial that in. Of course, you pay a little bit more for your storage, but on the other hand, maybe you don’t need as much networking, so you can dial that back. The idea is you can communicate your requirements to us as a cloud provider in a qualitative way, not just a quantitative one.

At the moment, you can define a certain server size, but you don’t really have the ability to define the quality. And of course, when you buy something like a physical server, you get to choose those things when you buy the hardware. The idea that we’re moving toward is this customer-defined computing, which is driven by those sorts of technologies. It’s really trying to achieve in the virtual world the same sort of qualitative differences that you can achieve in the physical world. But of course, the challenge is that it’s all sitting on the same physical hardware.
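To make that “graphic equalizer” idea concrete, here is a minimal sketch of what a per-VM quality profile could look like. The field names, units and values are hypothetical illustrations for this article, not CloudSigma’s actual API.

```python
from dataclasses import dataclass

# Hypothetical per-VM "equalizer": each resource dimension gets its own
# dial. Field names and tiers are illustrative, not CloudSigma's real API.
@dataclass
class VMSpec:
    cpu_ghz: float       # total CPU clock allocated to the VM
    ram_gb: int          # RAM, sized independently of CPU
    storage_iops: int    # dialed-in storage performance target
    network_mbps: int    # dialed-in network bandwidth target

# Two VMs on the same cloud, tuned for very different workloads:
db_vm  = VMSpec(cpu_ghz=8.0, ram_gb=32, storage_iops=20_000, network_mbps=500)
web_vm = VMSpec(cpu_ghz=4.0, ram_gb=8,  storage_iops=2_000,  network_mbps=2_000)
```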

Eric Kavanagh: That’s very interesting. One question I’ve had for a long time about this model and similar models, and about the enterprise software licensing model itself, revolves around pricing, specifically pricing for software. And we see this whole dynamic changing. For example, Adobe has moved to monthly subscription charges instead of just selling you something like Photoshop, which is a huge change. It’s a fundamental change, and I think it’s kind of a straw in the wind for where things are going. How do you guys deal with the licensing costs of proprietary software, which is often built around the number of processors or amount of data? How do you handle that curveball?

Robert Jenkins: Just stepping back a second, and then I’ll jump into the software licensing, because it’s directly relevant. We built our billing system as a utility-style platform. When you get rid of fixed server sizes, drives and similar things, you can’t have a price list based on a large server or a medium server, because you have an almost infinite world of possibility in terms of sizing. We instead went to a utility-type approach where we ask: what are you actually consuming in terms of resources? So it’s very much like your electricity bill.

Your electricity company doesn’t need to know what fridge or microwave you have; it just cares about how much electricity you’re consuming. It’s very similar for us. We just look at the underlying resource consumption. By doing that, we’ve built a very flexible system with an abstraction layer around the resources that you run. It’s also a very customer-friendly model, because it means that you can size resources flexibly and get billed for them flexibly. You also get very accurate purchasing without having to pay for things you don’t use.
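As a rough sketch of the utility-style billing Jenkins describes, the logic below charges for measured resource consumption per interval rather than for a fixed server size. The resource names and rates are invented for illustration; they are not CloudSigma’s actual prices.

```python
# Illustrative per-unit rates for one 5-minute billing interval.
RATES = {
    "cpu_ghz": 0.0007,   # per GHz of CPU allocated
    "ram_gb":  0.0004,   # per GB of RAM allocated
    "ssd_gb":  0.00002,  # per GB of SSD provisioned
    "net_gb":  0.005,    # per GB transferred during the interval
}

def interval_charge(usage: dict) -> float:
    """Bill one interval purely from measured resource consumption."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

# A VM running with 8 GHz CPU, 32 GB RAM, 200 GB SSD and 1 GB of traffic:
usage = {"cpu_ghz": 8, "ram_gb": 32, "ssd_gb": 200, "net_gb": 1}
print(f"${interval_charge(usage):.4f}")  # $0.0274
```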

When it came to software licensing, we wanted a similar approach. One of the challenges with software is that there are many different software companies and they all have their own approaches. So we built a licensing framework with all of those degrees of freedom built in, layered on our existing pricing and billing model with the same flexibility.

For example, we have Microsoft Datacenter licensing for Windows Server, which runs on a per-core basis, and you can buy subscriptions per core for those licenses. If you have a VM that’s sized with 10 cores, you can go and buy 10 cores of that license. And if you have two such servers, you can buy 20, and so on.

You’re able to manage those licenses, but we’re also able to handle many other reference points, including a per-install model; we can even do per gigabyte of RAM or per processor. There are many, many different models that we can support. And we expose those as subscriptions that you can buy ahead of time, or you can actually use them on burst if the licensing company allows you to do that.

In the case of Microsoft, you can actually spin up Windows Server, which is obviously proprietary software that you have to pay a license for, and if you don’t buy a license in advance, we will bill you on burst in five-minute rolling segments. You don’t really get much more flexible than that. This means you can use your Windows environment in an elastic way, so it can react to the load of the service that you’re running on Windows Server.
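A minimal sketch of that burst model, assuming a made-up per-core rate: every started five-minute segment of runtime is billed, so a VM with no prepaid subscription still carries a valid, pay-as-you-go license.

```python
import math

# Hypothetical burst license rate; real pricing differs.
BURST_RATE_PER_CORE_PER_5MIN = 0.002

def burst_license_cost(runtime_minutes: float, cores: int) -> float:
    """Charge every started 5-minute segment for each licensed core."""
    segments = math.ceil(runtime_minutes / 5)
    return segments * cores * BURST_RATE_PER_CORE_PER_5MIN

# A 10-core Windows Server VM that ran for 47 minutes:
print(burst_license_cost(47, 10))  # 10 segments x 10 cores x rate = 0.2
```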

It’s important to have a good licensing system because it allows customers to run their services in a different way and gives them more choice. So I think the answer is that the cloud provides an abstraction layer, and it’s important to have a good framework that essentially gives the software licensing company the ability to reflect what its product is and how it wants to bill.

That said, there are of course better, more cloud-friendly ways to build software and license it. We definitely see the industry moving in that direction. But on the other hand, it’s still important to have the ability to accommodate the legacy-type pricing models that some of these companies still use.

Eric Kavanagh: I think that’s a really clever strategy. It seems to me that it’s going to provide a nice middle ground or transitional period through which we can move into much more of a sensible environment. Because from a CIO’s perspective, it must be incredibly frustrating to navigate through a whole range of different pricing models and service level agreements.

I believe that a lot of that stuff doesn’t really get managed; it just gets billed. And I think that sooner or later, we’re going to have a situation where large organizations are going to realize they cannot be wasting money on licenses to software that they’re not using. Do you think that’s a trend that’s coming down the pike right now?

Robert Jenkins: Yes, the cloud definitely delivers transparency around cost. In our case, each account has a running balance and a transaction log with a five-minute billing interval. You can literally go and see the costs coming in. When you have quite a few servers and you see every five minutes that you’re being charged a few dollars for a license you realize you don’t really need, it puts it directly in your face that you’re wasting money. You’re literally throwing a $10 bill out the window every five minutes.

That’s one way people waste money, and the transparency shines a light on all these things. That’s a challenge for the software industry to address. In the cloud you have elasticity, and you can take a lot of efficiency measures like this that save you money.

I’ll give you an example of why flexibility matters in the cloud and why we built our product around that. Let’s use the example of Oracle Database 11g, a very typical enterprise database that many customers use all over the world. It has a per-core license. If you’re in a cloud that has fixed virtual machine sizes, even a high-RAM instance comes with a certain amount of CPU. When you get RAM-limited on that machine, which is normally what happens as a database grows, you want to upgrade that virtual machine with a few more gigabytes of RAM.

In the case of most clouds, you have to re-provision onto the next size up virtual machine, which just happens to have additional CPU cores. Even though the RAM is what you actually needed, you still get more CPU. So now you’re paying for a machine with CPU you didn’t need, and you’re paying additional license fees on those extra cores. There are a lot of inefficiencies that can creep in with some of these less flexible models.

In our case, you would simply resize the virtual machine by increasing the RAM and leaving the CPU alone, and you would only pay for the additional RAM. There would be no additional license fees. So it’s important to consider how you’re paying for your licenses when you’re looking at who’s going to provide your infrastructure, because there’s a direct link there. If you’re tied to a much more inflexible provisioning model, that can cost you a lot of money on the licensing side.
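A back-of-envelope comparison of the two provisioning models, with a purely illustrative per-core license price, shows where the waste comes from:

```python
# Hypothetical yearly per-core database license fee (illustrative only).
LICENSE_PER_CORE = 1_000

def yearly_license(cores: int) -> int:
    return cores * LICENSE_PER_CORE

# Fixed-size cloud: a RAM-limited 4-core/16 GB instance must be
# re-provisioned to the next size up, say 8 cores / 32 GB.
fixed_cloud = yearly_license(8)

# Flexible cloud: resize the same VM to 32 GB RAM, leaving 4 cores alone.
flexible_cloud = yearly_license(4)

print(fixed_cloud - flexible_cloud)  # 4000: fees for cores you never needed
```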

In many cases, the licenses are more expensive than the infrastructure. The money that people are paying to Oracle and Microsoft exceeds the cost of the actual resources they’re using underneath. So moving to the cloud gains you nothing if you’re actually becoming less efficient, or if you’re not capturing the efficiencies you could be gaining around software licensing.

Eric Kavanagh: I have to think that you’re learning a tremendous amount as you bring in new clients, specifically around the networking side of the equation with this software-defined networking. Can you then analyze what happens when the peaks and valleys occur and identify new ways to streamline those processes?

Robert Jenkins: Yes, absolutely. The challenge for us is that we use the same infrastructure for different people doing different things. We also have quite demanding customers. Literally this morning, I was speaking with a customer who was doing some networking between their virtual machines, and they told me that they’d only been able to achieve 3.6 gigabits. I know from our benchmarking of other clouds that they would not achieve even 1 gigabit on other platforms, but we were like, “OK, OK.” So now we’re working with them and our operations team to see how we can increase that even more; often it’s settings within their own operating system.

We have a very collaborative approach. That’s the job of Michael Higgins, the gentleman you mentioned earlier. He works with customers, because when they move to the cloud they have new opportunities in terms of what they’re doing. I mentioned at the beginning that the idea of our cloud is that you don’t need to change what you’re doing. And while you don’t, that doesn’t mean you can’t benefit from doing something differently. The idea is to ease the transition, but then over time, as you become more comfortable and have the resources, you can start to look at ways of improving how you’re doing things to take advantage of some of the things the cloud can do.

What we do is have people who are there as a free resource for our customers. They can understand what it is you’re trying to achieve and then reflect that back in terms of our own knowledge of our systems and the cloud. When you’re going to something like the public cloud, you’re outsourcing that infrastructure piece. For us, it’s very important that the client has lines of communication so they can understand and achieve the things they want to do within the platform, and not just face a sort of blank wall where they can’t really interact with the company itself and everything is presented as a pre-packaged product. That’s very much part of what we do.

You’re also right that as we work with different customers and different needs, we’re always learning about new requirements. I’m interested in the end use cases, and one of the things that makes my job very interesting is that I get to meet a very eclectic group of companies. From CERN and their particle physics, to people doing Big Data, to people doing just web services, it’s really interesting to discover all the things that happen. Sometimes when I get to the data center I stare at our racks and can only imagine what’s actually going on inside, because I know some of the customers we have and they’re all stacked within that same infrastructure. It’s kind of mind-blowing to think that all of that is happening in such a small physical space.

Eric Kavanagh: Let’s wrap up with some discussion about some of the coolest things that you guys are doing. I know there are some projects where you’re dealing with information about the stars; can you talk about that?

Robert Jenkins: Yes, absolutely. This came about through our initial relationship with a consortium created by a number of scientific institutions in Europe; we’re also engaged with institutions all over the world, including the U.S. It concerns what’s called Helix Nebula. CERN, which I think most people know; the European Space Agency, ESA for short; and the European Molecular Biology Laboratory in Germany, a top-five genetic research facility, all came together to create Helix Nebula because they share these challenges around data and around computing.

As we started to engage with them, and in particular the European Space Agency, what we realized is that one of the challenges they’d been addressing was access. They wanted to give people access to all this data. The European Space Agency has some of the world’s most advanced Earth-observation satellites, satellites that look down onto the Earth. They also have satellites that look up (we saw the comet landing this week), and a lot of satellites monitoring the Earth. This is hugely important for climate change, agriculture and many other industries.

Some of their data is what they call data rain: all this data coming down from a satellite, and it can literally be terabytes a day. One satellite called Sentinel-1, launched earlier this year for the European Commission, covers the Earth every nine days. It has a very small aperture, so it covers the surface in lots of little strips, every 45 minutes. If you can imagine peeling an apple, it does that every nine days and produces this time-series data, which is incredibly powerful. But every day, this single satellite produces one terabyte of data. Imagine having more satellites, and data not just from the European Space Agency but from NASA or from UNESCO, which has an oceanographic survey project, and many others; it starts to get really big.
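For a sense of scale, here is a quick back-of-envelope calculation; the fleet size and mission length are assumptions for illustration, not figures from the interview.

```python
# One satellite producing ~1 TB/day, extrapolated to a small hypothetical
# fleet over a few years of operation.
tb_per_day, satellites, years = 1, 5, 3

total_tb = tb_per_day * 365 * years * satellites
print(f"{total_tb} TB ≈ {total_tb / 1024:.1f} PB")  # 5475 TB ≈ 5.3 PB
```

Even under these modest assumptions, the archive lands in the petabyte range Jenkins mentions next.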

Even if you theoretically could get access to it, unless you happen to have a few petabytes of storage lying around, you really can’t use it. Since even the biggest institutions don’t tend to keep that amount of space, what we’ve been working on with the European Space Agency and others is the concept of using the public cloud as a sort of coordination mechanism. Rather than everyone having to store their own copy of that data, we can put one copy of the data in our cloud and give everyone read access to it, and then you don’t actually need to keep a copy.

The great thing is you can use that data alongside many different sources, so you can combine the European Space Agency data with other data and build new services. Because it’s a public cloud, you have the computing resources on hand to do that. This is the concept of what we call a data ecosystem, and we actually have a number of active projects doing this. The idea is that you can build all these very interesting services on top of this really great data coming from the satellites.

I can give you a couple of examples, because it’s extremely eclectic but also very valuable. With the Sentinel-1 data, every nine days you have the entire Earth’s surface recorded to five centimeters, a couple of inches of granularity across the whole surface. You could use that to measure the volume of the Greenland ice sheet, and you could do it every nine days, so you would be able to tell how much ice is melting or forming at any one time.

That’s pretty important if you are interested in understanding the rate of climate change, or the lack of it, depending on your opinion. That data is there, sitting on servers as the European Space Agency receives it. What we’re trying to do as a public cloud is get that data out to people, scientists and industry so they can actually leverage it.

You can also have monitoring for commercial purposes, for agriculture, for pipelines or anything else. What we’re trying to do is unleash the value from these public data sets, which have become too big for individuals or even companies to handle themselves, and combine the concept of public cloud with these data sets to bring them together. We’re able to subsidize the storage of those data sets because of the other business that can be generated around that. It’s a sort of win-win, risk-sharing type model with these institutions.

Eric Kavanagh: I think it’s a fantastic idea and you are really on to something. CloudSigma.com is where someone can go for more information. Can they also find more info about these data ecosystems you’re talking about?

Robert Jenkins: Yes, our blog is the best place. There’s a blog section at CloudSigma.com, or they can go directly to CloudSigma.com/Blog. We post regularly about the data ecosystems. People can also feel free to contact us if they have a specific interest in the data. Because we liaise with the data scientists at the European Space Agency and other institutions, we’re always interested in different use cases, so we can provide more than just straight access. We like to put people in touch with each other and help them with whatever they’re doing in terms of leveraging this data.

Eric Kavanagh: That’s fantastic, congratulations. You are really on to something. Folks, we’ve been talking to Robert Jenkins, CEO and Co-founder of CloudSigma. Thanks for doing the show!

Robert Jenkins: It was my pleasure, thank you.

