Bloor Group co-founders Eric Kavanagh and Robin Bloor sat down for a chat on February 7, 2015 to discuss the Linux operating system and the company responsible for the success of Linux in the enterprise: Red Hat.
Eric Kavanagh: Ladies and gentlemen, hello and welcome back, once again, to Inside Analysis, the eponymous show here at The Bloor Group and InsideAnalysis.com. I’m talking today with our Chief Analyst about the foundations of enterprise software today, which is really pretty interesting stuff. We’re going to talk about Linux and the standard bearer for enterprise Linux, a company called Red Hat. First of all, Dr. Bloor, welcome to the show!
Dr. Robin Bloor: Good to talk to you, Eric.
Eric Kavanagh: Absolutely. I remember learning about Linux and Red Hat a good number of years ago. When I started digging into some research about Linux, it dawned on me very quickly what a big deal it was to have this alternate operating system to compete with the existing ones back then, mainly Microsoft. Of course, the investment that IBM put into Linux to harden it, to make it an enterprise-ready platform, worked, because Linux is everywhere. Even these new Hadoop vendors are using Linux. Of course Hortonworks has their Windows platform, but the original Hadoop was on Linux, it was designed for Linux, and most enterprise applications run on Linux. I’d have to say that experiment was pretty successful, right?
Dr. Robin Bloor: Oh right, indeed. Well, if you look at what actually happened, it’s kind of an interesting picture. Linux was invented in the early 1990s by Linus Torvalds. What changed the industry was that Microsoft was growing at a tremendous rate, and IBM at the time had the AS/400 platform, it had AIX and it had the mainframe.
Microsoft was making Windows more and more powerful, capable of running on larger and larger servers and in larger environments, and all of the software development energy had gone to Windows. Then there was this thing called Linux rising up, and it was this open source thing. Well, the guys at IBM, if what was told to me is correct, sat around and thought, “unless we find a platform where software will be developed that runs on our boxes, it’s all going to go to Microsoft.”
That’s the story of the IBM investment in Linux, and I think they did exactly the right thing. They could’ve tried to own Linux, but I think if they had, it would not have been successful. The world had already, if you like, rejected the OS/2 that IBM had owned, so you know, there was Linux. IBM’s investment in it probably created Red Hat. It was, I think, 1998 that Red Hat was formed, and it was really on the back of two ideas. One was that their coders would continue to build on Linux, but they’d harden it for the enterprise. The other was that the revenue model would be the original open source model: they would offer the product for free and get their revenue out of support.
From a business perspective, it was a very courageous thing to do, because no one had ever demonstrated that that could work. Insofar as there’d ever been an open source success, there was always something else attached to it that made it successful. The Firefox browser, for instance, had deals with the search engine people in order to generate revenue. Well, Linux never had anything like that it could guarantee would generate revenue. They created a company based on open source with the hope that it would grow, and if you look at the market now, Red Hat is worth $12 billion. That’s not to be sniffed at. That’s a very, very serious software company, especially when you think that very little of that is actually license revenue.
Eric Kavanagh: Wow. I remember, as someone who was not a professional programmer – though I coded back in my early teens, wrote a couple of programs for fun, and learned HTML and other basic kinds of programming languages – just looking at the number of Windows operating systems available and thinking to myself, that’s a problem.
I think a lot of enterprise software companies were finding that these new Windows editions would come out and there would be changes and they would have to make changes to understand the new operating systems. I think that’s what really opened the door for Linux to serve as a very viable foundation for enterprise computing. Does that make sense?
Dr. Robin Bloor: I don’t want to be unkind to Microsoft, but I think Microsoft could’ve done a better job with the various operating systems, because at a certain point in time it was fairly confusing. There were a number of things that Microsoft did that were going to act as a drag upon Windows. I mean the intention to always have backward compatibility – which is a good intention, I’m not criticizing that – but it actually slowed the development of Windows down to the point where eventually Windows had to start behaving in a proprietary manner, so that Microsoft was building its own database on Windows as well as its own transaction managers on Windows and its own middleware on Windows. It started to become not an operating system for general use, but much more an operating system that you adopted if you decided that the Microsoft technology that could go with it was exactly what you needed.
A lot of people did, I have to say, but you see, it’s a very different perspective to Linux and the commodity server situation. Windows wasn’t Linux’s only competitor; there was Solaris as well. For a period of time, Solaris was the dominant Unix operating system. I remember a time when people didn’t even think about who they were going to buy Unix hardware from; they just bought it from Sun because it was going to be the richest environment.
Well, Linux eventually saw Solaris off, but the thing that was quite compelling about Linux is that you could load it on virtually anything. It’s the most portable operating system that’s ever been written. It had a very small footprint. You could load it with very little memory and without a very powerful CPU, on any kind of old PC you’d got. You could use old PCs as Linux servers if you really wanted to. Then there was the fact that the whole of the open source movement, which had been pretty rudderless at the time, fell in behind Linux, so everybody in the open source movement for a period of time was developing everything for Linux.
Dr. Robin Bloor: It just became the natural platform, and that’s before Red Hat did a lot of stuff. I mean you’ve got to credit Red Hat with actually guiding the development of enterprise Linux. Linux on a single device – you can probably take any particular distribution that you like the name of to use for that, but Linux for the enterprise, pretty much Red Hat has it.
Eric Kavanagh: You bring up a really good point, which is this whole importance of having a critical mass of developers working for you. Of course, that’s what Microsoft had with Windows for more than a decade. I mean everyone who would write applications would write for Windows because they were the dominant force. I guess it was in maybe the early to mid 1990s, you knew that 85%, 95% of the operating systems out there were Windows. It was a de facto standard, which is a good thing, because now everyone knows what to worry about and where to focus their attention.
That’s why I think Windows maybe started to get a bit too fragmented in its direction and opened the door for Linux, as you suggest, and the Mac OS was still around back then. Of course Mac almost died and was I guess saved by Microsoft, which is one of the most amazing stories I’ve heard in this industry. The point is if you have this critical mass of developers focusing on your platform, you are going to succeed. It seems to me that Linux, largely thanks to Red Hat, has achieved that for the enterprise software world, right?
Dr. Robin Bloor: Yes, and it is kind of curious. If you were to talk about it as being a battle just between Microsoft and Red Hat – and it never was, but if you talked about it that way – you would have to say that on the PC, Microsoft won hands down. I mean Linux never got much of a foothold on the PC. In actual fact, the only thing that did rise up and start to challenge Microsoft on the PC was the Apple OS X. In the server world, it’s quite clear that Linux won.
I think that that’s simply because of the business model that works for Linux, based upon support; you really can’t sell support to individuals the same way that you can to corporations. You’re not buying support just because you want to ring up and ask questions. You’re buying support because you start to do something, and all of a sudden you find yourself in trouble, and you want to talk to someone who really knows the code. The kind of support that Red Hat gives isn’t the same kind of support that you get with other software, because there’s a fair amount of consultancy and shepherding involved – because it’s open source and because you can mess with the source.
That’s different to what would happen if you were seeking help from, let’s say, Oracle. You want the database to do something and it’s not doing it. Well, they’re not going to alter the code base for you or suggest fiddling with the code base if that’s the problem. They’re either going to say, “Try to do it another way,” or they’re going to say, “We’ll try to include that in the next release.” It’s a different kind of support.
Eric Kavanagh: So the flexibility you get by going the open source route is the key, right?
Dr. Robin Bloor: Oh right, yes.
Eric Kavanagh: And that’s what organizations need.
Dr. Robin Bloor: When people out there who use the software have good ideas, they just contribute them back to the code base. The customers, instead of merely making suggestions as they would with proprietary software, suddenly become collaborators in the direction of the software. It’s a different model.
Eric Kavanagh: That’s exactly right. If you look at the enterprise, well the one word that jumps to mind is “heterogeneous.” Every large company has all these different applications and all these different business needs. That’s why you want to have agility at the foundation and at that layer above the foundation, which is broadly called middleware. Of course open source has been a big part of the middleware movement for quite some time with JBoss for example, and Red Hat is very active in that space too, right?
Dr. Robin Bloor: Oh yes. When you’re talking about enterprise Linux, you’re talking about an environment that you can depend on that spans maybe hundreds of servers – it could span thousands of servers. There are not many companies that actually have that number of servers, but it certainly could span hundreds of servers. You’re actually looking not just at the operating environment that exists on a given server, you’re now looking at the whole resource base.
A couple of things start to walk into your vision. First of all, those servers, if you actually look at what’s happened to servers, are enormously powerful things now. A commodity server with the latest Intel chips on it – let’s say it’s got four of the latest Intel chips – is a bit like a hundred servers from a few years ago. It’s a remarkable amount of power. Well, you have to be able to create virtual machines in order to run small applications on a box like that. The first question is, can you divide the resource base on a given server? The second question is, of course, if you’re going to have a lot of these servers, can you make the application span from server to server in such a way that the latency doesn’t kill you?
That’s what middleware is for, primarily. I mean, middleware came into existence at the moment that client/server occurred, because there was a need to connect the client to the server, especially with SQL requests. It just got larger and larger, because instead of building two-tier client/server systems, you now have multi-tier systems. About ten years ago, service-oriented architecture (SOA) stepped forward, and that’s just like a web of tiers. You know, there isn’t any limit in theory, but there is the fact that beyond a point it’s simply not going to work. If you’re connecting ten different processes together to make something happen, and you have anything except really good low latency between them, it’s just not going to work as an application. It’s just going to be too slow.
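The latency arithmetic behind that point can be sketched with a few lines of Python. The function and every number below are hypothetical, purely to illustrate why a chain of ten loosely coupled services behaves so differently from two-tier client/server:

```python
# Hypothetical illustration: how per-hop latency compounds in a
# multi-tier (SOA-style) call chain. All numbers are made up.

def chain_latency(hops, per_hop_ms, service_ms):
    """Total time for a request that traverses `hops` network links,
    each adding `per_hop_ms` of latency plus `service_ms` of
    processing at the service behind it."""
    return hops * (per_hop_ms + service_ms)

# A two-tier client/server call over a fast LAN:
two_tier = chain_latency(hops=2, per_hop_ms=0.5, service_ms=5)

# Ten loosely coupled services over a slower network:
ten_tier = chain_latency(hops=10, per_hop_ms=20, service_ms=5)

print(two_tier)   # 11.0 (ms)
print(ten_tier)   # 250.0 (ms)
```

The point is not the exact figures but the shape: every extra tier multiplies, so anything except really good low latency between the processes makes the whole application too slow.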
Eric Kavanagh: Right. Of course I remember researching SOA in 2005 and 2006, and I thought to myself, this really sounds like it’s going to pose some issues for the traditional enterprise software licensing approach. With SOA, if you have what you would call a pure service-oriented architecture, you should be able to rip and replace services fairly easily. That’s the whole idea if you want a much more agile environment, where you do not have so many of these hard-coded connections between different bits of functionality. You would rather have this “loose coupling,” as they would call it.
The issue is that if you want really top-line performance, if you want thoroughly hardened enterprise software, you want to create hard connections. But if you can be a little bit loose about things, that gives you the flexibility to be more dynamic in terms of what your environment does. Is that a fair assessment?
Dr. Robin Bloor: Well, yes, it is, though it’s a little more complicated than that. The original SOA links were created with loose coupling and were therefore likely to have higher latency associated with them. In actual fact, you can be clever at the physical level by loose coupling in the first instance and then dynamically hard coupling as soon as it goes into action. There were architectures that came in that did that at a certain point in time. There were certain things you could do to get around it, but you were adopting the whole philosophy that we will build components and clip them together in order to make it work.
That was actually causing other issues, and that’s possibly worth bringing in here. I mean, one of the contributions that Red Hat has made is that they didn’t just take Linux and grow the code base; they engineered it for scalability and for high availability. There are lots of add-ons they put in that you actually need if you’re going to make SOA work. What happens if one of the components that is crucial to what you’re doing falls over? How is it going to come back into existence? Where’s the architecture that does that? The local application that became the component could be put back up in respect of the local use of that service, but the distributed use of that service – it was never built to recover that.
You actually had a whole series of architectural issues that rose up around SOA, and obviously you needed the operating environment to address them. It didn’t just happen because Red Hat did that stuff; they also built a fair amount of system management capability that enabled you to do this.
Eric Kavanagh: You don’t really hear about SOA anymore, but that doesn’t mean it went away. It seems to me in a very practical way that what happened is that the industry just kind of adopted SOA as a de facto standard. Now the best practices that were evangelized through the SOA movement have simply been absorbed into the manner in which applications are run. A big part of that is cloud computing, right?
Dr. Robin Bloor: Well, yes. A number of things happened. One of the things that happened is that the original attempt to make the interfaces work, which was web services, was superseded by RESTful connections. That was a better approach – it made it easier to build systems that would not have problems associated with them. It didn’t help you much with the systems that were already in existence, of course, but that’s the way it is in the industry.
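Part of why RESTful interfaces felt lighter than SOAP-style web services is visible just by comparing payloads. The sketch below is hypothetical – the operation name, host name, and namespace URL are invented – and contrasts the same request as a SOAP envelope versus the REST style, where the verb and resource live in the HTTP request line itself:

```python
import json

# A made-up SOAP-style request: the operation and its argument are
# wrapped in an XML envelope that every call must carry.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetServerStatus xmlns="http://example.com/ops">
      <HostName>web01</HostName>
    </GetServerStatus>
  </soap:Body>
</soap:Envelope>"""

# The RESTful equivalent: the verb and resource go in the request line
# (e.g. GET /servers/web01/status), so the body can be tiny or empty.
rest_request = json.dumps({"host": "web01"})

print(len(soap_request), len(rest_request))
```

Less envelope per call means less parsing, less bandwidth, and simpler clients – one reason REST made it easier to build systems without the problems that dogged the earlier interfaces.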
Then you did get the emergence of the cloud, and the cloud is actually very problematic. There are a lot of positive aspects to the cloud, whether that’s public or private, but the difficulty with the idea of cloud is that you should be able to move data and you should be able to move processes almost with impunity. In other words, you’ve got to have a really virtual environment. Well, if you’re going to do that, and you’re going to do that with agility … Red Hat’s got KVM as its virtual machine environment. When I did the research on that a long time ago, it looked to me like the best thing out there for doing that, you know, ahead of VMware.
The fact that you can create virtual machines still means that if you’re going to have connections between those virtual machines – or between what’s running on those virtual machines and other things – you’ve got to manage the virtual network. Instead of managing one network, you’re now managing two. You’re managing the physical network and the virtual network, and you’ve got the power to do that, but it requires that you build really robust software. I mean, there’s no way that you can get away from that.
Eric Kavanagh: Virtualization, it seems to me, has taken the industry by storm now. We’ve seen the mechanism of action, of virtualization, start to penetrate areas like networking, for example, and storage. This is where people talk about things like software-defined storage, where you have a much more dynamic environment as opposed to the rather static and cumbersome environment of old where you had these multiple tiers where tape is on the low end, and maybe spinning disk, and then memory and so forth. You get these tiered structures, but still rather brittle. Whereas with software-defined storage and software-defined networking, you’re moving in this direction of being much more dynamic and being able to handle peaks and valleys much more efficiently. Is that right?
Dr. Robin Bloor: Yes. You know, Red Hat has the goods in storage as well. It can give you a completely virtual storage layer if you want. But the issue that the industry had to overcome above and beyond anything else, is almost what I articulated before. There’s a physical layer and there’s a virtual layer, and the two actually have to map to each other. It’s no good providing software with the ability to look at every place that you could actually store stuff and put whatever it wants anywhere, without you also being aware of what takes time to get to and what can be gotten to fast.
You’ve got your virtual network, which allows you to treat the whole resource base as if it were one storage thing. You’ve got to have some smarts to make sure that stuff that’s required quickly is not put in a place where it can’t be accessed quickly. A lot of smarts go into making that virtualization. It’s not a trivially achieved thing.
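Those placement smarts can be caricatured as a simple tier-selection policy. This is a toy sketch, not Red Hat’s actual storage logic; the tier names and access-frequency thresholds are invented for illustration:

```python
# Toy sketch of storage tiering: put frequently accessed ("hot") data
# on fast media and rarely accessed ("cold") data on slow media.
# Tier names and thresholds are invented for illustration.

TIERS = [
    # (tier name, minimum accesses per day to qualify)
    ("memory", 1000),
    ("ssd", 100),
    ("disk", 1),
    ("archive", 0),
]

def place(accesses_per_day):
    """Return the fastest tier whose threshold this object meets."""
    for name, threshold in TIERS:
        if accesses_per_day >= threshold:
            return name
    return "archive"

print(place(5000))  # memory
print(place(250))   # ssd
print(place(3))     # disk
print(place(0))     # archive
```

A real virtualized storage layer has to do this continuously and transparently, migrating data as access patterns change, which is why it is not a trivially achieved thing.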
Eric Kavanagh: It really is affording companies the ability to tackle larger challenges and not worry so much about the infrastructure costs, because when you have a more dynamic environment you’re able to be a bit more confident that some spike in activity isn’t going to crash your system or some valley in activity isn’t going to crash your budget. Right? Aren’t those the two big concerns that you have with a more static environment as opposed to a software-defined environment?
Dr. Robin Bloor: Well I think that’s exactly right. I haven’t done the analysis, but I was very impressed that you put it that way. Let’s just talk about companies that already have a data center, because companies that haven’t are in a completely different situation. Ones that already have are gradually going to this three-stack kind of situation where there is private cloud, public cloud, hybrid cloud.
There may be very good reasons to keep things in house and there are applications you’re never going to let out of the data center, at least not in this era.
Then there’s the hybrid approach, which is how you get away from the problem that you had when your data center ran out of space and cost you a lot of money. Building a whole new data center costs millions. Just because someone wants another application that’s rather important to the business, you don’t want that to be the one that breaks the data center – the last straw that breaks the camel’s back. Well, a hybrid cloud does away with that.
That was a crisis in about 2005, 2006. A lot of companies were running out of data center space. Well, the first thing was to do the private cloud inside, to deploy things a lot faster, and then the move was to put some stuff out to the public cloud, especially software-as-a-service things that are easy to take on and aren’t critical to the business. Then the hybrid cloud suddenly makes it a very interesting dance, because now when you’ve got excess load, it isn’t you that has it; it’s the cloud that has it. You can orchestrate your software to behave that way, but that’s not easy.
It is hard and again, that’s a space where Red Hat has chops.
Eric Kavanagh: Let’s maybe take this as our final topic of the call. It seems to me that hybrid cloud really is the future of enterprise computing, and I just have this inkling that at some point in the future there is going to be, I think, a fairly significant cloud backlash. I think that’s going to happen because the cloud vendors are going to get so big, and any time a vendor gets really big, they start to get a bit slower in terms of how they deal with things. They prefer certain clients over other clients just because of relationships, and there are bureaucratic and political issues that come into play.
It seems to me the companies that really master the hybrid cloud environment are going to be able to respond effectively when some of that starts to happen and essentially take more stuff back on premises using a lot of these virtualization technologies that we’re talking about. Does it make sense that hybrid really is the ultimate avenue forward because it gives you the ability to either go further into the cloud or pull back and retrench on premises?
Dr. Robin Bloor: I would say that you’re probably right. Here’s what I would do if I were in the situation of being a company with a very large data center, or even several data centers: I would sit on hybrid cloud for a long time, taking measurements. There are a number of things that are out of your control when they’re in the public cloud, and I would just be taking measurements and measuring the cost at the same time. There are a lot of things that change. Our industry is one that’s changed again and again and again.
We have no idea what’s going to happen on the CPU. HP is out there in the wings with its memristor-based memory technology. When that is actually pushed into computing, we’ve got no idea what the impact will be, because nobody’s ever done it. You can try to predict the impact, in the sense that a number of smart people who know the field can get together and take a shot at it, but they nearly always get it wrong. The smartest people in the UK were brought together at one point, sometime in the ’60s, and they were told to look at computing and work out what its destiny was. They came up with the idea that in the future the industry would gradually taper off and that there would only be five computers needed for the whole world. Well, they were wrong.
Eric Kavanagh: That’s funny.
Dr. Robin Bloor: And they really were smart people. Maybe the industry looked like that at the time. Well maybe the industry looks right now like everything’s going to the cloud, but it isn’t necessarily the case. Some development at the level of hardware or at the level of operating software could change it so significantly that the cloud doesn’t look like a good idea anymore. You might as well all take it all home.
Eric Kavanagh: Right. I tell you what, folks, it is a wonderful time to be in this business. For those who want to learn more about what’s going on in this space, we’re going to do our own conference in Austin on May 15th. Check it out – it’s called DisrupTech, that’s D-I-S-R-U-P-T-E-C-H – because we’re talking all about the disruption occurring in the technology space these days and what you and your organization can do to take advantage of all that. Red Hat is going to be our platinum sponsor for that show, so they’ll be on hand to give you some more information about all that. Big thank you to you, Dr. Bloor, for your perspective. It’s always nice talking to you. This has been another episode of Inside Analysis. Thanks a lot, folks.