Inside Analysis

HPE’s Haven at the Big Data Conference

Last week was HPE’s Big Data Conference. It was well worth attending, as it turned out, and not just for conversations with HPE engineers and the plethora of Big Data case study presentations. HPE had scheduled two significant announcements for the show. One was a new version of Vertica and the other, which truly sparked my interest and which I regard as far more significant, was the official release of Haven.

So What Is Haven?

You can find out easily enough by visiting www.havenondemand.com, signing up and playing with the capabilities provided. That’s exactly what I did, but not before spending an hour or so at the Haven booth in the conference exhibition center. Haven is HPE’s public-facing analytics platform for both developers and users. It has a host of useful capabilities that you can employ as “LEGO bricks” to build fairly sophisticated applications and workflows, including: image recognition, optical character recognition (OCR), face detection, text analysis, concept extraction, language identification, machine learning, sentiment analysis and a complete set of text analytics capabilities.
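To make the “LEGO brick” idea a little more concrete, here is a minimal sketch in Python of what calling one of Haven’s REST modules might look like. The endpoint path, version and parameter names below are my assumptions for illustration only; the actual API reference lives on www.havenondemand.com.

# A minimal sketch, assuming a synchronous REST endpoint of the form shown.
# The path, version and parameter names are illustrative guesses, not the
# documented Haven OnDemand API.
import requests

HAVEN_BASE = "https://api.havenondemand.com/1/api/sync"  # assumed base URL
API_KEY = "your-api-key"  # issued when you sign up

def analyze_sentiment(text):
    """Send a piece of text to the (assumed) sentiment analysis module."""
    response = requests.get(
        HAVEN_BASE + "/analyzesentiment/v1",
        params={"apikey": API_KEY, "text": text},
    )
    response.raise_for_status()
    return response.json()

print(analyze_sentiment("The conference was well worth attending."))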

You can load data into Haven, then “plug and play” with its capabilities. It’s free up to a defined limit (1000 API calls within a month). And if you break that limit, as you are likely to do if you build something serious with it, you pay a very reasonable monthly rent.

You don’t need to be a developer either – it really is a matter of connecting the output of one module to the input of another. However, if you are a developer, you can plug the Haven components into your own applications. I won’t say much more about it – if you are interested, you can go to the website and try it out – but I will say this: there is now a vast array of libraries, particularly in the Big Data area, that provide excellent capabilities. Many are open source, written in R or Python, and you can exploit them, too.
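As an illustration of the “output of one module, input of another” idea, here is a short sketch that runs OCR on an image and feeds the recognized text into sentiment analysis. Again, the module names, endpoint shape and JSON field names are assumptions made for the sake of the example, not the documented API.

# A sketch of chaining two assumed Haven modules: OCR first, then sentiment.
# Module names, URL structure and response fields are illustrative guesses.
import requests

HAVEN_BASE = "https://api.havenondemand.com/1/api/sync"  # assumed base URL
API_KEY = "your-api-key"

def call_module(module, **params):
    """Call an assumed Haven module synchronously and return its JSON result."""
    params["apikey"] = API_KEY
    response = requests.get(HAVEN_BASE + "/" + module + "/v1", params=params)
    response.raise_for_status()
    return response.json()

# Step 1: OCR a scanned document hosted at some URL (hypothetical field names).
ocr = call_module("ocrdocument", url="https://example.com/scanned-letter.png")
text = " ".join(block.get("text", "") for block in ocr.get("text_block", []))

# Step 2: pipe the recognized text straight into sentiment analysis.
print(call_module("analyzesentiment", text=text))

The point is less the specific calls than the pattern: each module takes simple parameters and returns JSON, so wiring them together is mostly plumbing.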

Several things make Haven different. It’s an easy environment to use, it’s inexpensive, it’s cloud-based and, perhaps most importantly, HPE has curated every module it contains, building each one so that it fits seamlessly into the Haven environment.

Schematiq

Schematiq was one of the companies showcasing their technology at the HPE conference. It attracted my attention because it offers a very distinctive capability: a development platform tailored primarily to use Microsoft Excel as a front end. The platform can connect to data sources and embed executables from any language (and it also includes all the capabilities of HPE’s Haven platform). You can think of it as an antidote to the all-too-common spreadsheet nightmare. Excel is a wonderful tool with a remarkable amount of capability – that’s the upside. The downside is that it has been used extensively to create semi-manual silo applications, often with untested or poorly tested capabilities. Schematiq pulls off the double trick of providing useful and usable integration for spreadsheet capability at the desktop level, while also adding in the power of server applications (including, if you like, Hadoop or Big Data warehouse capabilities).

The company is a fairly recent startup that, as far as I know, has no direct competition. I expect it to do well, in the financial sector at least, where banks, brokers and insurance companies are awash with spreadsheets. But it will probably also do well in other sectors, such as health care and retail, where semi-manual spreadsheet activity is both a boon and a bane.

Talena

Finally, I thought I’d give a mention to Talena, another company I ran into for the first time at the conference. I had been waiting for a backup company to emerge in the Big Data space. If you thought the need for backup somehow evaporated with the advent of Hadoop, it’s probably because you don’t really understand backup. It’s alive and well and nosing its way into the Big Data world. Talena provides flexible enterprise backup and recovery. If you want to know more, visit talena-inc.com.

Robin Bloor

About Robin Bloor

Robin is co-founder and Chief Analyst of The Bloor Group. He has more than 30 years of experience in the world of data and information management. He is the creator of the Information-Oriented Architecture, which is to data what SOA is to services. He is the author of several books, including The Electronic B@zaar: From the Silk Road to the eRoad, a book on e-commerce, and three IT books in the Dummies series on SOA, Service Management and The Cloud. He is an international speaker on information management topics. As an analyst for Bloor Research and The Bloor Group, Robin has written scores of white papers, research reports and columns on a wide range of topics, from database evaluation to networking options and comparisons to the enterprise in transition.
