
Will In-Chip Processing Supersede In-Memory?

The term “in-memory” began grabbing headlines a couple of years ago, with vendors promising greater processing and analytics speeds. RAM is undoubtedly faster than disk – roughly 3,000X faster, generally speaking – and a flood of in-memory products has hit the market.

Read Robin Bloor’s White Paper: The Phenomenal Speed of In-Chip Analytics

Even though RAM is far more expensive than disk, its costs have dropped dramatically, declining at roughly 30% per year. The speed factor and the drop in price have led many organizations to consider focusing on holding data in memory for processing, and using disk only where necessary.
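The economics are easy to check. A minimal sketch of the compounding effect of that 30% annual decline (the starting price of 100 is an arbitrary illustrative figure):

```python
# If RAM prices fall ~30% per year (the figure cited above), the cost
# after n years is cost_0 * (1 - 0.30) ** n.
def ram_cost(cost_0: float, years: int, annual_decline: float = 0.30) -> float:
    """Projected cost after compounding an annual price decline."""
    return cost_0 * (1 - annual_decline) ** years

# After five years of 30% declines, RAM costs about 17% of its original price.
print(round(ram_cost(100.0, 5), 2))  # → 16.81
```

At that rate, the price of a given amount of RAM falls by more than 80% over five years, which is why the memory-over-disk calculus keeps shifting.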

If disk is the new tape, and memory is the new disk, then what of chip?

In-Chip Processing

We learn in elementary school that the shortest distance between two points is a straight line. Obviously, then, the shorter the line, the faster it can be traveled. This basic principle helps explain why in-chip processing crushes in-memory processing by a few orders of magnitude.

Anyone who is intimately familiar with the x86 chip will know that it includes three layers of cache: L1, L2 and L3, each with its own capacity, and totaling roughly 20 MB. Now, no one is suggesting that all data be held in-chip, but we are suggesting that in-chip processing can be considered an additional super-fast and entirely beneficial resource.
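It is worth pausing on what 20 MB actually holds. A back-of-envelope sketch (cache totals vary by CPU model, and the 4:1 compression ratio below is an illustrative assumption, not a measured figure):

```python
# Back-of-envelope: how many column values fit in ~20 MB of on-chip cache
# (cache sizes vary by CPU model; 20 MB is this article's round total).
CACHE_BYTES = 20 * 1024 * 1024    # assumed combined L1+L2+L3 capacity
VALUE_BYTES = 4                   # one 32-bit integer or float

values_in_cache = CACHE_BYTES // VALUE_BYTES
print(values_in_cache)  # → 5242880, i.e. ~5.2 million raw values

# With, say, 4:1 columnar compression, four times as many values fit.
compressed_values = values_in_cache * 4
print(compressed_values)  # → 20971520
```

A few million values is small next to a data warehouse, but it is plenty for the hot columns and intermediate results a query actually touches at any moment.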

A handful of vendors are leveraging in-chip technology, such as Actian, IBM and VelociData. As far as we know, however, there is only one Business Intelligence product that does this: SiSense Prism.

SiSense Prism: From Fast to Faster

On the surface, SiSense is a BI product that enables users to visualize data and act on it. Think of QlikView or Tableau, and you get some idea of the functionality. However, what SiSense does technically is, in our view, extraordinary and impressive. It optimizes the data flow from disk to the CPU cores by creating a “memory map” of the current location of all the data. This allows it to accelerate data from disk through memory to the chip. In the chip, SiSense Prism applies vector algebra to the data, allowing it to execute x86 vector instructions. This means that the CPU cores can process many data values (think columns of data) in single CPU instructions.
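The columnar idea behind this is easy to sketch. Python cannot issue AVX instructions directly, but the layout difference that makes vector instructions possible can be shown (the record fields here are invented for illustration):

```python
from array import array

# Row-oriented layout: each record is a tuple; to aggregate one field
# the CPU must stride over the unrelated fields in every record.
rows = [(1, 10.0), (2, 20.0), (3, 30.0)]
row_total = sum(price for _, price in rows)

# Column-oriented layout: one contiguous buffer per field. Contiguity is
# what lets a SIMD core consume many values per vector instruction.
ids = array("i", [1, 2, 3])
prices = array("d", [10.0, 20.0, 30.0])
col_total = sum(prices)

assert row_total == col_total == 60.0
print(col_total)  # → 60.0
```

The answers are identical; the difference is that the columnar buffer feeds the chip a dense, predictable stream of values instead of scattered fields.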

SiSense Prism’s data engine is called the ElastiCube Server. It is a columnar database that applies super-scalar data compression. This allows it to retain data in a compressed state until it reaches the chip; data is only decompressed once it is in the L1 and L2 caches. When coupled with the use of vector instructions, this is remarkably fast – much faster than vanilla in-memory processing. The ElastiCube engine also optimizes performance by learning which queries are frequently reused, using learning algorithms to pre-load the result sets, in a compressed state, into L1 cache.

By our estimation, SiSense Prism’s exploitation of in-chip cache resources makes it possible for it to reach speeds that are 100X faster than in-memory products. From a business intelligence perspective, this means a shorter time to insight and action.

Turning Data into Knowledge

This is not just about speed. It is also about enabling BI from data access to execution. The SiSense Prism BI suite allows users to tap into disparate and heterogeneous data sources, including Hadoop, to leverage actionable intelligence. The suite comprises ElastiCube Manager (a data provisioning tool for data experts) and BI Studio (a dashboards and visualization tool designed for the business user). Results can be published out to Prism Web for real-time display and consumption.

So, aside from being lightning fast, SiSense Prism delivers an end-to-end BI capability. From data ingest to organizational insight, SiSense Prism provides the processing speed, resource management, integration and scale to serve up a unique self-service environment.

Although SiSense is a fairly recent start-up, it is already turning heads for its processing power, analytics capabilities and ease of use. It seems to be traversing a short, straight line to prominence in the BI arena. In our view, it’s worth taking a look.

 

Robin Bloor

About Robin Bloor

Robin is co-founder and Chief Analyst of The Bloor Group. He has more than 30 years of experience in the world of data and information management. He is the creator of the Information-Oriented Architecture, which is to data what the SOA is to services. He is the author of several books, including The Electronic B@zaar, From the Silk Road to the eRoad, a book on e-commerce, and three IT books in the Dummies series on SOA, Service Management and The Cloud. He is an international speaker on information management topics. As an analyst for Bloor Research and The Bloor Group, Robin has written scores of white papers, research reports and columns on a wide range of topics, from database evaluation to networking options and comparisons to the enterprise in transition.


3 Responses to "Will In-Chip Processing Supersede In-Memory?"

  • Shivani
    July 16, 2013 - 3:57 am Reply

Good article for providing the overview. Thanks Robin.

  • Rob Klopp
    July 16, 2013 - 2:33 pm Reply

    Hi Robin,

    I think that several vendors have vectorized the columns in a column store and then process them using the AVX (vector) and SIMD instruction sets available on Intel processors. These include at least HANA, BLU, Vertica, Vectorwise, and Sybase IQ. The approach is well understood and in the roadmap for several other DBMS products, I believe.

    What is unique about SiSense is that they have used the QlikTech approach of marrying DBMSish technologies directly to a BI tool. This may be cool? The notion of in-chip being better than in-memory is also a unique spin.

    – Rob Klopp

    • Robin Bloor
      July 16, 2013 - 5:28 pm Reply

      Thanks for the feedback, Rob. I wasn’t aware of Vertica or Sybase IQ using vector instructions.
