The term “in-memory” began grabbing headlines a couple of years ago, with vendors promising greater processing and analytics speeds. RAM is undoubtedly faster than disk – about 3000X faster, generally speaking – and a flood of in-memory products has hit the market.
Even though RAM is far more expensive than disk, its costs have dropped dramatically, declining at roughly 30% per year. The speed factor and the drop in price have led many organizations to consider focusing on holding data in memory for processing, and using disk only where necessary.
If disk is the new tape, and memory is the new disk, then what of the chip?
We learn in elementary school that the shortest distance between two points is a straight line. Obviously, then, the shorter the line, the faster it can be travelled. This basic principle helps explain why in-chip processing outpaces in-memory processing by orders of magnitude.
Anyone who is intimately familiar with the x86 chip will know that it includes three layers of cache – L1, L2 and L3 – each with its own capacity, totaling roughly 20 MB. Now, no one is suggesting that all data be held in-chip, but we are suggesting that in-chip processing can be treated as an additional, extremely fast resource.
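To put that 20 MB in perspective, here is a back-of-the-envelope sketch (our own illustration, not a description of SiSense's internals) of how many column values a cache of that size could hold, raw versus dictionary-encoded:

```python
# Back-of-the-envelope: how many column values fit in ~20 MB of CPU cache?
CACHE_BYTES = 20 * 1024 * 1024  # rough combined L1+L2+L3 capacity

# Raw 32-bit integers: 4 bytes per value
raw_values = CACHE_BYTES // 4
print(f"raw 32-bit values:         {raw_values:,}")      # about 5.2 million

# Dictionary-encoded column with <= 256 distinct values: 1 byte per value
encoded_values = CACHE_BYTES // 1
print(f"dictionary-encoded values: {encoded_values:,}")  # about 21 million
```

Millions of column values is a meaningful working set for an analytic query, which is why keeping hot, compressed data in cache pays off.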
A handful of vendors are leveraging in-chip technology, such as Actian, IBM and VelociData. As far as we know, however, there is only one Business Intelligence product that does this: SiSense Prism.
SiSense Prism: From Fast to Faster
On the surface, SiSense is a BI product that enables users to visualize data and act on it. Think of QlikView or Tableau, and you get some idea of the functionality. However, what SiSense does technically is, in our view, extraordinary and impressive. It optimizes the data flow from disk to the CPU cores by creating a “memory map” of the current location of all the data. This allows it to accelerate data from disk through memory to the chip. In the chip, SiSense Prism applies vector algebra to the data, allowing it to execute x86 vector instructions. This means that a CPU core can process many data values (think columns of data) in a single instruction.
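The idea behind vector (SIMD) instructions can be sketched in plain Python: instead of handling one value per step, the loop below processes a column in fixed-width chunks, the way a 128-bit register holds four 32-bit values at once. This is our own conceptual sketch – SiSense's actual implementation uses native x86 instructions, not Python:

```python
def vector_add(col_a, col_b, lanes=4):
    """Add two integer columns chunk-by-chunk, mimicking SIMD lanes.

    A real 128-bit vector instruction adds 4 x 32-bit values in one
    operation; here, each iteration of the outer loop stands in for
    one such instruction.
    """
    assert len(col_a) == len(col_b)
    out = []
    for i in range(0, len(col_a), lanes):
        # One "vector instruction": operate on a whole chunk at once
        chunk = [a + b for a, b in zip(col_a[i:i + lanes], col_b[i:i + lanes])]
        out.extend(chunk)
    return out

print(vector_add([1, 2, 3, 4, 5, 6, 7, 8],
                 [10, 20, 30, 40, 50, 60, 70, 80]))
# [11, 22, 33, 44, 55, 66, 77, 88]
```

With four lanes, an eight-value column takes two "instructions" instead of eight – the same column-at-a-time economy the vector units provide in hardware.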
SiSense Prism’s data engine is called the ElastiCube Server. It is a columnar database that applies super-scalar data compression. This allows it to retain data in a compressed state until it reaches the chip; data is only decompressed once it is in L1 and L2 cache. Coupled with the use of vector instructions, this is remarkably fast – much faster than vanilla in-memory processing. The ElastiCube engine also optimizes performance by learning which queries are frequently reused: learning algorithms pre-load the result sets, in a compressed state, into L1 cache.
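One common way a columnar engine keeps data small end-to-end is dictionary encoding: the column is stored as small integer codes plus a lookup table, and individual values are only materialized when actually read. The following is a minimal sketch of that general idea – ours, not SiSense's actual compression scheme:

```python
def dict_encode(column):
    """Compress a column into (dictionary, codes).

    Each distinct value is stored once; the column itself becomes a
    list of small integer codes referencing the dictionary.
    """
    dictionary, codes = [], []
    index = {}
    for value in column:
        if value not in index:
            index[value] = len(dictionary)
            dictionary.append(value)
        codes.append(index[value])
    return dictionary, codes

def decode_at(dictionary, codes, i):
    """Materialize a single value on demand -- the 'late decompression' idea."""
    return dictionary[codes[i]]

col = ["US", "UK", "US", "US", "DE", "UK"]
dictionary, codes = dict_encode(col)
print(dictionary)                       # ['US', 'UK', 'DE']
print(codes)                            # [0, 1, 0, 0, 2, 1]
print(decode_at(dictionary, codes, 4))  # DE
```

Because the codes stay compact all the way up the memory hierarchy, far more of the column fits in cache, and decompression cost is paid only for the values a query actually touches.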
By our estimation, SiSense Prism’s exploitation of in-chip cache resources makes it possible for it to reach speeds 100X faster than in-memory products. From a business intelligence perspective, this means faster time to insight and action.
Turning Data into Knowledge
This is not just about speed. It is also about enabling BI from data access to execution. The SiSense Prism BI suite allows users to tap into disparate and heterogeneous data sources, including Hadoop, to extract actionable intelligence. The suite comprises ElastiCube Manager (a data provisioning tool for data experts) and BI Studio (a dashboard and visualization tool designed for the business user). Results can be published to Prism Web for real-time display and consumption.
So, aside from being lightning fast, SiSense Prism delivers an end-to-end BI capability. From data ingest to organizational insight, SiSense Prism provides the processing speed, resource management, integration and scale to serve up a unique self-service environment.
Although SiSense is a fairly recent start-up, it is already turning heads for its processing power, analytics capabilities and ease of use. It seems to be traversing a short, straight line to prominence in the BI arena. In our view, it’s worth taking a look.