Imagine a business intelligence (BI) and analytics platform comparable in its way to Snowflake, the platform-as-a-service (PaaS) data warehousing specialist. A PaaS designed with the pluses and minuses of the cloud in mind: a cloud-native BI and analytics platform. Wouldn’t it be nice?

True, there’s no shortage of cloud-native BI and analytics offerings; however, few of these services specifically address the embedded market. Start-up provider Qrvey is one that does.

And wouldn’t an embedded cloud-native BI-analytics PaaS be especially nice?

“We sort of sit at this intersection of BI and analytics, so the ability to build … dashboards, reports, data visualizations, [and] work with data, but we’re doing so in an embedded way,” explains Qrvey CTO David Abramson. “So we work with companies who want to embed the technology, those capabilities, the software, directly into their existing business processes or even into their own applications.”

Almost from the beginning … there was embedded BI …

This invites the question: what do we mean by “embedded” BI and analytics?

Actually, we mean two things. We used to use the term “embedded BI” to describe software that ran in the background and provided specific functions, or – as with certain kinds of BI front-end tools – software that was exposed in such a way (e.g., via a portal) that the original vendor’s branding was replaced with that of the company or ISV licensing the technology. Embedded BI in this sense has a long and not-so-obvious history. One of the earliest embedded BI tools was Crystal Reports, which the former Business Objects (now SAP) acquired almost 20 years ago.

Starting in the late 1990s, software vendors “embedded” Crystal in their products to power built-in reporting capabilities. This model saw Crystal distributed with backup-and-restore, systems-management, and app-dev software, among other products; it resulted in wide uptake among small ISVs, too. Many companies licensed Crystal to power customer-facing services, such as web portals.[1]

But this is not the embedded BI we are looking for. Concomitant with this usage, the term “embedded BI” came to take on a somewhat different meaning: namely, that of BI and analytic features and functions that – rather than being exposed via traditional front-end tools – are “embedded” in operational applications and services: i.e., the software that powers core business and IT processes. In the background, under the covers, these “embedded” BI and analytic functions get invoked by the workflows that knit together essential business processes.

Take, for example, the data quality routines that validate the names, addresses, and telephone numbers of new customers. Or the sales workflow that determines whether or not a supplier decides to offer incentives to a customer – and, if so, which ones. In the background, this workflow traverses a network of internal and third-party systems – for example, calling a third-party service to perform a credit check and then calling the BI platform to make a decision based on the results. In practice, the workflow might have to accommodate one or more exceptions, too: for example, what happens if the product is out of stock? The workflow could trigger analytic processing to identify a next-best option, to decide whether or not to offer the customer a discount, or to identify some other incentive. None of these is explicitly a “BI” or “analytic” scenario; however, each is unimaginable without BI and analytics functionality.
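To make the shape of such a workflow concrete, here is a minimal sketch in Python. Every URL, payload field, and decision rule below is a hypothetical stand-in for whatever services a real implementation would call; nothing here is a Qrvey or third-party API.

```python
# A minimal sketch of the incentive workflow described above. All URLs,
# payload shapes, and field names are hypothetical placeholders.
import requests

CREDIT_CHECK_URL = "https://credit.example.com/check"    # hypothetical
INVENTORY_URL = "https://inventory.example.com/stock"    # hypothetical
ANALYTICS_URL = "https://analytics.example.com/decide"   # hypothetical

def incentive_workflow(customer_id: str, product_id: str) -> dict:
    """Orchestrate third-party and embedded-analytics calls to pick an offer."""
    # Step 1: call a third-party service to perform a credit check.
    credit = requests.post(
        CREDIT_CHECK_URL, json={"customer": customer_id}, timeout=10
    ).json()

    # Step 2: exception path -- if the product is out of stock, ask the
    # embedded analytics service for a next-best option instead.
    stock = requests.get(
        INVENTORY_URL, params={"product": product_id}, timeout=10
    ).json()
    scenario = "next_best_option" if stock.get("on_hand", 0) == 0 else "incentive"

    # Step 3: call the embedded BI/analytics service to make the decision
    # (e.g., whether to offer a discount, and which one).
    return requests.post(
        ANALYTICS_URL,
        json={
            "customer": customer_id,
            "product": product_id,
            "scenario": scenario,
            "credit_score": credit.get("score"),
        },
        timeout=10,
    ).json()
```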

One distinctive aspect of embedded BI in both senses of the term is that customers are usually not aware that they’re interacting with BI and analytics. For the most part, the embedded software runs in the background and provides a defined set of BI and analytic functions; so far as the line-of-business user is concerned, she is interacting with (say) an HR, finance, or sales application. The BI-analytic functions are a non-obvious part of the user experience.

Back in the day, embedded BI thrived. But the old-school embedded BI model was neither designed for nor suitable for use with the cloud. For example, adopting an old-school embedded BI product almost always meant adopting a vendor-specific software-development kit, which sometimes also meant using a proprietary software framework. In addition, it meant adopting a supported middleware technology, such as a specific web or application server, and, not least, using a supported app-dev environment (i.e., an IDE). Just as important, it meant maintaining these assets: managing software dependencies or deprecations, patching security issues, etc.

In the cloud, by contrast, the conditions for success with embedded BI have never been better.

“In sort of the traditional [embedded BI] systems, too, it wasn’t just about finding the right software tools and components, but also figuring out how to scale them the right way, how to make sure that it’s going to handle the types of load, whether it be users or data, how to secure them the right way and manage all of the interconnected workings of everything that’s going on,” Abramson told analyst Eric Kavanagh during a recent Inside Analysis podcast. In the cloud, he countered, “all of that is sort of taken care of for you, in the sense [that] all of these different services auto-scale, [so it is] very easy to work with, very easy to grow and sort of manage, even in microservices and … serverless architectural models. All of the security is … comprehensive and [it is] easy to integrate and manage and all of that.”

Exploiting available cloud services enables Qrvey to focus on core BI and analytics

Qrvey hosts its PaaS on Amazon Web Services (AWS); in this way, Qrvey exploits AWS services to provide ETL, SQL query, ML processing, function-as-a-service (FaaS, or “serverless”) computing, and other integral components of its PaaS BI and analytics offering.

This makes sense. AWS is a modular stack. It is accretive in the sense that many AWS services lead to other AWS services. So, for example, use of Redshift, AWS’ massively parallel processing database service, leads to use of other AWS services, such as S3, AWS’ object storage service, or Athena, AWS’ SQL-query-as-a-service offering. Similarly, AWS EMR (big-data-processing-as-a-service); AWS Glue (ETL-as-a-service); AWS QuickSight (BI-as-a-service); and AWS SageMaker (ML-as-a-service) each address a set of specific, but also complementary, use cases. So, not only does each AWS service provide a set of potentially useful functions, but each is modular in the sense that it complements one or more other AWS services. Each is modular in the special sense that it is theoretically replaceable, too: the set of functions that each AWS service encapsulates is generalizable, in the sense that Microsoft, with its Azure platform, or Google, with its Google Cloud platform – or Alibaba, Oracle, IBM, and others – have usually developed and implemented similar or analogous services.
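To illustrate this accretive, one-service-leads-to-another pattern, here is a short sketch using boto3, AWS’ Python SDK: Athena (SQL-query-as-a-service) executes a query over data that lives in S3 (object storage). The boto3 calls themselves are real Athena APIs; the database name and S3 bucket are placeholders.

```python
# Athena querying data in S3 via boto3. Database and bucket names
# are placeholders.
import time
import boto3

athena = boto3.client("athena")

def run_athena_query(sql: str) -> list:
    """Submit a SQL query to Athena and poll until results are available."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "sales_db"},                   # placeholder
        ResultConfiguration={"OutputLocation": "s3://example-results/"},  # placeholder
    )["QueryExecutionId"]

    # Athena is asynchronous: poll the execution until it finishes.
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]

# Example: rows = run_athena_query("SELECT region, SUM(total) FROM orders GROUP BY region")
```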

“It’s almost like building products with new parts that have come out… now you have these new parts that are being offered by the manufacturer if you will, namely AWS, and you can piece those together to create the functionality that you want,” observed Kavanagh, who hosts the Inside Analysis podcast.

Abramson agreed. “It really dawned on us that architecture was kind of the key, and by ‘architecture,’ we really … focused on this concept of cloud-native architecture, and by cloud native it means really just taking advantage of all of the breadth of functionality and capabilities that exists in the cloud, in our case in the AWS ecosystem,” he said. “We’re not looking to reinvent the wheel when it comes to … these very rich, powerful services. If we can find them already available in the cloud and incorporate them to give our users and our customers value, we’ll go ahead and take advantage of that.”

Cloud services are best-in-class services

This confers another advantage, too. As Abramson said, most “generic” cloud services are anything but generic: instead, Amazon, Google, and Microsoft develop them as best-in-class as-a-service products.

For the (large) teams that support and maintain them, enhancing these ETL-, BI-, and ML-as-a-service products is a core competency. The upshot is that a “generic” ETL or SQL query product hosted in AWS, Azure, or GCP exposes one or more rich access interfaces, provides a comprehensive set of functions, and (for these and other reasons) should prove suitable for a wide range of potential use cases, analytic and otherwise. Exploiting available services to support generic use cases or to provide a rich set of generalizable functions is good for PaaS vendors and their customers. As a PaaS “David” with neither the financial nor the human capital of Amazon and other “Goliaths,” Qrvey cannot afford to spread its finite resources too thinly. Its architects and developers must focus their available time, effort, and talents on improving the features and functions of the core Qrvey platform.

For the same reasons, Qrvey does not have the resources to service a large amount of technical debt; rather, it must take on as little technical debt as possible. In this respect, Qrvey’s modular architecture gives it a sustainable foundation for maintaining, enhancing, and scaling its services, as well as for introducing new core BI services, exploiting new generic AWS services, and otherwise accommodating different kinds of economic, political, or technological change. As Kavanagh noted, this architecture also makes it possible for Qrvey to expand its stack to other platforms, such as Azure or GCP.

“What’s cool about your architecture … is that everything is already in the cloud and you’re just basically snapping a little … analytical widget into the workflow and, bam, you’ve got your analytics,” he pointed out, referring to Qrvey’s use of AWS services to provide ETL, SQL query, and other functions.

Cloud-native design aims to produce resilient, change-friendly software architecture

“The traditional tools … are definitely not optimized for the embedded [cloud] use case,” Abramson argued. “Oftentimes … they’re running on the old sort of client-server-based model which doesn’t necessarily even work in a cloud scenario.” This is slightly misleading. A decade ago, BI and analytics vendors basically forklifted their on-premises systems, middleware, applications, suites, etc. to run in the cloud context. (The early versions of Tableau Cloud and Teradata Cloud are exemplars of this.)

At this late date, however, virtually all BI and analytics PaaS offerings are – to varying degrees – designed with the pluses, minuses, and vicissitudes of the cloud context in mind. Obvious cloud pluses include both elasticity – i.e., the ability to scale capacity up or down – and on-demand capacity; obvious cloud minuses include, first, the need to (re)design software to permit elastic scaling and, second, the need to anticipate and mitigate the impact of virtualization on performance and availability.

Less common, however, are microservices- or FaaS-based takes on cloud-native software design.

For example, customers can use Qrvey to build the equivalent of composite BI and analytic applications: i.e., workflows that knit together and orchestrate discrete services, some developed by Qrvey itself, some by Amazon. They can likewise embed specific Qrvey or AWS capabilities in the workflows that power their own business and IT processes – just by calling the relevant services.
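A hedged sketch of what one such composite step might look like: the first stage invokes an AWS Lambda function (a real boto3 call, though the function name is a placeholder), and the second calls an analytics service over REST (a hypothetical URL; Qrvey’s actual API is not shown here).

```python
# One "composite" workflow step: serverless enrichment followed by an
# analytics call. The function name and URL are placeholders.
import json
import boto3
import requests

def composite_step(order: dict) -> dict:
    # Invoke a serverless function to enrich the order record.
    lam = boto3.client("lambda")
    payload = lam.invoke(
        FunctionName="enrich-order",               # placeholder name
        Payload=json.dumps(order).encode(),
    )["Payload"].read()
    enriched = json.loads(payload)

    # Embed an analytics decision in the workflow by calling the
    # relevant service over REST (hypothetical endpoint).
    resp = requests.post(
        "https://analytics.example.com/score",     # hypothetical
        json=enriched,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```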

This model is consistent with microservices-like design principles, even if Qrvey’s PaaS is not a formal implementation of microservices architecture. “To be honest a lot of what we’re doing today … is the migration towards more of a microservices and serverless … approach,” Abramson told Kavanagh, explaining that “all of these solutions can now be accessed only when they’re needed and you can basically plug and play all of the different components and capabilities that you might need.”

This is why Abramson believes paradigms such as microservices and FaaS are the way forward for software architecture in the cloud. Qrvey focuses on what it does best – i.e., maintaining and enhancing its BI-analytics feature set and functions – while Amazon focuses on what it does best: building, maintaining, and enhancing generalizable AWS services. “That microservices approach is sort of exactly why migrating towards that model gives you that flexibility so that you can enhance certain features or certain functions without impacting other features or … functions,” Abramson concluded.
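As a closing illustration of the FaaS model Abramson describes, here is a minimal AWS Lambda handler: a single, narrowly scoped analytic function that can be deployed, scaled, and updated without touching any other part of the platform. The handler signature is Lambda’s standard Python interface; the threshold-based decision rule is a stand-in for whatever model or query a real service would run.

```python
# A minimal Lambda handler exposing one analytic capability. The
# threshold rule is a stand-in for real scoring logic.
def handler(event, context):
    # 'event' carries the caller's JSON payload.
    amount = float(event.get("order_total", 0))

    # Stand-in decision rule; a real function might invoke a SageMaker
    # endpoint or run a query instead.
    offer = "discount" if amount > 1000 else "none"

    return {"offer": offer, "order_total": amount}
```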

[1] Other commercial embedded BI players were the former Brio Software (acquired by the former Hyperion, itself acquired by Oracle) and Information Builders, which is still independent. In the open source space, the former Pentaho and Jaspersoft enjoyed huge success in the embedded market.

About Vitaly Chernobyl

Vitaly Chernobyl is a technologist with more than 40 years of experience. Born in Moscow in 1969 to Ukrainian academics, Chernobyl solved his first differential equation when he was 7. By the early 1990s, Chernobyl, then 20, along with his oldest brother, Semyon, had settled in New Rochelle, NY. During this period, he authored a series of now-classic Usenet threads that explored the design of Intel’s then-new i860 RISC microprocessor. In addition to dozens of technical papers, he is the co-author, with Pavel Chichikov, of Eleven Ecstatic Discourses: On Programming Intel’s Revolutionary i860.