Inside Analysis

What are the Building Blocks of a Modern Data Pipeline?

The problem with traditional data pipelines based on extract, transform, and load (ETL) tools that populate data warehouses and data marts is that power users quickly bump up against their dimensional boundaries. To answer urgent questions, they are forced to download data from the data warehouse into Excel or another desktop tool and combine it with data acquired elsewhere. The result is a suboptimal spreadmart.

Today, organizations build modern data pipelines to support a variety of use cases. Besides data warehouses, modern data pipelines feed data marts, data science sandboxes, data extracts, data science applications, and various operational systems. These pipelines often support both analytical and operational applications, structured and unstructured data, and batch and real-time ingestion and delivery. This webcast will examine the construction and components of a modern data pipeline.
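To make the idea concrete, here is a minimal sketch of such a multi-target pipeline in Python, using only the standard library. The source file sales_raw.csv, its region and amount columns, and the sales_fact table are hypothetical placeholders; the point is that a single conformed flow can deliver the same data to more than one target, a warehouse table and a flat extract, rather than forcing users into ad hoc spreadmarts.

```python
# Minimal pipeline sketch: one extract/transform flow feeding two targets.
# All file, column, and table names below are illustrative assumptions.
import csv
import sqlite3

def extract(path):
    """Read raw rows from a CSV source (batch ingestion)."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Apply a simple cleansing rule: drop rows missing an amount
    and cast amounts to float for downstream consumers."""
    for row in rows:
        if row.get("amount"):
            row["amount"] = float(row["amount"])
            yield row

def load(rows, db_path="analytics.db"):
    """Deliver conformed rows to two targets at once: a warehouse
    fact table and a flat extract for desktop analysis."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS sales_fact (region TEXT, amount REAL)")
    with open("sales_extract.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["region", "amount"])
        for row in rows:
            conn.execute("INSERT INTO sales_fact VALUES (?, ?)",
                         (row["region"], row["amount"]))
            writer.writerow([row["region"], row["amount"]])
    conn.commit()
    conn.close()

if __name__ == "__main__":
    load(transform(extract("sales_raw.csv")))
```

A production pipeline would add orchestration, monitoring, and real-time ingestion paths, which is exactly the territory the webcast covers.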

You Will Learn:
- The key components of a modern data pipeline.
- How a modern data pipeline differs from traditional data flows.
- How organizations create, optimize, and manage data pipelines for multiple use cases.
- What other services and processes are required to manage modern data pipelines.
- How to ensure standardization in a use-case-driven approach to delivering data sets.

