Observability just happens. Spoiler alert (not really needed): of course we all know that’s not true.

Observability only happens when we take a concerted approach to implementing software toolsets: tools that stem from origins in Application Performance Management (APM) and extend all the way through to contemporary notions of platform engineering, with a view to conquering Infrastructure-as-Code (IaC) and everything in between.

The observability pipeline

Today, observability doesn’t just happen and it doesn’t happen in just one place, which is why we can now talk about the existence of observability pipelines, i.e. the passage of observability data from first inception through a series of processes designed to enrich, clarify, validate, analyze, visualize and contextualize it in relation to real-world (often real-time) data flows across typically hybrid, increasingly cloud-native environments.
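To make that chain of processes concrete, here is a minimal sketch in Python of what the stages of such a pipeline might look like. It is a toy illustration under assumed field names (‘message’, ‘level’, ‘deployment’), not any vendor’s actual implementation:

    import json
    import time

    def enrich(event):
        # Add context a downstream consumer will need (field names are hypothetical).
        event["received_at"] = time.time()
        return event

    def validate(event):
        # Reject events missing the fields later analysis relies on.
        if "message" not in event or "level" not in event:
            raise ValueError("malformed event")
        return event

    def contextualize(event):
        # Correlate with real-world state; here just a static deployment label.
        event["deployment"] = "eu-west/prod"
        return event

    def process(raw_line):
        # Pass each incoming record through the pipeline stages in order.
        event = json.loads(raw_line)
        for stage in (enrich, validate, contextualize):
            event = stage(event)
        return event

    print(process('{"message": "disk full", "level": "error"}'))

Real pipelines wrap buffering, retries and back-pressure around that same basic shape, but the stage-by-stage structure is the point.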

Aiming to put some much-needed product into this space is Mezmo, an observability data platform provider that has in recent months come forward with its formalized (i.e. branded) Observability Pipeline service. This technology, as it sounds, is designed to provide software application development teams with a route to control, enrich and correlate machine data as it is ‘observed’ in live production enterprise software environments.

But building a pipeline for observability channels isn’t easy – there is no de facto pipe extrusion production plant for us all to go and buy our pipelines and start plumbing them into our application streams. 

So why are observability pipelines so hard to form?

Mezmo reminds us that, essentially, it is because the massive volume, variety and difficult-to-consume nature of machine data generated in modern environments create immense challenges for DevOps, Site Reliability Engineering (SRE) and security teams, who struggle to control escalating costs and use their data to drive any meaningful action.

The inability to use this data to its fullest increases security risks, negatively impacts customer, user and developer experiences and drains resources.

Centralized flow on show

Mezmo says that its Observability Pipeline helps organizations control their observability data and deliver increased business value by centralizing the flow of data from various sources, adding context to make data more valuable and then routing it to destinations to drive actionability.

“Data provides a competitive advantage, but organizations struggle to extract real value. First-generation observability data pipelines focus primarily on data movement and control, reducing the amount of data collected, but fall short on delivering value. Preprocessing data is a great first step,” said Tucker Callaway, CEO, Mezmo. “We’ve built on that foundation and our success in making log data actionable to create a smart observability data pipeline that enriches and correlates high volumes of data in motion to provide additional context and drive action.”

Mezmo’s Observability Pipeline provides access and control to ensure that the right data is flowing into the right systems in the right format for analysis, minimizing costs and enabling new workflows.

Described as a ‘smart pipeline’, this channel integrates Mezmo’s log analysis features, spanning search, alerting and visualization capabilities, to augment and analyze data in motion.

Enriched workflows

There’s a centralizing theme here and it amounts to more than just a channel for viewing.

This is not observability just for looking; this is observability with a view to utilizing the information gleaned from dynamic application streams to directly enrich human (and, for that matter, machine) workflows, from which point an enterprise can streamline, refine and underline its business processes for maximum cost-efficiency, profit and (hopefully) some higher purpose too.

In terms of usage, Mezmo notes that users can route data from any source, such as cloud platforms, Fluentd, Logstash and Syslog, to many destinations for various use cases, including Splunk, S3 and Mezmo’s own Log Analysis platform.
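Mezmo’s own configuration syntax is its own affair, but the routing idea itself is simple enough to sketch generically in Python. In the hypothetical fragment below, one incoming stream is fanned out to stand-in destinations according to a single rule:

    def route(event, destinations):
        # Send error events everywhere; everything else only to cheap archive storage.
        targets = destinations if event.get("level") == "error" else destinations[:1]
        for _name, send in targets:
            send(event)

    # Print-based stand-ins for real sinks such as S3, Splunk or a log analysis platform.
    archive = ("archive", lambda e: print("archive <-", e))
    alerting = ("alerting", lambda e: print("alerting <-", e))

    route({"message": "disk full", "level": "error"}, [archive, alerting])
    route({"message": "heartbeat", "level": "info"}, [archive, alerting])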

Support for OpenTelemetry further helps simplify the ingestion of data and makes data more actionable through enrichment of OpenTelemetry attributes. For those who would like a reminder, OpenTelemetry is a collection of tools, APIs and SDKs used to instrument, generate, collect and export telemetry data (metrics, logs and traces) to help analyze software performance and behavior.
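To give a sense of what that instrumentation looks like in practice, the short Python sketch below uses the open source OpenTelemetry SDK to record one trace span with attributes. It exports to the console rather than to any pipeline product, and the service and attribute names are invented for illustration:

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Wire up a tracer provider that prints finished spans to stdout.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("demo-service")  # instrumentation scope name (hypothetical)

    # Record one unit of work as a span, with attributes a pipeline could later enrich.
    with tracer.start_as_current_span("handle-request") as span:
        span.set_attribute("http.method", "GET")
        span.set_attribute("http.route", "/orders")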

Mezmo also helps transform sensitive data, such as Personally Identifiable Information (PII), to meet regulatory and compliance requirements. Control features simplify the management of multiple sources and destinations while protecting against runaway data flow.
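The transformation features themselves are product-specific, but the underlying move of masking PII in flight can be sketched in a few lines of generic Python. The patterns below are illustrative assumptions, not a compliance-grade solution:

    import re

    # Illustrative patterns only; real compliance work demands far more rigor.
    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def redact(line: str) -> str:
        # Replace matches with fixed tokens so the log stays parseable downstream.
        line = EMAIL.sub("<redacted-email>", line)
        line = CARD.sub("<redacted-card>", line)
        return line

    print(redact("payment by alice@example.com card 4111 1111 1111 1111 approved"))
    # -> payment by <redacted-email> card <redacted-card> approved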

Like its physical namesake under our cities, the data observability pipeline is a living, breathing entity that needs constant maintenance, cleaning through and an occasional power thrust to clear out the stale nasties that might cling on longer than they should. Just remember: no flushing those microplastic-enriched wet wipes, okay?

About Adrian Bridgwater

Adrian Bridgwater is a freelance journalist and corporate content creation specialist focusing on cross-platform software application development as well as all related aspects of software engineering, project management and technology as a whole. Adrian is a regular writer and blogger with Computer Weekly and others, covering the application development landscape to detail the movers, shakers and start-ups that make the industry the vibrant place that it is. His journalistic creed is to bring forward-thinking, impartial technology editorial to a professional (and hobbyist) software audience around the world. His mission is to objectively inform, educate and challenge, and through this champion better coding capabilities and ultimately better software engineering.