Confluent, the company founded by the creators of event streaming platform Apache Kafka, is today announcing a “Premium Connector” that integrates Confluent Platform, its enterprise distribution of Kafka, with Oracle Database. This news is about more than a connector, though. It’s about uniting the decades-old world of conventional OLTP (online transaction processing) relational databases with modern big data analytics technology. It’s also about bringing an enterprise software sensibility to that newer-generation technology stack.
The mechanics
Specifically, the connector leverages Oracle’s change data capture (CDC) technology, which transmits updates, inserts and deletes from the OLTP database into special change tables that can be polled without impacting concurrency on the primary tables being updated. The Confluent connector then brings that CDC information into Kafka (as granularly as one topic per database table) on Confluent Platform clusters, where it can be further propagated to downstream data stores using, for example, Confluent’s ksqlDB technology and various data sink connectors, without putting any further load on the source Oracle database server.
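For a concrete sense of how a connector like this gets deployed, here is a minimal sketch of registering a source connector through the Kafka Connect REST API, which is how connectors on Confluent Platform are typically managed. The connector class name and the oracle.* / table.* configuration keys shown are illustrative assumptions, not Confluent’s documented properties; consult the official connector documentation for the real ones.

```python
import json

import requests

# Kafka Connect exposes a REST interface (default port 8083) for managing
# connectors; POSTing a name/config document to /connectors creates one.
CONNECT_URL = "http://localhost:8083/connectors"  # placeholder worker address

# The connector class and the oracle.* / table.* keys below are illustrative
# assumptions, not Confluent's documented configuration properties.
connector = {
    "name": "oracle-cdc-orders",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.oracle.cdc.OracleCdcSourceConnector",
        "oracle.server": "oracle-host",
        "oracle.port": "1521",
        "oracle.sid": "ORCLCDB",
        "oracle.username": "cdc_user",
        "oracle.password": "********",
        # Granularity: one Kafka topic per captured database table.
        "table.inclusion.regex": "MYSCHEMA\\.ORDERS",
        "tasks.max": "1",
    },
}

resp = requests.post(
    CONNECT_URL,
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
resp.raise_for_status()
print("Created connector:", resp.json()["name"])
```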
Essentially, this allows Kafka to act as an enterprise service bus (ESB), of sorts. Although the industry may think of Kafka as a streaming data platform, its publish/subscribe (pub/sub) message-based architecture actually makes it well-suited to deliver this latter-day take on service-oriented architecture (SOA).
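To make the pub/sub point concrete, here is a minimal sketch using the confluent-kafka Python client, with placeholder broker and topic names: one producer publishes change events, and any number of independent consumer groups can subscribe to the same topic without the producer knowing about them.

```python
from confluent_kafka import Consumer, Producer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "oracle.orders"     # hypothetical CDC topic

# A producer publishes change events without knowing who will consume them.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-42", value='{"op": "UPDATE", "status": "SHIPPED"}')
producer.flush()

# Each independent consumer group receives the full stream, which is the
# essence of publish/subscribe: adding an "audit-service" group later would
# not disturb this "analytics-service" group at all.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "analytics-service",
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```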
Bridging the generations
ZDNet spoke with Confluent’s Senior Director of Product Management, Diby Malakar, who explained some of the thinking behind the development of the connector, asserting that connectivity is a core pillar in completing the company’s streaming platform. Malakar said that at least 50% of Confluent customers are using Oracle, driving home the point that even if Oracle and Kafka represent two very different worlds, a large number of enterprise customers are citizens of both.
So is Malakar, by the way. Before he came to Confluent, he had served in senior product management roles at StreamSets and SnapLogic, on the one hand, and at Oracle Cloud and Informatica, on the other. He was also a Manager of Data Warehousing and Analytics at BearingPoint/KPMG Consulting, giving him a sense of how things work on the implementation side, in addition to the product side.
Confluent says the Oracle CDC connector is the first in a series of Premium Connectors which, according to the company’s press release, “specifically [target] mission-critical, enterprise systems that are notoriously complex to integrate with Kafka environments.” The overarching idea is to bring the platforms those connectors target into the world of real-time data analytics and, in general, break down data silos.
Build or buy
With all that in mind, Malakar pointed out that Kafka-Oracle connectivity was always possible to implement: a third-party integrator, or any customer, was free to write code that conveyed the CDC data into a Kafka topic using Kafka APIs. But by introducing a commercially developed, certified connector, Confluent removes from the customer the burden of either acquiring a third-party integration solution or pursuing in-house development, implementation, management and maintenance, all of which would come with significant personnel costs, consulting costs, procurement/support complexity, or some combination of these.
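As a rough illustration of what that in-house route entails, the sketch below polls a hypothetical CDC change table with cx_Oracle and forwards each row to a Kafka topic. Every connection string, table, column and topic name here is invented for illustration, and a production version would additionally need durable offset tracking, error handling, schema management and monitoring, which is precisely the burden a commercial connector is meant to remove.

```python
import json
import time

import cx_Oracle
from confluent_kafka import Producer

# All names below (credentials, DSN, change table, columns, topic) are hypothetical.
conn = cx_Oracle.connect(user="cdc_user", password="********",
                         dsn="oracle-host:1521/ORCLCDB")
producer = Producer({"bootstrap.servers": "localhost:9092"})

last_scn = 0  # naive high-water mark; real code must persist this durably

while True:
    cursor = conn.cursor()
    # Poll the change table for rows newer than the last seen SCN
    # (system change number), Oracle's logical ordering of changes.
    cursor.execute(
        "SELECT scn, operation, row_data FROM orders_change_table "
        "WHERE scn > :scn ORDER BY scn",
        scn=last_scn,
    )
    for scn, operation, row_data in cursor:
        event = {"scn": scn, "op": operation, "data": row_data}
        producer.produce("oracle.orders", value=json.dumps(event))
        last_scn = scn
    cursor.close()
    producer.flush()
    time.sleep(5)  # polling interval; tune to latency requirements
```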
It’s important to point out explicitly, though, that the Oracle CDC connector is not free, and is not licensed to work with generic Kafka clusters. That means customers working with the open source Apache Kafka bits, another vendor’s Kafka distribution, or a cloud provider’s Kafka service (like Amazon Managed Streaming for Apache Kafka (MSK) or a Kafka HDInsight cluster on Microsoft Azure) won’t have access to the connector. While such an approach may seem antithetical to the credo of an open source technology like Kafka, Confluent feels that the connector provides concrete value to the customer and thus merits monetization.
Open source an open question?
That point of view is reasonable, and it’s one the enterprise software world has been based on for decades. Certainly, any longtime Oracle customer will appreciate the notion that proprietary software is acceptable and will come at a price, especially if it mitigates other costs that would be incurred were the software not available. Other companies — like MongoDB and Elastic — with open source or “open core” approaches to their platforms, have similarly walked back their licensing to a more proprietary stance.
Also read: Are open source databases dead?
Ultimately, it may come down to this: enterprises like adopting open source data and analytics technologies for the prowess those platforms have as industry standards with robust ecosystem support. But if some unabashedly commercial, proprietary technology and licensing layered atop them is what’s required to give vendors’ business models and value propositions enterprise stability and credibility, so be it. When it comes to integrating Oracle with Kafka, such a “Premium” approach may well be vindicated.