All posts tagged with streaming data


Heroku Streaming Data Connectors Are Now Generally Available

news, Product Management Director, Heroku Data

This summer, we announced the beta release of our new streaming data connectors between Heroku Postgres and Apache Kafka on Heroku. These connectors make Change Data Capture (CDC) possible on Heroku with minimal effort. Anyone with a Private or Shield Space that contains both a Heroku Postgres add-on and an Apache Kafka on Heroku add-on can use streaming data connectors today at no additional charge.
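To give a feel for what sits on the other side of a connector, here is a minimal Python sketch of consuming change events. The kafka-python client, the topic name `connector-demo.public.posts`, the certificate file paths, and the Debezium-style `before`/`after`/`op` payload shape are all assumptions for illustration, not details from the announcement:

```python
# A sketch of consuming Change Data Capture events from a streaming
# data connector's Kafka topic. Topic name and payload shape are
# assumptions (Debezium-style before/after/op events).
import json
import os

from kafka import KafkaConsumer  # pip install kafka-python

# KAFKA_URL is set by the Apache Kafka on Heroku add-on; it is a
# comma-separated list of kafka+ssl:// broker URLs.
brokers = [u.replace("kafka+ssl://", "") for u in os.environ["KAFKA_URL"].split(",")]

consumer = KafkaConsumer(
    "connector-demo.public.posts",   # hypothetical CDC topic name
    bootstrap_servers=brokers,
    security_protocol="SSL",         # Heroku Kafka connections use TLS;
    ssl_cafile="ca.pem",             # these files are assumed to be written
    ssl_certfile="client.pem",       # from the KAFKA_TRUSTED_CERT,
    ssl_keyfile="client.key",        # KAFKA_CLIENT_CERT, and
                                     # KAFKA_CLIENT_CERT_KEY config vars
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # In Debezium-style payloads, "op" is c (create), u (update), or d (delete).
    print(event.get("op"), event.get("before"), event.get("after"))
```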

Customers use the connectors to build streaming data pipelines between Salesforce and external stores, such as a Snowflake data lake or an AWS Kinesis queue, for integration with other data sources. They also use them to refactor monoliths into microservices, implement an event-based architecture, archive data in lower-cost...

Today we are announcing a beta release of our new streaming data connector between Heroku Postgres and Apache Kafka on Heroku. Heroku runs millions of Postgres services and tens of thousands of Apache Kafka services, and we increasingly see developers choosing to start with Apache Kafka as the foundation of their data architecture. But for teams that are Postgres-first, adopting Kafka is challenging without a full app rewrite. Developers want a seamless integration between the two services, and we are delivering it today, at no additional charge, for Heroku Private Spaces and Shield Spaces customers.

Heroku streaming data connectors

Looking beyond Postgres and Kafka, the Heroku Data team sees the use cases for data growing...

Building a SaaS product, a system that handles sensor data from an internet-connected thermostat or car, or an e-commerce store often requires handling a large stream of product usage data, or events. Managing event streams lets you view, in near real time, how users are interacting with your SaaS app or the products in your e-commerce store; this lets you spot anomalies and get immediate, data-driven feedback on new features. While this kind of stream visualization is useful up to a point, pushing events into a data warehouse lets you ask deeper questions using SQL.
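As a minimal sketch of how such events might get onto a stream in the first place, here is a small producer. The kafka-python client, the `page-views` topic, and the event fields are illustrative assumptions, not details from the post:

```python
# A minimal sketch of publishing product usage events to Apache Kafka
# on Heroku. Topic name and event fields are illustrative.
import json
import os
import time

from kafka import KafkaProducer  # pip install kafka-python

brokers = [u.replace("kafka+ssl://", "") for u in os.environ["KAFKA_URL"].split(",")]

producer = KafkaProducer(
    bootstrap_servers=brokers,
    security_protocol="SSL",      # Heroku Kafka connections use TLS;
    ssl_cafile="ca.pem",          # cert files assumed to be written from
    ssl_certfile="client.pem",    # the add-on's KAFKA_* config vars
    ssl_keyfile="client.key",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Each user interaction becomes one event on the stream; a consumer can
# aggregate these in near real time or load them into a data warehouse
# where they can be queried with SQL.
producer.send("page-views", {
    "user_id": 42,
    "path": "/products/thermostat",
    "ts": time.time(),
})
producer.flush()
```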

In this post, we’ll show you how to build a system using Apache Kafka on Heroku to manage and visualize...

Beyond Web and Worker: Evolution of the Modern Web App on Heroku

engineering, Director, Developer Advocacy

This is the first in a series of blog posts examining the evolution of web app architecture over the past 10 years. This post examines the forces that have driven the architectural changes and presents a high-level view of a new architecture. In future posts, we'll zoom in on specific parts of the system.

The standard web application architecture suitable for many organizations has changed drastically in the past 10 years. Back in Heroku’s early days in 2008, a standard web application architecture consisted of a web process type to respond to HTTP requests, a database to persist data, and a worker process type plus Redis to manage a job queue.
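As a minimal sketch of that classic shape (the post names only the process types; Flask and RQ are illustrative library choices, not something the post prescribes):

```python
# A sketch of the web + worker architecture described above: a web
# process that responds to HTTP requests and defers slow work to a
# Redis-backed job queue drained by a separate worker process type.
import os

from flask import Flask
from redis import Redis
from rq import Queue

app = Flask(__name__)
queue = Queue(connection=Redis.from_url(
    os.environ.get("REDIS_URL", "redis://localhost:6379")))

def send_welcome_email(user_id):
    # Slow work runs in the worker process type, not the web dyno.
    print(f"sending welcome email to user {user_id}")

@app.route("/signup/<int:user_id>", methods=["POST"])
def signup(user_id):
    # The web process responds quickly and enqueues the slow work.
    queue.enqueue(send_welcome_email, user_id)
    return "queued", 202

# The worker process type runs `rq worker` against the same Redis,
# declared alongside `web` in the app's Procfile.
```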

Browse the blog archives or subscribe to the full-text feed.