Materialising de-normalised views from a relational dataset

Rahul Pradeep
4 min read · Jun 22, 2021

As the name suggests, relational databases store entities and the relationships between them in a normalised manner. In addition to this, they provide transactional support and indexing. There are many use cases in the industry where we want to use the same set of relational data in multiple systems. Each of these systems can be thought of as a web service responding to a particular kind of user request, and each request can be formulated as a query over the dataset. Let's look at various ways of solving this problem.

Stateless Service backed by a DB

Every request to this service is re-formulated into a database query (SQL query), and these queries are executed remotely in the database. The service thread waits for the query to complete. Once the database is done processing the query, it responds to the service with a ResultSet, which is then transformed into a response and sent back to the client.
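As a minimal sketch of this stateless handler, assuming a JDBC DataSource and a hypothetical orders table (the table and service names are illustrative, not from the original):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Hypothetical stateless handler: every request becomes a SQL query,
// and the ResultSet is mapped into a response object.
public class OrderService {
    private final DataSource dataSource;

    public OrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public List<String> ordersForCustomer(long customerId) throws Exception {
        String sql = "SELECT o.id FROM orders o WHERE o.customer_id = ?";
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setLong(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                List<String> response = new ArrayList<>();
                while (rs.next()) {
                    response.add(rs.getString("id"));
                }
                return response; // serialised and sent back to the client
            }
        }
    }
}
```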

Database queries can be expensive, especially if they involve processing large amounts of data with operations like joins, unions, group-bys, etc. This can negatively impact the throughput and the latency of the system. One easy way to scale our service is to cache frequently accessed data in memory, which improves throughput. For cache misses, however, requests will still take a long time to complete. To improve latency, we might want to store all the data from the relational database in a low-latency data store.
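A cache-aside sketch of this idea, reusing the hypothetical OrderService above and a plain in-memory map (in practice this might be a proper cache library or an external store):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cache-aside wrapper: serve from memory when possible, fall back to the DB on a miss.
public class CachedOrderService {
    private final OrderService delegate;
    private final ConcurrentMap<Long, List<String>> cache = new ConcurrentHashMap<>();

    public CachedOrderService(OrderService delegate) {
        this.delegate = delegate;
    }

    public List<String> ordersForCustomer(long customerId) throws Exception {
        List<String> cached = cache.get(customerId);
        if (cached != null) {
            return cached;                                            // cache hit: no DB round trip
        }
        List<String> fresh = delegate.ordersForCustomer(customerId);  // cache miss: expensive query
        cache.put(customerId, fresh);
        return fresh;
    }
}
```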

Storing Materialised Views

Every service will be interested in a subset of the entities that exist in our database. Each of these entities could depend on multiple other entities, so when we cache the data we need to cache a de-normalised view of the entity. This reduces the number of calls made from the service to our low-latency data store. Such a de-normalised view is also called a materialised view.
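For illustration (the entity and field names here are assumptions, not the original schema), a de-normalised view pre-joins everything a request needs so that a single key lookup answers it:

```java
import java.util.List;

// A de-normalised (materialised) view: everything a request needs, pre-joined,
// stored under a single key in the low-latency store.
public record CustomerOrdersView(
        long customerId,
        String customerName,          // from the customers table
        String customerEmail,         // from the customers table
        List<OrderLine> orders) {     // pre-joined rows from orders + order_items

    public record OrderLine(long orderId, String sku, int quantity, long priceCents) {}
}
```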

Change Data Capture (CDC)

Relational databases like MySQL, PostgreSQL, etc. expose a log of row-level changes (MySQL's binary log, PostgreSQL's write-ahead log via logical decoding). Every row change is recorded in this log, commonly known as the binlog in MySQL. To materialise views, we need to listen to these log events and process them in real time.
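As one concrete way to tap MySQL's binlog from Java, here is a sketch using the open-source mysql-binlog-connector-java library; the library choice and the connection details are assumptions on my part, not part of the original design:

```java
import com.github.shyiko.mysql.binlog.BinaryLogClient;
import com.github.shyiko.mysql.binlog.event.EventData;
import com.github.shyiko.mysql.binlog.event.UpdateRowsEventData;
import com.github.shyiko.mysql.binlog.event.WriteRowsEventData;

// Connects to MySQL as a replication client and receives row-level binlog events.
public class BinlogTail {
    public static void main(String[] args) throws Exception {
        BinaryLogClient client = new BinaryLogClient("localhost", 3306, "replicator", "secret");
        client.registerEventListener(event -> {
            EventData data = event.getData();
            if (data instanceof WriteRowsEventData insert) {
                System.out.println("INSERT: " + insert.getRows());
            } else if (data instanceof UpdateRowsEventData update) {
                System.out.println("UPDATE: " + update.getRows());
            }
        });
        client.connect(); // blocks, streaming events as they are written
    }
}
```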

In this design, we have a simple Bin Log Processor, which is responsible for two things:

  • One thread, say the I/O thread, continuously polls for new events in the binlog. Every new event is handed over to the processing thread.
  • The processing thread materialises views and writes them to the sink. Every view corresponds to a single row in the sink, and writes to the sink are always atomic for a single row.

Please note that our Bin Log Processor is stateless.
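A minimal sketch of this two-thread design, assuming a generic RowEvent type and a key-value sink with an atomic single-row put (both hypothetical abstractions; the in-process queue is only a hand-off between the threads):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical row-change event and sink abstractions.
record RowEvent(String table, long rowId, String payload) {}

interface ViewSink {
    void put(long viewId, String materialisedView); // atomic for a single row
}

public class BinLogProcessor {
    private final BlockingQueue<RowEvent> queue = new LinkedBlockingQueue<>();
    private final ViewSink sink;

    public BinLogProcessor(ViewSink sink) {
        this.sink = sink;
    }

    // I/O thread: hand every binlog event over to the processing thread.
    public void onBinlogEvent(RowEvent event) throws InterruptedException {
        queue.put(event);
    }

    // Processing thread: materialise the view and write it to the sink.
    public void startProcessing() {
        Thread processor = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    RowEvent event = queue.take();
                    String view = materialise(event);    // build the de-normalised view
                    sink.put(event.rowId(), view);       // atomic single-row write
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "processing-thread");
        processor.start();
    }

    private String materialise(RowEvent event) {
        return event.table() + ":" + event.payload();    // placeholder view computation
    }
}
```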

Computing Materialised View

Consider first a materialised view View that depends on only one entity, Entity_1: one View object depends on one row of the Entity_1 table. To compute View using our CDC design, the I/O thread hands every row change event (insert/update/delete) over to the processing thread. The processing thread then creates the View in memory and persists it in the sink. It can be reasoned that the system remains in a consistent state under any failure scenario. Let's look at some of the failure scenarios in detail to understand this better.

  • The I/O thread receives an event, but the hand-over to the processing thread fails. The sink will have a stale version (version 0) of our View entity. The view is delayed with respect to the DB, but never wrong.
  • The processing thread fails midway. Since writes to the sink are always atomic for a single row, this failure can never leave the view entity in an inconsistent state: the View version will be either 0 or 1.
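To make this recovery argument concrete, here is a sketch (reusing the hypothetical RowEvent and ViewSink from the earlier sketch) in which the view is a pure function of the source row, so a lost or replayed event can only leave the sink stale, never corrupt:

```java
// The view for Entity_1 is derived entirely from a single source row, so
// reprocessing the same event is idempotent: the sink ends up at either the
// old version or the new one, never a mix of the two.
public class SingleEntityMaterialiser {
    private final ViewSink sink;

    public SingleEntityMaterialiser(ViewSink sink) {
        this.sink = sink;
    }

    public void process(RowEvent entity1Event) {
        String view = toView(entity1Event);       // pure function of the row
        sink.put(entity1Event.rowId(), view);     // atomic single-row upsert
    }

    private String toView(RowEvent row) {
        return "view-of:" + row.payload();        // placeholder derivation
    }
}
```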

Let’s complicate things a bit now!

Now consider a View whose update is complete only when both Entity_1 and Entity_2 are updated. A single transaction updates both entities together.

Since our Bin Log Processor is stateless, single-row event processing wouldn't work in this case. We have to make the Bin Log Processor transaction-aware. We can do this by listening to two more events apart from the row events: a START_TRANSACTION (TX_ID) event and a COMMIT_TRANSACTION (TX_ID) event. When we encounter the former, we create a transaction event buffer for that TX_ID and buffer all subsequent events until we see the COMMIT_TRANSACTION (TX_ID) event. Once the processor sees this event, it can start materialising the View entity.

The transaction event buffer is kept external to the Bin Log Processor so that the processing component stays stateless, which helps in fault-tolerance scenarios.
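A sketch of this transaction-aware flow, again reusing the hypothetical RowEvent and ViewSink types, with the external buffer (for example a Redis hash or a small table) abstracted behind an assumed TxBufferStore interface:

```java
import java.util.List;

// External buffer store keeps the processor itself stateless: if the processor
// crashes, a replacement can pick up the buffered events for an open transaction.
interface TxBufferStore {
    void open(String txId);
    void append(String txId, RowEvent event);
    List<RowEvent> drain(String txId);   // returns buffered events and clears the buffer
}

public class TransactionAwareProcessor {
    private final TxBufferStore buffer;
    private final ViewSink sink;

    public TransactionAwareProcessor(TxBufferStore buffer, ViewSink sink) {
        this.buffer = buffer;
        this.sink = sink;
    }

    public void onStartTransaction(String txId) {
        buffer.open(txId);                           // START_TRANSACTION (TX_ID)
    }

    public void onRowEvent(String txId, RowEvent event) {
        buffer.append(txId, event);                  // buffer until the commit is seen
    }

    public void onCommitTransaction(String txId) {
        List<RowEvent> events = buffer.drain(txId);  // COMMIT_TRANSACTION (TX_ID)
        String view = materialise(events);           // combine Entity_1 + Entity_2 changes
        long viewId = events.get(0).rowId();         // assumed: all events share the view key
        sink.put(viewId, view);                      // atomic single-row write of the full view
    }

    private String materialise(List<RowEvent> events) {
        StringBuilder sb = new StringBuilder("view:");
        for (RowEvent e : events) {
            sb.append(e.table()).append('=').append(e.payload()).append(';');
        }
        return sb.toString();
    }
}
```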
