The cause of the memory leak and poor performance is kind of silly: the decoderbufs plugin produces a warning for every delete it processes on the materialized view:
elog(WARNING, "no information to decode from DELETE because either no PK is present or REPLICA IDENTITY NOTHING or invalid ");
These warnings are sent to the client (the debezium postgres connector). The JDBC layer carefully preserves them in a list. Since no-one ever clears the list, it only ever gets longer. As there is one warning per row, it gets long pretty quickly! Eventually the entire heap is consumed.
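To make that concrete, here is a minimal sketch of what the accumulation looks like through the standard JDBC API. The WarningChainProbe/countWarnings names are mine, and whether the driver chains the warnings onto the Connection or onto a Statement is an assumption made purely for illustration; the point is that getWarnings() exposes a linked chain that grows by one node per warning until something clears it.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLWarning;

class WarningChainProbe {
    // Each server NoticeResponse becomes one SQLWarning node appended to a
    // linked chain that the driver keeps until clearWarnings() is called.
    // With one warning per deleted row, the chain (and the heap it occupies)
    // grows without bound.
    static int countWarnings(Connection connection) throws SQLException {
        int count = 0;
        for (SQLWarning w = connection.getWarnings(); w != null; w = w.getNextWarning()) {
            count++;
        }
        return count;
    }
}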
I think this needs to be tackled from both ends:
(1) The decoder plugin should not generate any output at WARNING level or higher on a per-row basis. I've written a patch that drops the priority to DEBUG, which means they aren't sent to clients any more.
(2) The debezium client shouldn't just totally ignore the possibility that warnings (and notifications too, for that matter) may be accumulating; it should do something with them, for example by periodically draining and clearing them, as in the sketch below.
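As a rough sketch of what (2) could look like, using only standard java.sql calls; where this would hook into the connector's poll loop, and what it should actually do with each warning, are assumptions on my part, and the WarningDrain/drainWarnings names are hypothetical:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.SQLWarning;
import java.util.logging.Logger;

class WarningDrain {
    private static final Logger LOGGER = Logger.getLogger(WarningDrain.class.getName());

    // Log each accumulated server warning at a low level, then clear the
    // chain so it cannot keep growing between polls.
    static void drainWarnings(Connection connection) throws SQLException {
        for (SQLWarning w = connection.getWarnings(); w != null; w = w.getNextWarning()) {
            LOGGER.fine("Server warning: " + w.getMessage());
        }
        connection.clearWarnings();
    }
}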
Last time I looked, debezium didn't support materialized views, so how could a materialized view cause a problem like the one in this Jira? The answer is that postgres sends WAL for everything to the decoderbufs plugin, not just for the tables debezium is monitoring. Nothing useful is done with the WAL from materialized views, but due to various bugs it still managed to cause trouble.