We really need to stop thinking of memory as a 'store'. Treat it as a cache and (a) GC naturally becomes a background process, and (b) you get persistent memory for free.
When memory is considered a 'store', the value image lives at a specific memory location, i.e. a specific offset in the process heap. GC's difficult role here is to manage these allocations, compact the heap, and so on.
When considered as a 'cache', the entire heap is a fixed-size cache (LRU, or better, 2Q to deal with ephemeral objects). The actual memory object lives at an offset in a mapped segment. (If the segment is mapped to a file, then voilà: persistent memory.)
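To make the mapped-segment point concrete, here is a minimal Python sketch (the names `Segment`, `put`, `get`, and the 64 KiB size are all illustrative, not from any particular system): object images are bump-allocated at offsets inside a file-backed mmap, so reopening the file recovers the same images — that is the "free" persistence.

```python
import mmap
import os
import struct

SEGMENT_SIZE = 1 << 16  # 64 KiB segment; size is arbitrary for the sketch


class Segment:
    """A fixed-size, file-backed memory segment. Objects live at offsets
    in the mapping; because the backing store is a file, the value images
    survive the process."""

    def __init__(self, path):
        # Create/extend the backing file to the full segment size.
        self.fd = os.open(path, os.O_RDWR | os.O_CREAT)
        os.ftruncate(self.fd, SEGMENT_SIZE)
        self.map = mmap.mmap(self.fd, SEGMENT_SIZE)
        self.top = 0  # bump-allocation watermark (not persisted in this toy)

    def put(self, payload: bytes) -> int:
        """Place an object image at the next free offset; return the offset."""
        off = self.top
        self.map[off:off + 4] = struct.pack("<I", len(payload))  # length header
        self.map[off + 4:off + 4 + len(payload)] = payload
        self.top = off + 4 + len(payload)
        return off

    def get(self, off: int) -> bytes:
        """Read the object image stored at the given offset."""
        (n,) = struct.unpack("<I", self.map[off:off + 4])
        return self.map[off + 4:off + 4 + n]

    def close(self):
        self.map.flush()  # push dirty pages to the backing file
        self.map.close()
        os.close(self.fd)
```

Closing a segment and reopening the same file yields the same bytes at the same offsets; a real version would also persist the allocation watermark and free space.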
As objects are created, they are immediately placed in the 'process cache'. So to take the example of the OP's 'transactional' memobjs: these are cache entries that get promoted to the LRU section of a 2Q, get utilized, and then are naturally 'garbage collected' by eviction.
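A toy illustration of eviction-as-GC, using a simplified 2Q (no A1out ghost list; `TwoQ` and the queue sizes are made up for the sketch): new entries land in a probationary FIFO, a second touch promotes them to the protected LRU section, and falling off either queue is the 'collection' step.

```python
from collections import OrderedDict


class TwoQ:
    """Simplified 2Q cache. Ephemeral objects die in the probationary FIFO
    (A1); re-referenced objects are promoted to the protected LRU (Am).
    Eviction from either queue plays the role of garbage collection."""

    def __init__(self, a1_size, am_size, on_evict=None):
        self.a1 = OrderedDict()  # probationary FIFO: ephemeral objects die here
        self.am = OrderedDict()  # protected LRU: promoted (e.g. 'transactional') objects
        self.a1_size, self.am_size = a1_size, am_size
        self.on_evict = on_evict or (lambda k, v: None)

    def put(self, key, value):
        """New objects enter the probationary FIFO on creation."""
        self.a1[key] = value
        if len(self.a1) > self.a1_size:
            k, v = self.a1.popitem(last=False)  # FIFO eviction == 'GC'
            self.on_evict(k, v)

    def get(self, key):
        if key in self.am:            # LRU hit: refresh recency
            self.am.move_to_end(key)
            return self.am[key]
        if key in self.a1:            # second touch: promote to LRU section
            value = self.a1.pop(key)
            self.am[key] = value
            if len(self.am) > self.am_size:
                k, v = self.am.popitem(last=False)  # LRU eviction == 'GC'
                self.on_evict(k, v)
            return value
        return None                   # object was already 'collected'
```

The `on_evict` hook is where a real system would write back a dirty object to its mapped segment before the cache slot is recycled.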
This scheme is not universal. Very long-lived processes that sporadically access 'ancient' objects will stress the VM/FS. One can extend the temporal range of the scheme by employing a set of rotating FS-backed memory segments, but of course there will be a limit (e.g. the capacity of your FS).
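The rotating-segments extension might look like this in outline (purely hypothetical names; the bookkeeping for relocating still-live objects is elided): a small ring of segment files where rotating recycles the oldest file, which is exactly where the FS-imposed limit on temporal range shows up.

```python
import os


class SegmentRing:
    """A ring of FS-backed segment files. Rotating advances the write
    target and recycles (truncates) the oldest file, so 'ancient'
    objects eventually fall off the end of the ring."""

    def __init__(self, dirpath, n_segments=3):
        self.paths = [os.path.join(dirpath, f"seg{i}.dat")
                      for i in range(n_segments)]
        self.current = 0
        open(self.paths[self.current], "wb").close()  # start with a fresh segment

    def current_path(self):
        return self.paths[self.current]

    def rotate(self):
        """Advance to the next segment, truncating whatever it held before.
        A real system would first migrate any still-live objects out."""
        self.current = (self.current + 1) % len(self.paths)
        open(self.paths[self.current], "wb").close()  # recycle oldest
        return self.paths[self.current]
```

With three segments, anything untouched for three rotations is gone; more segments buy a longer temporal range at the cost of FS space — the stated limit.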
Does that clarify the scheme?
[p.s. of course one can use a leveled cache hierarchy, with the optimal realization mapping to L1, L2, and wired memory. But a basic version can just use a single-level 2Q cache.]
> My late friend Alain Fournier once told me that he considered the lowest form of academic work to be taxonomy. And you know what? Type hierarchies are just taxonomy.
Possibly Mr. Fournier meant 'foundational' by "lowest". If not, he may wish to revisit the foundational work performed by "lowly" academics such as a Mr. Darwin in the 19th century.
[p.s. we need to alert the CPU designers that they have been cluelessly using caches all these years. If only they had higher guidance from an infallible mind...]