I’ve been running Hanami in production with a custom ROM integration (ROM SQL on PostgreSQL) without any issues for more than a year, but since upgrading to Hanami 2.2 and ROM 5.4 and replacing my custom integration with the built-in one, I’ve been battling a nasty memory leak without much success. I currently have three pods (each running one Slice): API, RabbitMQ consumer, and background jobs (Sidekiq). The background jobs pod is the most impacted, though that seems mostly due to its much higher workload: its memory grows from 100 MB to 1 GB in about 10 hours.
Using the logger and ObjectSpace I’ve managed to narrow it down to the repositories: ObjectSpace reports tons of ICLASS objects retained from `rom-repository-5.4.2/lib/rom/repository/relation_reader.rb:75` and `rom-repository-5.4.2/lib/rom/repository/class_interface.rb:63`.
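For reference, here is roughly how I’m counting them (a minimal sketch; the dump path and shell pattern are just what I happen to use). ICLASS objects don’t show up in `ObjectSpace.each_object`, but they do appear in a JSON heap dump:

```ruby
require "objspace"

# Record allocation sites so the heap dump includes file/line info.
ObjectSpace.trace_object_allocations_start

# ... let the worker process jobs for a while, then:
GC.start

# Dump the whole heap; ICLASS entries show up as "type":"ICLASS"
# along with the file/line where they were allocated.
File.open("/tmp/heap.json", "w") { |f| ObjectSpace.dump_all(output: f) }

# Then, from a shell:
#   grep '"type":"ICLASS"' /tmp/heap.json | grep -c relation_reader.rb
```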
Using the logger I also noticed that a repository’s `object_id` changes every time it is injected via `Deps`, while this isn’t true for the logger, the relations, or even the container (and its cache) used within each repository.
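To illustrate, here is a trimmed-down version of what I’m logging (the slice, class, and `repos.user_repo` key are placeholders for my real ones):

```ruby
module Main
  class SyncJob
    include Deps["repos.user_repo", "logger"]

    def call
      # Prints a different object_id for user_repo on every resolution,
      # while logger's object_id stays stable across jobs.
      logger.info("user_repo=#{user_repo.object_id} logger=#{logger.object_id}")
    end
  end
end
```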
So far the only thing out of place that I have seen is this line within ROM: https://github.com/rom-rb/rom/blob/d2de00f6249d17aea7965573972633677018f4cf/repository/lib/rom/repository/relation_reader.rb, which I guess should have been `@cache` instead of a regular local variable, but redefining this method doesn’t solve the issue; it just makes the cache accessible from outside.
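For completeness, this is the kind of override I tried (a rough sketch of my guess, not ROM’s actual code; I’m assuming the method at the linked line is the reader’s `cache`, and `Concurrent::Map` stands in for whatever it really builds):

```ruby
require "concurrent/map" # already loaded transitively via rom

module ROM
  class Repository
    class RelationReader < Module
      # Attempted fix: memoize in an ivar instead of rebuilding on
      # every call. This made the cache reachable from outside but
      # did not stop the ICLASS growth.
      def cache
        @cache ||= Concurrent::Map.new
      end
    end
  end
end
```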
I’m a bit lost and running short on time to investigate further, so I was wondering if some of you with deeper knowledge of Hanami and ROM had an idea of what the cause could be, or at least where I should look next.