ClickHouse backups and storage usage

I self-host Plausible Analytics in my private Kubernetes cluster, and it uses ClickHouse as its datastore. Two questions:

1) Out of the box, ClickHouse uses a ton of storage for logs, so I changed some config as explained in an article I found to quiet the logging down.

It still uses more storage than needed, though. For example, it was already using 4GB for metrics spanning a short period. Running the command from the article brought the storage down to just 20MB, which is reasonable for the tiny amount of data it has collected.
Does anyone know a way to avoid doing this? At the moment I’m running that command periodically (roughly the sketch below).
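
For reference, this is the kind of cleanup I mean; a minimal sketch, assuming the growth comes from ClickHouse's internal system.* log tables (the exact set of tables varies by ClickHouse version):

```bash
# Truncate ClickHouse's internal log tables to reclaim disk space.
# Table names below are assumptions; check SHOW TABLES FROM system
# on your version to see which log tables actually exist.
clickhouse-client --query "TRUNCATE TABLE system.query_log"
clickhouse-client --query "TRUNCATE TABLE system.query_thread_log"
clickhouse-client --query "TRUNCATE TABLE system.trace_log"
clickhouse-client --query "TRUNCATE TABLE system.metric_log"
clickhouse-client --query "TRUNCATE TABLE system.asynchronous_metric_log"
```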

2) What’s the best way to back up a ClickHouse database? I found this – but it doesn’t support the table format used by one of Plausible’s tables. So for now I’m using Velero (since I’m on K8s) to back up the filesystem, freezing the filesystem during the backup so it isn’t captured in an inconsistent/corrupted state (see the sketch below). Is there anything better? I’d prefer something like the usual dumps we do for Postgres and MySQL.
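
For context, the freeze is wired up with Velero's pre/post backup hooks; a minimal sketch, where the pod name, container name, and data path are assumptions for illustration:

```bash
# Annotate the ClickHouse pod so Velero runs fsfreeze on the data volume
# before the backup and thaws it afterwards.
# "clickhouse-0", "clickhouse", and /var/lib/clickhouse are assumed names;
# substitute whatever your deployment actually uses.
kubectl annotate pod clickhouse-0 \
  pre.hook.backup.velero.io/container=clickhouse \
  pre.hook.backup.velero.io/command='["/sbin/fsfreeze", "--freeze", "/var/lib/clickhouse"]' \
  post.hook.backup.velero.io/container=clickhouse \
  post.hook.backup.velero.io/command='["/sbin/fsfreeze", "--unfreeze", "/var/lib/clickhouse"]'
```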

Thanks
