Size of SQL dump

I just did a SQL dump of my Corteza instance. I’ve got 34 modules, most of which have < 500 records, one of which has about 7000 records. I’m getting SQL dumps of 60 GB when I do a backup. This seems excessive. How much storage space does a single record take up?

That strongly depends on the number of fields and the field types. It also depends on whether you exported your attachments/files.
But you’re right, it does seem excessive. Have you tried exporting just a namespace and comparing?
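
If you have direct database access, one rough way to see what a single record costs is to divide the records table's on-disk size by its row count. This is only a sketch: the connection details are placeholders, and it assumes the records live in the compose_record table mentioned below.

```
# Rough average on-disk bytes per row of the records table (includes indexes/TOAST).
# Placeholders: adjust user, host, and database name to your setup.
psql -U corteza -h localhost -d corteza -c "
  SELECT pg_size_pretty(
           pg_total_relation_size('compose_record') / NULLIF(count(*), 0)
         ) AS avg_size_per_record
  FROM compose_record;"
```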

That 60GB doesn’t include file attachments.

Also, I noticed something when I was trying to hard-delete some records (to reduce the backup size) - the compose_record_values table doesn’t exist anymore. Only compose_record is there. Has the values table been deprecated?

Afaik yes, but I'm not the one who knows the most about this. @tjerman will know more.
Regarding the size, it does seem quite large. The actual namespace export (from the UI) should be much smaller, so something must be bloating the dump.

If possible, I suggest inspecting the dump and seeing which part is the largest size-wise.
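
For example, if you can run queries against the database directly, something like this should show where most of the space is going (the connection details are placeholders for your setup):

```
# List the ten largest tables, including their indexes and TOAST data.
# Placeholders: adjust user, host, and database name to your setup.
psql -U corteza -h localhost -d corteza -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```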

To analyze and manage a large PostgreSQL dump file (like your 60GB file), you can dump each database object (tables, views, functions, etc.) separately. This approach allows you to break down the dump into smaller, more manageable files.

I have never used the command myself, as I'm new to PostgreSQL, but try this:

```
pg_dump -U [username] -h [hostname] -d [database_name] -t [table_name] -F c -f [output_file].dump
```

Then, you should be able to see which table is causing the dump file to be massive.
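
If it helps, here is a rough sketch that dumps every table into its own file so you can compare sizes afterwards. The connection details and the public schema are assumptions; adjust them for your setup.

```
# Dump each table in the public schema into its own custom-format file,
# then list the resulting dumps from largest to smallest.
# Placeholders: adjust user, host, and database name to your setup.
for t in $(psql -U corteza -h localhost -d corteza -At -c \
    "SELECT tablename FROM pg_tables WHERE schemaname = 'public'"); do
  pg_dump -U corteza -h localhost -d corteza -t "public.$t" -F c -f "$t.dump"
done
ls -lhS ./*.dump   # largest dump files first
```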