MySQL/Postgres Issues

I have really struggled with a few things, and today my woes continue. The importer fails to load my 5k or so records on both Linode and DigitalOcean with the multi-image MySQL setup.

I tried it with Postgres on AWS, and I didn't get the errors. However, the Postgres image was not set up to be persistent out of the box, and each time I shut the database container down my data was lost.

On a virtual machine with plenty of resources, the MySQL multi-image install will not handle more than ~3,000 imported records.

I can switch back to the Postgres database, but it doesn’t save data. I have AWS snapshots of each step in the process. I can successfully load the data and then dump the database to a backup, but I wasn’t able to restore it correctly when I attempted to do so.
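
For reference, a minimal dump/restore sketch with the stock postgres image, assuming the db service name, the corteza user, and the default corteza database name used in the compose files below (adjust if your setup differs; the backup filename is arbitrary):

# dump the whole database to a plain-SQL file on the host
docker-compose exec -T db pg_dump -U corteza corteza > corteza-backup.sql

# restore it into a fresh, empty database (with the Corteza server stopped)
docker-compose exec -T db psql -U corteza corteza < corteza-backup.sql

The -T flag stops docker-compose from allocating a TTY so that the shell redirections behave.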

I reached out to the Crust team about a low-code setup, but I haven't even been able to get their workflows to work. Can someone check the persistence settings for the Postgres setup? That one seems to work better for my needs, if I could simply keep the data after a docker-compose down.

So TLDR:
I used the multi-image install settings on AWS, Linode, and DigitalOcean. Imports fail after several records, eventually rendering the import tool useless somewhere between 3k and 5k rows.

Postgres didn't have the issue, but it didn't save my data to the virtual disk.
I used the YAML and .env settings verbatim from this page, and there is no data in the data/db folder:
https://docs.cortezaproject.org/corteza-docs/2021.3-draft/devops-guide/online-deployment/multi-pgsql.html

To prevent someone from doing a bunch of work on a non-persistent database, perhaps the documentation for the Postgres installation should add the volume option:

version: '3.5'

services:
  server:
    image: cortezaproject/corteza-server:${VERSION}
    networks: [ proxy, internal ]
    restart: on-failure
    env_file: [ .env ]
    depends_on: [ db ]
    volumes: [ "./data/server:/data" ]
    environment:
      # VIRTUAL_HOST helps NginX proxy route traffic for specific virtual host to
      # this container
      # This value is also picked up by initial boot auto-configuration procedure
      # If this is changed, make sure you change settings accordingly
      VIRTUAL_HOST: ${DOMAIN}
      # This is needed only if you are using NginX Lets-Encrypt companion
      # (see docs.cortezaproject.org for details)
      LETSENCRYPT_HOST: ${DOMAIN}

  db:
    # PostgreSQL Database
    # See Docker Hub for details
    image: postgres:13
    networks: [ internal ]
    volumes: [ "./data/db:/var/lib/postgresql/data" ]
    restart: on-failure
    healthcheck: { test: ["CMD-SHELL", "pg_isready -U corteza"], interval: 10s, timeout: 5s, retries: 5 }
    environment:
      # Warning: these are values that are only used on 1st start
      # if you want to change it later, you need to do that
      # manually inside db container
      POSTGRES_USER: corteza
      POSTGRES_PASSWORD: corteza

networks:
  internal: {}
  proxy: { external: true }
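
With that bind mount in place, a quick persistence sanity check (just a sketch, using the service layout from the snippet above) is to cycle the stack and confirm the data directory survives:

docker-compose up -d
# ... create a namespace or a record through the UI ...
docker-compose down
ls ./data/db          # should now contain PostgreSQL's files (base/, pg_wal/, ...)
docker-compose up -d  # whatever was created earlier should still be there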

Pulling out a few hairs…
I loaded the YAML files above, once again created my namespaces, added my fields, and imported my accounts. When I went to import contacts, I got:
db_1 | 2021-09-16 01:59:55.567 UTC [110] ERROR: current transaction is aborted, commands ignored until end of transaction block
db_1 | 2021-09-16 01:59:55.567 UTC [110] STATEMENT: INSERT INTO compose_record (created_at,created_by,deleted_at,deleted_by,id,module_id,owned_by,rel_namespace,updated_at,updated_by) VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)
ERROR: current transaction is aborted
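
"current transaction is aborted" is a follow-on message in PostgreSQL: an earlier statement in the same transaction already failed, and everything after it is refused until the transaction is rolled back. The statement that actually caused the failure should be a few lines earlier in the db logs; something like this (a sketch, using the db service name from the compose file above) should surface it:

# the first ERROR logged before the "transaction is aborted" lines
# is the real reason the insert failed
docker-compose logs db | grep ERROR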

I gave up for the time being. I was able to get my 5,000+ record import to work running on localhost on my Mac. Then I was able to import that database into both a cloud server and a local pizza box at the office. I used the MySQL image as I'm more familiar with those commands.
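
In case it helps anyone doing the same move, this is roughly the dump/load cycle that transfer implies with the MySQL image (a sketch only; the user, password, and database name here are placeholders, so use whatever your MySQL .env defines):

# on the Mac: dump the Corteza database out of the local db container
docker-compose exec -T db mysqldump -u corteza --password=corteza corteza > corteza.sql

# on the cloud server / pizza box: load the dump into the empty database
docker-compose exec -T db mysql -u corteza --password=corteza corteza < corteza.sql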

I believe the MySQL timeout issue is some sort of processing the app is doing against existing records, which is why it slowly starts to time out. For example, it will import 1,000 records no sweat, then 800, then 500, and eventually, right around the 3,100 mark, it won't import anything.
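
One way to test that theory (again a sketch; the credentials are placeholders for whatever your MySQL .env defines, and the service name assumes the same layout as the Postgres examples above) is to watch what the database is doing while an import runs. If the same per-record statements take longer and longer as the table grows, that would explain the gradual slowdown:

# run this repeatedly during an import to see which statements are in flight
docker-compose exec db mysql -u corteza --password=corteza -e "SHOW FULL PROCESSLIST;"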

It might be worth checking whether a workflow is called upon inserting a record. If you import thousands of records and call the workflow thousands of times, that might give issues with importing. I've imported tens of thousands of records myself before without issues, but workflows were off for those records.

I have been trying to set up persistent data as in the .yaml file above with
volumes: [ “./data/db:/var/lib/postgresql/data” ]
and when I try to start it I get the following error:

ERROR: Named volume ““./data/db:/var/lib/postgresql/data”:rw” is used in service “db” but no declaration was found in the volumes section.
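
For what it's worth, that message usually means Compose could not parse the entry as a bind mount and fell back to treating it as a named volume, which would then have to be declared under a top-level volumes: key. The curly quotes visible inside the reported volume name suggest that typographic quotes (easy to pick up when copying snippets from a rendered web page) made it into the value, so it no longer starts with ./ and is not recognized as a path. With plain straight quotes the same line is read as a bind mount and needs no extra declaration:

    volumes: [ "./data/db:/var/lib/postgresql/data" ]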

My docker-compose.yaml

version: '3.5'

services:
  server:
    image: cortezaproject/corteza:${VERSION}
    networks: [ proxy, internal ]
    ports: [ "192.168.0.125:1030:80" ]
    restart: always
    env_file: [ .env ]
    depends_on: [ db ]
    volumes: [ "./data/db:/var/lib/postgresql/data" ]
    environment:
      # VIRTUAL_HOST helps NginX proxy route traffic for specific virtual host to
      # this container
      # This value is also picked up by initial boot auto-configuration procedure
      # If this is changed, make sure you change settings accordingly
      VIRTUAL_HOST: ${DOMAIN}
      # This is needed only if you are using NginX Lets-Encrypt companion
      # (see docs.cortezaproject.org for details)
      # LETSENCRYPT_HOST: ${DOMAIN}

  db:
    # PostgreSQL Database
    # See Docker Hub for details
    image: postgres:13
    networks: [ internal ]
    volumes: [ "./data/db:/var/lib/postgresql/data" ]
    ports:
      - 5432:5432
    restart: always
    healthcheck: { test: ["CMD-SHELL", "pg_isready -U corteza"], interval: 10s, timeout: 5s, retries: 5 }
    environment:
      # Warning: these are values that are only used on 1st start
      # if you want to change it later, you need to do that
      # manually inside db container
      POSTGRES_USER: corteza
      POSTGRES_PASSWORD: corteza

networks:
  internal: {}
  proxy: { external: true }
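
One other thing that stands out when comparing this to the snippet further up: here the server service mounts ./data/db:/var/lib/postgresql/data, i.e. the Postgres data directory, while the earlier snippet mounts the server's own data directory. If that is not intentional, the server entry presumably wants something like:

    volumes: [ "./data/server:/data" ]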