Database schema

I’ve got a requirement to integrate an external custom reporting tool with Corteza so it can process data using SQL. I understand this is probably a special requirement, and that Corteza is meant to be integrated via the REST API.

I’ve looked at the current DB schema and learned that it uses EAV modelling, which is not well suited to direct SQL. Is there any possibility to use a “standard” relational DB model somehow, or to synchronise the current EAV model with a relational model?

Out of the box, no, so you’d have to do the syncing on your own.
You could probably get it sorted with automation.

Syncing to an external DB via REST API

If that other DB is accessible via a REST API, you can use workflows.
Have a before update/create trigger send the data over to the other DB and call it a day.
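A rough sketch of what that sync step could look like on the automation-script side; the endpoint URL, record shape, and field names here are assumptions for illustration, not Corteza’s actual trigger signature:

```javascript
// Sketch only: `record.values` loosely mirrors the shape a Corteza
// before-create/update trigger hands you; adjust to your actual payload.

// Flatten a record into the JSON body the external DB's REST API expects
function buildSyncPayload(record) {
  const payload = { recordID: record.recordID };
  for (const { name, value } of record.values) {
    payload[name] = value;
  }
  return payload;
}

// POST the payload; in a workflow this would be an HTTP request step instead
async function syncRecord(record, endpoint) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildSyncPayload(record)),
  });
  if (!res.ok) {
    throw new Error(`sync failed: ${res.status}`);
  }
}
```

If the external API rejects the write, throwing from the before trigger also aborts the Corteza operation, which keeps the two stores from drifting apart silently.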

Syncing into the same DB Corteza is using

Then I think it could be possible with automation scripts.

You wouldn’t be able to do it purely with workflows as they don’t give you quite as much freedom as automation scripts.
You can, however, mix-and-match the two.

You could get some NPM package to connect and execute arbitrary SQL on your DB.
Just make sure you handle SQL injection and all those pesky bits and pieces.

You would then have a before update/create trigger to build up a create/update statement, run it over the new tables, and call it a day.
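As a sketch of the statement-building part (the target table layout and record shape are assumptions; you’d pair this with an SQL client package from NPM such as `mysql2` or `pg`), using placeholders keeps record data out of the SQL text, which covers the injection concern for values:

```javascript
// Sketch only: assumes a flat mirror table keyed by record_id (MySQL-style
// upsert syntax). Column names come from module field names, which are
// admin-controlled in Corteza, but whitelist them anyway to be safe.
function buildUpsert(table, record) {
  const cols = ['record_id'];
  const params = [record.recordID];
  for (const { name, value } of record.values) {
    cols.push(name);
    params.push(value);
  }
  // Values go in as `?` placeholders, never concatenated into the SQL string
  const placeholders = cols.map(() => '?').join(', ');
  const updates = cols.map((c) => `${c} = VALUES(${c})`).join(', ');
  const sql =
    `INSERT INTO ${table} (${cols.join(', ')}) VALUES (${placeholders}) ` +
    `ON DUPLICATE KEY UPDATE ${updates}`;
  return { sql, params };
}
```

You would then pass `sql` and `params` to your driver’s parameterized-query call (e.g. `connection.execute(sql, params)` with `mysql2`).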

I wouldn’t recommend a workaround for such a thing

because you will have two sources of truth for the data.
Also, consider consistency here: if you store the data somewhere before create/update and the operation fails for some reason, you have to retry until it succeeds, or go back and delete the stored data.
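To illustrate the “retry till it succeeds” half of that concern, a minimal retry-with-backoff helper might look like this (purely a sketch; the attempt count and delays are arbitrary):

```javascript
// Retry an async operation with exponential backoff; if all attempts fail,
// rethrow so the caller can roll back (delete the already-stored data).
async function withRetry(op, attempts = 3, delayMs = 100) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      await new Promise((r) => setTimeout(r, delayMs * 2 ** i)); // backoff
    }
  }
  throw lastErr; // out of attempts: compensate/roll back in the caller
}
```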

Depending on your development capacity and business use cases, I would suggest considering building an extension to the source code directly.

I would not consider this a deal-breaker, especially if there is no other solution.
As long as you make sure to properly keep your data synced up you should be fine.

Instead of ensuring synchronization before create/update/delete, you could also do it at an interval (let’s say 1h for the sake of the argument), where you would fetch changes from 1h ago and make sure things are synced up.

This would also mitigate your concern of write 1 passing while write 2 fails.
Data would be a bit out of sync, but for some use-cases that is ok.

This could also raise the same issue as the above-proposed solution: a write to the standard record store could pass, but a write to the newly created DB/table could fail; what then?
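The interval approach above could be sketched roughly like this; the `Compose.findRecords` call and the filter syntax loosely follow Corteza’s automation-script API, but treat both as assumptions to verify against your version:

```javascript
// Build a record filter matching anything changed within the last `hours`
// hours; an interval-triggered automation script would run this periodically.
function changedSinceFilter(now, hours) {
  const cutoff = new Date(now.getTime() - hours * 60 * 60 * 1000);
  const stamp = cutoff.toISOString();
  return `updatedAt >= '${stamp}' OR createdAt >= '${stamp}'`;
}

// Hypothetical usage inside an interval-triggered script:
// Compose.findRecords(changedSinceFilter(new Date(), 1), 'reportModule')
//   .then(({ set }) => { /* upsert each record into the external DB */ });
```

Overlapping the window slightly (e.g. fetching 2h of changes every 1h) is a cheap way to paper over a run that failed or started late.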
