As far as SQL-related shenanigans go, we’ve prepared a few snippets here, but please do be careful with this, especially if such work is new to you. We usually recommend performing a DB backup before any such larger destructive operation.
Alternatively, instead of deleting the data, you could set the deleted_at column to NOW() to perform a soft delete. The query changes to an update, but the condition remains the same.
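For illustration, here is roughly what the two variants could look like. This is only a sketch that assumes records live in the default compose_record table with a module_id column; double-check the table and column names against your own database (and back it up) before running anything.

```sql
-- Soft delete: mark every record of one module as deleted (assumed table/column names)
UPDATE compose_record
   SET deleted_at = NOW()
 WHERE module_id = <your module ID>
   AND deleted_at IS NULL;

-- Hard delete: permanently remove the same records (destructive; back up the DB first)
DELETE FROM compose_record
 WHERE module_id = <your module ID>;
```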
You’ll need to loop through the records and delete them one after another; the workflow would look something like this:
The iterator is for compose records and the function is delete record.
If you have a lot of data to delete and you’re comfortable with it, the direct DB interaction would be by far the easiest.
If not, write a quick workflow like @munawir suggested, or use the bulk delete option by selecting the records you wish to delete.
I want to do it via workflow so I can understand how it works. Thanks for the diagram. What are the details I need to complete in each block? i.e. how do I tell it to delete all records in my Lead module?
As far as the original question goes, this is how I’ve done it for the built-in CRM app we provide…
Make sure to adjust it to your needs and be careful not to delete things you might want to keep.
If you’re new to this, a safety DB backup never hurt anyone.
The module handle is found in the module editor screen, next to the module name.
The namespace handle is found in the namespace editor screen, next to the namespace title. Alternatively, you can find it in the URL. For example, in http://localhost:18083/compose/ns/crm/admin/modules the namespace handle is crm – the bit after /ns.
Open up the configurator for the iterator and make sure the handles are entered as constants, not expressions (there is a small toggle to switch between an expression and a constant).
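To tie it back to the question about the Lead module, here is a rough sketch of how the two steps might be filled in; the exact field labels can differ between Corteza versions, so treat this as an illustration and verify the handles in your own module and namespace editor screens:

```
Iterator step ("Records"):
  module:    Lead    (constant, not an expression)
  namespace: crm     (constant, not an expression)
  query:     left empty, so every record in the module is matched

Function step ("delete record", inside the iterator's loop):
  record:    the current record exposed by the iterator
```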
Hi @BenjaminDover, perhaps I’m shooting in the dark here, but is it possible this is somehow related to the pagination? From the screenshot, I’d assume you increased the number of records to 5000 per page and perhaps that’s too much data to digest at once. Can you decrease this number and see what happens?
I do not recommend using a workflow to delete a lot of data. If you have more than 1,000 records in your CRM you will get a 501 error and corteza-server is likely to crash.
As suggested, using a workflow is not recommended; alternatively, you can call a server-script from the workflow, which is better optimized.
I’ll prepare a workflow to demonstrate that we can easily crash Corteza with too much data, and I’ll provide the server configuration.
By default I work with a heavily oversized (HC) server; I then tried to reduce its capacity, but in my case it crashes right at the start, even with the HC configuration.
I would suggest you set up a local instance and test there to see whether the issue is with Corteza or with your server configs.
I ran some quite heavy-duty workflows not so long ago and they went through without timing out.
There has, however, been a feature that could cause workflow execution to crash due to high memory consumption.
Workflow execution keeps a stack trace, but it is a bit too verbose; coupled with long-running workflows, the memory consumption can get high.
In 2022.3.0 we added a WORKFLOW_STACK_TRACE_ENABLED .env variable, which allows you to disable stack traces.
If your instance crashes due to out-of-memory errors, you have the following options:
Disable the stack trace
Use the above .env variable to disable stack traces; not ideal, but quite acceptable in some cases, for example:
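Assuming a boolean false turns the feature off, the setting in the server’s .env would look like this (restart the server afterwards):

```
WORKFLOW_STACK_TRACE_ENABLED=false
```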
Split up the work
Utilize the messaging queues to have different workflows perform smaller subsets of work.
We don’t have many docs on how to do this, but the TL;DR of it is:
in the admin panel, under messaging queues, define a new queue where the consumer is set to event bus
in workflow A, use a function step where the function is “queue message send”
in workflow B, use a trigger with resource of “system queue” and event “on message”.
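A rough sketch of how the three pieces fit together, using a hypothetical queue name (the exact labels may differ slightly between Corteza versions):

```
Queue (admin panel, messaging queues):
  name:     record_cleanup      <- hypothetical name
  consumer: Event bus

Workflow A (producer):
  function step "queue message send"
    queue:   record_cleanup
    payload: a small batch of work, e.g. a list of record IDs

Workflow B (consumer):
  trigger: resource "system queue", event "on message", queue record_cleanup
  steps:   read the payload and process only that subset of records
```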