Can a workflow fetch data from a DB?

Hello there

I need a workflow to fetch data from a SQL database directly.

What I have in mind is putting my DB behind a REST API and calling that API from the workflow, but that would be a bit overwhelming since I'd need to host my API, secure the connection, and much more.

So I wonder if I can write my SQL query in the workflow … is that possible?

Short answer: no.

Long answer: yes, but…
Not solely using workflows.

You can use automation scripts to write arbitrary code that can do anything from connecting to databases to rendering PDF documents; they're basically Node.js scripts.

The most straightforward solution, done purely with Corteza, would be to write automation scripts that connect to the DB directly, which you could then execute from a workflow.


@tjerman thanks for your input

I wonder if there is an easier way to upload automation scripts to the platform without any deployment process. Is there any place in the UI to write my script and store it in the DB, so that when it scales out to another server everything is there?

No, automation scripts require Corredor in order to run.


Hey @tjerman, I just wrote the scripts and couldn't upload them.

Do I have to include Corredor in the docker-compose.yaml file?
I already enabled it in the .env file, but when I log into the server I only see the provision, webapp, and bin directories.

From what I understand from the docs, I should find a directory called corredor.

Please let me know if I misunderstood something … thanks

I'll write down some proper instructions as soon as I'm able (my hands are a bit tied right now).

The general idea is that you:

  1. set up Corredor & configure the server
  2. write automation scripts
  3. upload those scripts to the server and make them available to Corredor via volumes

For the setup part, here is one of my local setups I use to mess around (there are both .env and docker-compose.yaml files) – .env · GitHub
I’ve marked the .env with ####### LOOK HERE ####### for relevant variables.

For the "write automation scripts" step, refer to here.
My setup looks like this:
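A hypothetical layout along these lines, assuming CORREDOR_EXT_SEARCH_PATHS points at the mounted directory (all names are placeholders, consistent with the script path /server-scripts/general.js seen later in this thread):

```text
corredor/                  # mounted into the Corredor container
├── package.json           # dependencies for the server scripts
├── node_modules/          # installed next to package.json
└── server-scripts/
    └── general.js         # automation scripts live here
```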

For the volumes, notice volumes: under corredor and CORREDOR_EXT_SEARCH_PATHS .env variable.
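As a hedged sketch of how those two pieces fit together (the image tag, paths, and values here are assumptions; the linked gist has the real ones):

```yaml
services:
  corredor:
    image: cortezaproject/corteza-server-corredor:${VERSION}
    networks: [ internal ]
    volumes:
      # make the local scripts (and package.json) visible inside the container
      - ./corredor:/corredor/usr
    environment:
      # colon-separated list of paths Corredor scans for scripts
      - CORREDOR_EXT_SEARCH_PATHS=/corredor/usr
networks:
  internal: {}
```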

Hope this helps; if not, I'll write some more notes, for your exact use case if I manage.


@tjerman as always, your input is a treasure

I see you included Corredor in the docker-compose file … I will give it a try and let you know.

Yes, I've included it there so it runs alongside the other containers.
If you prefer, you could run Corredor somewhere completely separate, but that will probably cause more issues than it's worth (making sure everything is accessible).

@tjerman I've tried it locally and got Corredor running fine. I can see the scripts on the Corredor scripts page in the admin panel, and everything is good so far. I will test it tomorrow on the AWS servers.

I wrote the script to fetch my data from the DB, but I keep getting this error from Corredor:
Error: can not run uninitialized script

and here is the code that causes it:
import pg from "pg";

const credentials = { user: "", host: "", database: "", password: "", port: 5432 };

async function execute_query(query) {
  const client = new pg.Client(credentials)
}

Any idea what's causing the error?

@tjerman Hi again. I just uploaded my docker-compose to AWS and everything is running as expected.

Regarding the automation scripts: when I write a simple script, like printing some logs to the console, it runs as expected. But when I try to connect to my DB or do any network operation (I've tried with the axios package), it shows the same error as above:
Error: can not run uninitialized script
and here are the logs from the Corredor server:

corredor_1 | [1650572354994] DEBUG (grpc.server-scripts): executing script /server-scripts/general.js:default
corredor_1 | rpc: "Exec"
corredor_1 | script: "/server-scripts/general.js:default"
corredor_1 | requestId: "90143062ebc1/zQcCnrIqnr-000054"
corredor_1 | [1650572355010] DEBUG (grpc.server-scripts): can not run uninitialized script
corredor_1 | rpc: "Exec"
corredor_1 | script: "/server-scripts/general.js:default"
corredor_1 | requestId: "90143062ebc1/zQcCnrIqnr-000054"
corredor_1 | stack: "Error: can not run uninitialized script\n at ServerScripts.getExecutable (/corredor/src/services/server-scripts.ts:72:13)\n at Object.Exec (/corredor/src/grpc-handlers/server-scripts.ts:136:11)\n at /corredor/node_modules/grpc/src/server.js:593:13"
corredor_1 | code: 5
corredor_1 | [1650572355013] DEBUG (grpc.server-scripts): can not run uninitialized script
corredor_1 | rpc: "Exec"
corredor_1 | script: "/server-scripts/general.js:default"
corredor_1 | requestId: "90143062ebc1/zQcCnrIqnr-000054"
corredor_1 | stack: "Error: can not run uninitialized script\n at ServerScripts.getExecutable (/corredor/src/services/server-scripts.ts:72:13)\n at ServerScripts.exec (/corredor/src/services/server-scripts.ts:87:12)\n at Object.Exec (/corredor/src/grpc-handlers/server-scripts.ts:142:11)\n at /corredor/node_modules/grpc/src/server.js:593:13"
corredor_1 | code: 2
corredor_1 | This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason:
corredor_1 | Error: startBatch failed
corredor_1 | at handleError (/corredor/node_modules/grpc/src/server.js:72:8)
corredor_1 | at sendUnaryData (/corredor/node_modules/grpc/src/server.js:598:9)
corredor_1 | at Object.HandleException (/corredor/src/grpc-server/errors.ts:69:3)
corredor_1 | at /corredor/src/grpc-handlers/server-scripts.ts:159:13
corredor_1 | at processTicksAndRejections (internal/process/task_queues.js:95:5)

Do you think it has something to do with the docker-compose networking? Only the server exposes its HTTP port to the outside, and Corredor is reached via the internal network in the docker-compose. If that could be the case, could you please share a docker-compose file ready for a production environment? … thanks

Did you include a package.json with the required packages (package.json goes alongside server-scripts folder)?
For me, the error occurred because I forgot to do just that.

My package.json:

{
  "name": "foo",
  "scripts": {},
  "dependencies": {
    "axios": "^0.21.1",
    "mysql": "2.18.1"
  },
  "devDependencies": {}
}

You will also most likely have to install them manually because of a yarn cache issue; see here.

Then, for the sake of completeness, here is my script, which indicates it probably works (I used MySQL):

const mysql = require('mysql')

export default {
  label: "label goes here",
  description: "description goes here",

  triggers ({ before }) {
    return before('create', 'update')
      .for('compose:record')
  },

  // If you don't need the Compose helper, remove it
  async exec ({ $record }, { Compose }) {
    // Code goes here
    console.log('hello')

    try {
      var con = mysql.createConnection({
        host: 'db2',
        port: '3306',
        user: 'dbuser',
        password: 'dbpass',
        database: 'dbname',
        charset: 'utf8mb4_general_ci',
        insecureAuth: true,
      })
  
      con.connect(function(err) {
        // note: throwing inside this callback would escape the
        // surrounding try/catch, so log the error instead
        if (err) { console.error(err); return }
        console.log("Connected!");
      });

    } catch (err) {
      console.error(err)
    }

    return $record
  },
}

As far as networking goes, if both services are on the same network (internal here), you should be able to access them just fine (see the db2 host for me).

I did, however, need to do this to make authentication pass.
I haven’t worked with databases in quite a while so I just blindly copy-pasted stuff that looked about right; for your production instance (mine is local) you’ll probably need to be more careful.
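Since the original question used the pg package, here is a hypothetical Postgres variant of the database part (the connection details are placeholders, executeQuery is a helper name I made up, and the surrounding Corredor export with triggers/exec would stay the same as in the MySQL script above):

```javascript
// Hypothetical sketch, not verified against a live database.
// Runs a query on any client-like object and returns the rows;
// kept separate so it can be exercised without a real connection.
async function executeQuery(client, sql, params = []) {
  const res = await client.query(sql, params)
  return res.rows
}

// Inside exec() you would then do something along these lines
// (host/user/password/database are placeholders):
async function fetchFromDb() {
  const pg = require('pg') // needs "pg" in the package.json mentioned above
  const client = new pg.Client({
    host: 'db2',           // compose service name on the internal network
    port: 5432,
    user: 'dbuser',
    password: 'dbpass',
    database: 'dbname',
  })
  await client.connect()
  try {
    return await executeQuery(client, 'SELECT 1 AS ok')
  } finally {
    await client.end()
  }
}
```

Unlike the callback-style mysql package, pg's promise API lets errors surface in the script's own try/catch.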

@tjerman can you tell me where the Corredor server installs the node_modules directory?

That is standard Node stuff: by default it will be installed next to package.json.

If you're wondering about the yarn cache (the link from before), refer to the docs.
The issue is that the container doesn't have enough space for the cache, so it errors out.
If you either set it to ignore the cache or mount a new volume for the cache directory, it should be fine.
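For the volume route, a hypothetical compose fragment (the cache path inside the container is an assumption; verify it with `yarn cache dir` in the container):

```yaml
services:
  corredor:
    volumes:
      # give yarn's cache its own volume so it doesn't fill the container
      # filesystem; the path below is assumed, check `yarn cache dir`
      - yarn-cache:/root/.cache/yarn
volumes:
  yarn-cache: {}
```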

@tjerman thankfully, it worked.
The issue, as you said, was that yarn didn't install the packages because of its cache issue.
I'm a bit concerned about how this is going to scale horizontally. I'm using AWS autoscaling, so when the load increases a new EC2 instance will be created; if the yarn issue happens again, I'll be stuck!

I managed to do the HTTP request using axios; we'll see how the DB goes from here.

Since node_modules is located next to the actual scripts (and available via the volume), when that folder replicates, the modules should work out of the box.
Alternatively, as I said before, you could mount another volume for the yarn cache.