Creating your first automation

Estimated time: 15 minutes

Create an account

You can create a free account at TunnelHub.io. Just enter your name and email and accept the terms and conditions. No credit card is required for free accounts.

Installing TunnelHub CLI

To help during the development process, we created a CLI package to interact with our platform in a simple and productive way. To install it, you must have Node.js v12+ and npm installed.

  • npm install -g @tunnelhub/cli

If you are using Yarn, run:

  • yarn global add @tunnelhub/cli

Logging in with the CLI

To run deploys, list existing automations, or create new resources, you must be authenticated with our platform. This can be done with the th login command. When running it, you will be prompted for three pieces of information:

  • Tenant ID

  • User

  • Password

The Tenant ID can be found via the information button on the upper right bar of the system, in the Company ID field. The username and password are the same as those used to sign in to the portal.

Now you are ready to use all CLI commands in your TunnelHub account. You can execute th login-check at any time to verify that your credentials are valid.

Create a package

Before creating an automation, it's necessary to create a package. Packages are logical units for grouping items together within the TunnelHub platform.

You can create your package in the user interface or using the CLI. To create a package in the DEV environment using the CLI, execute:

th create-package --env DEV

Create an automation

Now it's time to create an automation. You can do it in the user interface or using the CLI. To create an automation in the DEV environment using the CLI, use the command:

  • th create-automation --env DEV

This command creates the automation in TunnelHub and also generates an initial skeleton for your application based on one of four models:

  • No delta (individual)

  • No delta (in batch)

  • With delta (individual)

  • With delta (in batch)

According to the chosen model, a different template will be created with all the code necessary to start your automation. In most cases, the template "No delta (individual)" is a good choice, so let's use it:

All necessary code will be created in a new folder with the name chosen for your automation. You can open it in your favorite IDE, like VS Code, with cd My-first-automation && code .

If you check the user interface in the web app, the newly created automation is visible there too. Now you are ready to start coding your automation.

Coding

First of all, we need to install all dependencies by executing yarn install at the project root. With dependencies installed, our integration code is in src/classes/integration.ts. This class has three important methods (a skeleton sketch follows the list below):

  • loadSourceSystemData

  • defineMetadata

  • sendData
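Putting the three methods together, the class has roughly the following shape. This is a simplified sketch: the exact base class, import paths, and type definitions come from the template generated by the CLI.

// Sketch only: assumes Metadata and IntegrationMessageReturn are exported by
// @tunnelhub/sdk; the generated template defines the real import paths and base class.
import { Metadata, IntegrationMessageReturn } from '@tunnelhub/sdk';

export default class Integration {
  /** Collects data from the source system; async, returns an array of objects */
  async loadSourceSystemData(payload?: any): Promise<any[]> {
    return [];
  }

  /** Describes that data as columns for the monitoring screen; sync */
  defineMetadata(): Metadata[] {
    return [];
  }

  /** Sends one item to the target ("No delta (individual)" model); async */
  async sendData(item: any): Promise<IntegrationMessageReturn> {
    return { data: {}, message: 'Success' };
  }
}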

Method loadSourceSystemData

This method is responsible for collecting data from the source system. It's an async method and must return an array of objects. Let's check an example:

async loadSourceSystemData(payload?: any): Promise<CovidCases[]> {
  // assumes `import got from 'got';` at the top of the file
  const sourceSystemData: CovidCases[] = [];
  const gotResponse = await got(`https://covid-api.mmediagroup.fr/v1/cases?ab=BR`);

  const covidCases = JSON.parse(gotResponse.body);
  for (const state in covidCases) {
    // skip the country-wide aggregate entry
    if (state === 'All') {
      continue;
    }
    if (covidCases.hasOwnProperty(state)) {
      sourceSystemData.push({
        stateName: state,
        confirmed: covidCases[state].confirmed,
        recovered: covidCases[state].recovered,
        deaths: covidCases[state].deaths,
        updated: covidCases[state].updated,
      });
    }
  }

  return sourceSystemData;
}

In this example, data is requested from a public API using the got HTTP client, and an array of objects with five columns is returned.
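The CovidCases type used above is not shown in the snippet; inferred from the fields pushed into the array, it might look like this (the generated template may define it differently):

interface CovidCases {
  stateName: string;
  confirmed: number;
  recovered: number;
  deaths: number;
  updated: string; // timestamp string from the API, formatted as DATETIME in monitoring
}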

Method defineMetadata

This method is responsible for describing the data returned by loadSourceSystemData so that the monitoring screen can display it in a human-readable way. It's a sync method and must return an array of objects with column definitions. Let's check an example:

defineMetadata(): Metadata[] {
  return [
    {
      fieldName: 'stateName',
      fieldLabel: 'State name',
      fieldType: 'TEXT',
    },
    {
      fieldName: 'confirmed',
      fieldLabel: 'Confirmed cases',
      fieldType: 'NUMBER',
    },
    {
      fieldName: 'recovered',
      fieldLabel: 'Recovered cases',
      fieldType: 'NUMBER',
    },
    {
      fieldName: 'deaths',
      fieldLabel: 'Deaths',
      fieldType: 'NUMBER',
    },
    {
      fieldName: 'updated',
      fieldLabel: 'Last updated at',
      fieldType: 'DATETIME',
    },
  ];
}

The fields are described below:

  • fieldName: the technical name of the field returned in loadSourceSystemData

  • fieldLabel: the human-friendly name of the field to be displayed in the monitoring

  • fieldType: the type of field for formatting. The possible values are 'TEXT' | 'NUMBER' | 'DATE' | 'DATETIME' | 'BOOLEAN'. The monitoring screen will apply formatting automatically according to the field type.

  • hideInTable: optional (which is why it doesn't appear in the example above); set it to hide the field in the log table while still showing it on the detail screen. See the sketch below.
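For example, to keep the updated column out of the log table but still visible on the detail screen, its entry in the array returned by defineMetadata might look like this (a sketch using the optional flag described above):

{
  fieldName: 'updated',
  fieldLabel: 'Last updated at',
  fieldType: 'DATETIME',
  hideInTable: true, // hidden in the log table, still shown on the detail screen
},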

Method sendData

This method is responsible for sending the data to your target. In our example, we are using the "No delta (individual)" template, so this async method will be executed for each array item returned by loadSourceSystemData and must return an object defining the processing status. Let's check an example:

async sendData(item: CovidCases): Promise<IntegrationMessageReturn> {
  // assumes imports at the top of the file: `import * as ftp from 'basic-ftp';`,
  // `import { Readable } from 'stream';` and a CSV Parser class (json2csv is
  // one library with this API)
  const props = Object.keys(item);
  const opts = { fields: props };

  // converts the item into a single-row CSV string
  const parser = new Parser(opts);

  const client = new ftp.Client();
  await client.access({
    host: 'your.ftp.host.com', // placeholder credentials: replace with your server's
    port: 21,
    user: 'yourUser',
    password: 'yourPassword',
    secure: false,
  });

  // upload the CSV content from an in-memory stream
  const readable = Readable.from([parser.parse(item)]);
  await client.uploadFrom(readable, `/destinationFolder/covidcases/${Math.floor(Date.now() / 1000)}/extractedData.csv`);
  client.close();

  return {
    data: {},
    message: 'Success',
  };
}

In this example, the automation will create a file on an FTP server for each object returned by loadSourceSystemData. After creating the file successfully, it's necessary to return an object with some message (in this case, "Success") to be displayed on the monitoring screen, but it can be any other text. Exceptions are caught by default, so no special handling is necessary.
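Because exceptions are caught by default, signaling a problem with a record can be as simple as throwing. A minimal sketch (the validation rule is hypothetical):

async sendData(item: CovidCases): Promise<IntegrationMessageReturn> {
  // hypothetical guard, for illustration only
  if (!item.stateName) {
    // thrown errors are caught by the SDK; the record presumably shows up
    // in monitoring together with the error message
    throw new Error('Missing state name');
  }

  // ...upload logic as in the example above...

  return {
    data: {},
    message: 'Success', // free text shown on the monitoring screen
  };
}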

Testing locally

We know testing cloud applications is a lot of work because each deploy takes time, and there's no time to lose. So it's much better to have a way to execute and simulate the automation in our local environment. For that, we have set up basic test cases using jest in the __tests__ folder. Let's take a quick look:

import { mocked } from 'ts-jest/utils';
import * as main from '../src';
import * as ftp from 'basic-ftp';
import { NoDeltaIntegrationFlow } from '@tunnelhub/sdk/src/classes/flows/noDeltaIntegrationFlow';
import got from 'got';
import { AutomationExecution } from '@tunnelhub/sdk';
import AutomationLog from '@tunnelhub/sdk/src/classes/logs/automationLog';

jest.mock('basic-ftp');
jest.mock('got');

const mockedFtp = mocked(ftp, true);
const mockedGot = mocked(got, true);

beforeAll(() => {

  /**
   * The code below is **mandatory** to prevent the TunnelHub SDK from making external calls while trying to persist logs.
   * You can make this mock using the same code with any IntegrationFlow at @tunnelhub/sdk/src/classes/flows
   */
  const persistLambdaContextFunc = jest.spyOn(AutomationExecution as any, 'persistLambdaContext');
  persistLambdaContextFunc.mockImplementation(() => {
  });

  const persistLogsFunc = jest.spyOn(AutomationLog.prototype as any, 'save');
  persistLogsFunc.mockImplementation(() => {
  });


  const updateExecutionStatisticsFunc = jest.spyOn(NoDeltaIntegrationFlow.prototype as any, 'updateExecutionStatistics');
  updateExecutionStatisticsFunc.mockImplementation(() => {
  });

  const updateMetadata = jest.spyOn(NoDeltaIntegrationFlow.prototype as any, 'updateMetadata');
  updateMetadata.mockImplementation(() => {
  });
});


test('testMyIntegration', async () => {
  /**
   * Mocking the basic-ftp client and got
   */
  //@ts-ignore - it's not necessary to return all methods
  mockedFtp.Client.mockImplementation(() => {
    return {
      access(options) {
        //@ts-ignore
        return new Promise(resolve => resolve({}));
      },
      uploadFrom(source, toRemotePath, options) {
        return new Promise(resolve => {
          //@ts-ignore
          resolve({});
        });
      },
      close() {
      },
    };
  });

  //@ts-ignore - it's not necessary to return all methods
  mockedGot.mockReturnValue({ body: JSON.stringify(require('./data/covidCases.json')) });

  /**
   * Calling my function
   */
  const response = await main.handler({}, {});

  expect(response.statusCode).toEqual(200);
  expect(typeof response.body).toBe('string');

  expect(response.body).toEqual('Automation executed with no errors!');
});

If you are already familiar with jest, there's nothing new here. But we have some important settings for local executions in the beforeAll section:

beforeAll(() => {

  /**
   * The code below is **mandatory** to prevent the TunnelHub SDK from making external calls while trying to persist logs.
   * You can make this mock using the same code with any IntegrationFlow at @tunnelhub/sdk/src/classes/flows
   */
  const persistLambdaContextFunc = jest.spyOn(AutomationExecution as any, 'persistLambdaContext');
  persistLambdaContextFunc.mockImplementation(() => {
  });

  const persistLogsFunc = jest.spyOn(AutomationLog.prototype as any, 'save');
  persistLogsFunc.mockImplementation(() => {
  });


  const updateExecutionStatisticsFunc = jest.spyOn(NoDeltaIntegrationFlow.prototype as any, 'updateExecutionStatistics');
  updateExecutionStatisticsFunc.mockImplementation(() => {
  });

  const updateMetadata = jest.spyOn(NoDeltaIntegrationFlow.prototype as any, 'updateMetadata');
  updateMetadata.mockImplementation(() => {
  });
});

These mocks are mandatory for any test because they cover routines that only make sense in the real runtime: saving logs, updating metadata, persisting context between internal lambdas, and updating statistics like execution time and error count. Any other mocks are optional and up to you. When debugging, it's very common to skip additional mocks so the automation makes real calls, letting you evaluate the results locally.
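With these mocks in place, the suite runs like any other jest project; for example, directly through the jest binary:

  • npx jest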

Deploying

When you are confident about your integration, it's time to deploy and run it in the cloud. Since our example uses an FTP server as the target, you will need a working FTP server and to adjust your code to connect to it.

As your code will run in the AWS Lambda environment and we are using TypeScript, it's necessary to create a bundle with all code and dependencies transpiled to JavaScript. In our template, this is already configured using Webpack. Just execute:

  • yarn run build && th deploy-automation --env DEV --message "My first deploy"

This will create the bundle and deploy your artifacts to your TunnelHub account. You can check your deployment details in Automation -> Automation Details -> Deployments.

Executing

After a successful deploy, it's time to execute your integration in the cloud environment. This can be achieved in many ways, including:

  • Creating a schedule

  • Creating a webhook and calling it manually using Postman or another HTTP client (see the sketch after this list)

  • Executing it manually through the user interface
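For the webhook option, once the trigger exists (see the next step) you can call it with any HTTP client. A hypothetical sketch using got, which the template already depends on; the real URL comes from your webhook trigger configuration:

import got from 'got';

// Hypothetical placeholder: use the URL generated for your webhook trigger
const webhookUrl = 'https://your-tunnelhub-webhook-url';

(async () => {
  // our example automation ignores the payload, so an empty body is enough
  const response = await got.post(webhookUrl, { json: {} });
  console.log(`Triggered: HTTP ${response.statusCode}`);
})();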

Before executing, a trigger must be defined. Let's edit our automation, set the trigger type to "On event / Webhook", and save it:

Now let's execute it manually. Go to the Automations menu, find your automation, press the Operations button in the Actions column, and select Execute now:

On the modal, just press Execute without a payload:

After that, your integration will start executing! You can check the progress in the menu Automations -> Monitoring.

After it finishes, you can check the detailed log by pressing the See details button in the last column:

The log covers the records line by line, with the column titles defined in the metadata output. You can check a record's details by clicking its Log ID in the first column:

Configuring alerts

In most cases, someone will need to receive an alert if an automation fails. This can be set up easily through a panel in the automation details, in the Notifications section:

To add someone as a recipient, just click the "+ Add" button and fill in all the required information:

That's it! The platform will send an e-mail message in the selected language warning that the execution has errors. To see the details, the recipient must have a user on the platform with all the necessary permissions.
