How Flows Process Data in FlowMate

This article explains how FlowMate processes data within a Flow, from the initial trigger to the final action, highlighting internal mechanics such as snapshots, queues, and processing behavior.

Templates Define the Flow Structure

Each Flow is based on a Template, which defines:

  • The Trigger: How data is received (polling or webhook).

  • One or more Actions: What should happen with the data (API calls).

  • Optional logic steps or code components to enrich the process (a simplified template sketch follows this list).
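For illustration, a Template could be represented roughly like the sketch below. The interface, field names, and the example endpoint are hypothetical, not FlowMate's actual schema.

```typescript
// Hypothetical sketch of a Flow template structure (not FlowMate's actual schema).
type TriggerMode = "polling" | "webhook";

interface FlowTemplate {
  name: string;
  trigger: {
    mode: TriggerMode;
    // Field used as the Snapshot Key when polling, e.g. "updatedAt".
    snapshotKey?: string;
  };
  // One or more actions, executed in order; each typically calls an API endpoint.
  actions: Array<{
    name: string;
    endpoint: string;
  }>;
}

// Example: poll a source system for updated contacts and push them to a target API.
const contactSync: FlowTemplate = {
  name: "contact-sync",
  trigger: { mode: "polling", snapshotKey: "updatedAt" },
  actions: [
    { name: "upsert-contact", endpoint: "https://api.example.com/contacts" },
  ],
};
```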

Trigger: Receiving Data

The Trigger is the entry point of a Flow. It receives incoming data from a source system using one of two modes:

  • Polling: Periodically checks an API for new or updated records, using a Snapshot Key (e.g. updatedAt) to fetch only records changed since the last run.

  • Webhook: Waits for data pushed from an external system in real time (see the sketch after this list).
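As an illustration of the webhook mode, the sketch below accepts pushed data over HTTP and treats each payload as one incoming message. It assumes a plain Node.js server; the port, route, and payload handling are examples, not FlowMate's actual implementation.

```typescript
import { createServer } from "node:http";

// Minimal webhook-style trigger: an external system pushes data to this
// endpoint, and each received payload enters the Flow as a single message.
// Port and route are illustrative only.
const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/webhook") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const payload = JSON.parse(body);
      // In a real Flow, the payload would be handed to the first Action here.
      console.log("Received webhook payload:", payload);
      res.writeHead(202).end();
    });
  } else {
    res.writeHead(404).end();
  }
});

server.listen(3000);
```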

How Polling Works:

  • The system fetches all new records since the last snapshot.

  • Once all records are retrieved, the new snapshot is saved.

  • The trigger emits each item as an individual message, not as a batch (see the polling sketch below).
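Put together, a single polling cycle might look roughly like the sketch below. The helper names, the record shape, and the use of updatedAt as the Snapshot Key are assumptions for illustration only.

```typescript
// Illustrative polling cycle; helper names and the record shape are hypothetical.
interface PolledItem {
  id: string;
  updatedAt: string; // the field used as the Snapshot Key
}

async function pollOnce(
  fetchSince: (snapshot: string) => Promise<PolledItem[]>,
  saveSnapshot: (snapshot: string) => Promise<void>,
  emit: (item: PolledItem) => void,
  lastSnapshot: string,
): Promise<void> {
  // 1. Fetch all records that are new or updated since the last snapshot.
  const records = await fetchSince(lastSnapshot);
  if (records.length === 0) return;

  // 2. Only after all records are retrieved is the new snapshot saved,
  //    here taken from the newest updatedAt value.
  const newest = records.map((r) => r.updatedAt).sort().at(-1)!;
  await saveSnapshot(newest);

  // 3. Each record is emitted as its own message, not as one batch.
  for (const record of records) {
    emit(record);
  }
}
```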

Actions: Processing the Data

Each Action step receives one object at a time and processes it, usually by calling a specific API endpoint.

  • As soon as an action finishes processing an object, it emits the result to the next step.

  • This continues through each step of the Flow until the final action is completed.

  • Options like API parameters or FlowMate settings (e.g. snapshot or sync fields) can be configured per step (a simplified action step is sketched below).
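A single action step could be sketched as follows. The endpoint, request shape, and emit callback are assumptions; the point is that each object is processed and handed on individually.

```typescript
// Illustrative Action step (names are hypothetical): it receives one object,
// calls a specific API endpoint, and immediately emits the result onward.
async function runAction(
  item: unknown,
  endpoint: string,
  emit: (result: unknown) => void,
): Promise<void> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(item),
  });

  // As soon as this object is processed, the result moves to the next step;
  // the action does not wait for other objects.
  emit(await response.json());
}
```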

Queue: Processing Order and Behavior

FlowMate uses a Queue to manage the processing of each object:

  • Each connector has its own queue.

  • Each queue operates on a first-in, first-out (FIFO) basis.

  • Each object from the trigger enters the queue and is passed through all flow steps sequentially.

  • If steps 2 and 3 use the same connector, this ensures that step 2 is completed for all objects before step 3 begins (see the queue sketch below).
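A per-connector FIFO queue could be modeled roughly as below; the class and method names are hypothetical, and the sketch ignores error handling.

```typescript
// Illustrative per-connector FIFO queue (a simplification of the behavior
// described above; class and method names are hypothetical).
type Task = () => Promise<void>;

class ConnectorQueue {
  private tasks: Task[] = [];
  private running = false;

  // Enqueue work for this connector; tasks run strictly in arrival order.
  enqueue(task: Task): void {
    this.tasks.push(task);
    if (!this.running) void this.drain();
  }

  private async drain(): Promise<void> {
    this.running = true;
    while (this.tasks.length > 0) {
      const next = this.tasks.shift()!;
      await next(); // First in, first out: one task at a time.
    }
    this.running = false;
  }
}

// One queue per connector, so steps sharing a connector are serialized.
const queues = new Map<string, ConnectorQueue>();

function queueFor(connectorId: string): ConnectorQueue {
  let queue = queues.get(connectorId);
  if (!queue) {
    queue = new ConnectorQueue();
    queues.set(connectorId, queue);
  }
  return queue;
}
```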

Timing, Retries & Performance

  • Trigger execution time depends on the data volume and snapshot comparison.

  • Each action waits for a response from the target API. Typical response times range from 100 to 250 ms, but may be higher depending on the API or server.

  • Retries are managed per object: if a request fails, it is retried after a delay (e.g. 1700ms), which affects throughput (see the retry sketch below).

  • Estimated processing speed is around 25 to 100 messages per second under real-world conditions.
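The per-object retry behavior could look roughly like this sketch. The attempt limit and helper names are assumptions; only the 1700ms delay is taken from the example above.

```typescript
// Illustrative per-object retry with a fixed delay (the attempt limit and
// helper names are assumptions; FlowMate's actual retry policy may differ).
const RETRY_DELAY_MS = 1700;
const MAX_ATTEMPTS = 3;

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function callWithRetry<T>(request: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await request();
    } catch (error) {
      lastError = error;
      if (attempt < MAX_ATTEMPTS) {
        // Each failed object waits before its next attempt, which lowers
        // the overall throughput of the Flow.
        await sleep(RETRY_DELAY_MS);
      }
    }
  }
  throw lastError;
}
```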
