Send
Send transforms send data to an external system.
Substation's send transforms differ from other transforms in two ways:
- Data Passthrough: All data processed by a send transform passes through, without modification, to the next configured transform.
- Data Batching: All data is batched in memory before being sent to an external system. Each batch can be further processed by applying auxiliary transforms before it is sent.
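The two behaviors above can be sketched as a minimal pipeline built from transforms documented below; the stdout and file destinations are used only for illustration:

```jsonnet
// The first send transform batches data and prints it to stdout; the same
// (unmodified) data then passes through to the next transform, which
// writes it to a file.
[
  sub.tf.send.stdout(),
  sub.tf.send.file(),
]
```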
send.aws.dynamodb.put
Puts JSON objects as items into an AWS DynamoDB table.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.count | int | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | int | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
aws.arn | string | AWS resource (DynamoDB table) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
Example
```jsonnet
sub.transform.send.aws.dynamodb({
  aws: { arn: 'arn:aws:dynamodb:us-east-2:123456789012:table/my-table' },
})

sub.tf.send.aws.dynamodb({ aws: { arn: 'arn:aws:dynamodb:us-east-2:123456789012:table/my-table' } })
```
send.aws.eventbridge
Puts JSON data into an AWS EventBridge bus.
Settings
Field | Type | Description | Required |
---|---|---|---|
description | string | Description used when events are put into the EventBridge bus. Defaults to "Substation Transform". | No |
batch.count | int | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | int | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
aws.arn | string | AWS resource (EventBridge bus) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
Example
```jsonnet
sub.transform.send.aws.eventbridge()

sub.tf.send.aws.eventbridge()
```
send.aws.data_firehose
Puts data into an AWS Kinesis Data Firehose stream.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
aws.arn | string | AWS resource (Data Firehose stream) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
retry.count | integer | Maximum number of times to retry putting data into the Firehose stream. Defaults to the AWS_MAX_ATTEMPTS environment variable. | No |
Example
```jsonnet
sub.transform.send.aws.data_firehose({
  aws: { arn: 'arn:aws:firehose:us-east-2:123456789012:deliverystream/my-stream' },
})

sub.tf.send.aws.data_firehose({ aws: { arn: 'arn:aws:firehose:us-east-2:123456789012:deliverystream/my-stream' } })
```
send.aws.kinesis_data_stream
Puts data into an AWS Kinesis Data Stream.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
aws.arn | string | AWS resource (Kinesis Data Stream) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
use_batch_key_as_partition_key | bool | Determines if the value retrieved using object.batch_key should be used as the Kinesis record's partition key. Defaults to false (partition key is a random UUID). | No |
enable_record_aggregation | bool | Determines if records should be aggregated using the Kinesis Producer Library. Defaults to false (no aggregation is used). | No |
Example
```jsonnet
sub.transform.send.aws.kinesis_data_stream({
  aws: { arn: 'arn:aws:kinesis:us-east-2:123456789012:stream/my-stream' },
})

sub.tf.send.aws.kinesis_data_stream({ aws: { arn: 'arn:aws:kinesis:us-east-2:123456789012:stream/my-stream' } })
```
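The two batch-key settings can be combined so that records sharing a value are batched together and that value becomes the partition key. A sketch (the `group_id` field name is illustrative):

```jsonnet
sub.tf.send.aws.kinesis_data_stream({
  aws: { arn: 'arn:aws:kinesis:us-east-2:123456789012:stream/my-stream' },
  // Records with the same `group_id` value are batched together...
  object: { batch_key: 'group_id' },
  // ...and that value is used as the partition key instead of a random UUID.
  use_batch_key_as_partition_key: true,
})
```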
send.aws.lambda
Asynchronously invokes and sends data as a payload to an AWS Lambda function.
If you need to synchronously invoke a Lambda function, then use the enrich AWS Lambda transform.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
aws.arn | string | AWS resource (Lambda function) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
Example
```jsonnet
sub.transform.send.aws.lambda({
  aws: { arn: 'arn:aws:lambda:us-east-2:123456789012:function/my-func' },
})

sub.tf.send.aws.lambda({ aws: { arn: 'arn:aws:lambda:us-east-2:123456789012:function/my-func' } })
```
send.aws.s3
Writes data as an object to an AWS S3 bucket.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.count | int | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | int | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
storage_class | string | Storage class (e.g., STANDARD, GLACIER_IR) used by the object in the S3 bucket. Defaults to STANDARD. | No |
aws.arn | string | AWS resource (S3 bucket) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
file_path | object | Determines how the name of the object is constructed. Defaults to year/month/day/uuid.extension. | No |
use_batch_key_as_prefix | bool | Determines if the value retrieved using object.batch_key should replace the prefix value in file_path. Defaults to false. | No |
Example
```jsonnet
sub.transform.send.aws.s3({
  aws: { arn: 'arn:aws:s3:::my-bucket' },
  file_path: { prefix: 'prefix' },
})

sub.tf.send.aws.s3({ aws: { arn: 'arn:aws:s3:::my-bucket' }, file_path: { prefix: 'prefix' } })
```
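A sketch combining `object.batch_key` with `use_batch_key_as_prefix`, so that the batch key value replaces the `prefix` in the object's path (the `tenant` field name is illustrative):

```jsonnet
sub.tf.send.aws.s3({
  aws: { arn: 'arn:aws:s3:::my-bucket' },
  // Objects are grouped by their `tenant` value, which is used as the
  // path prefix in place of `file_path.prefix`.
  object: { batch_key: 'tenant' },
  use_batch_key_as_prefix: true,
})
```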
send.aws.sns
Sends data to an AWS SNS topic.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
aws.arn | string | AWS resource (SNS topic) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
Example
```jsonnet
sub.transform.send.aws.sns({
  aws: { arn: 'arn:aws:sns:us-east-1:123456789012:substation' },
})

sub.tf.send.aws.sns({ aws: { arn: 'arn:aws:sns:us-east-1:123456789012:substation' } })
```
send.aws.sqs
Sends data to an AWS SQS queue.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
aws.arn | string | AWS resource (SQS queue) that is accessed. | Yes |
aws.assume_role_arn | string | AWS role that is used to authenticate. Defaults to an empty string (no role assumption is used). | No |
retry.count | integer | Maximum number of times to retry sending data to the SQS queue. Defaults to the AWS_MAX_ATTEMPTS environment variable. | No |
Example
```jsonnet
sub.transform.send.aws.sqs({
  aws: { arn: 'arn:aws:sqs:us-east-1:123456789012:substation' },
})

sub.tf.send.aws.sqs({ aws: { arn: 'arn:aws:sqs:us-east-1:123456789012:substation' } })
```
send.file
Writes data to a file.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.count | integer | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | integer | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
file_path | object | Determines how the name of the object is constructed. Defaults to year/month/day/uuid.extension. | No |
use_batch_key_as_prefix | bool | Determines if the value retrieved using object.batch_key should replace the prefix value in file_path. Defaults to false. | No |
Example
```jsonnet
sub.transform.send.file()

sub.tf.send.file()
```
send.http.post
POSTs data to an HTTP(S) URL.
Settings
Field | Type | Description | Required |
---|---|---|---|
url | string | The HTTP(S) URL used in the POST request. URLs support loading secrets. | Yes |
batch.count | integer | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | integer | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
headers | []object | An array of objects that contain HTTP headers sent in the request. Header values support loading secrets. Defaults to an empty array (no headers are used). | No |
Example
```jsonnet
sub.transform.send.http.post({ url: 'api.foo.com' })

sub.tf.send.http.post({ url: 'api.foo.com' })
```
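Headers can be attached to the request as an array of objects; this sketch assumes each header object takes `key` and `value` fields:

```jsonnet
sub.tf.send.http.post({
  url: 'https://api.foo.com/v1/events',
  // Header values support loading secrets; a static value is shown here.
  headers: [
    { key: 'Content-Type', value: 'application/json' },
  ],
})
```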
send.stdout
Sends data to stdout.
Settings
Field | Type | Description | Required |
---|---|---|---|
batch.count | integer | Maximum number of items to batch before emitting a new array. Defaults to 1,000 items. | No |
batch.size | integer | Maximum size (in bytes) of items to batch before emitting a new array. Defaults to 1MB. | No |
batch.duration | string | Maximum duration to batch items before emitting a new array. Defaults to 1m. | No |
auxiliary_transforms | []object | Transforms that are applied to batched data in a sub-pipeline before sending data externally. Defaults to an empty list (no additional transformation is applied). | No |
object.batch_key | string | Retrieves a value from an object that is used to organize batched data. No default, all data is batched into the same array. | No |
Example
```jsonnet
sub.transform.send.stdout()

sub.tf.send.stdout()
```
File-Based Send Transforms
Send transforms that deliver file-like objects have specific settings that determine the path, format, and compression for each file.
file_path Settings
Determines how the name of the file is constructed.
Field | Type | Description | Required |
---|---|---|---|
prefix | string | String value that is prepended to the file path. | No |
time_format | string | Inserts a formatted datetime string into the file path. Must be one of: a pattern-based layout (e.g., 2006/01/02), unix (epoch; supports fractions of a second), or unix_milli (epoch milliseconds). | No |
uuid | bool | Inserts a random UUID into the file path. In most configurations, this becomes the file name. | No |
suffix | string | String value that is appended to the file name. | No |
Use Cases
Random, Date-Based Files
```jsonnet
{
  // creates the file pattern `year/month/day/uuid.extension`
  file_path: {
    time_format: '2006/01/02',
    uuid: true,
  },
}
```
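Prefixed and Suffixed Files

A sketch that uses every `file_path` field; the `logs` prefix and `.json.gz` suffix are illustrative:

```jsonnet
{
  // creates the file pattern `logs/year/month/day/uuid.json.gz`
  file_path: {
    prefix: 'logs',
    time_format: '2006/01/02',
    uuid: true,
    suffix: '.json.gz',
  },
}
```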