CLI Reference

kafkaconnect

Command-line interface for kafkaconnect.

kafkaconnect is a Connect API client that helps to configure and manage Kafka connectors.

kafkaconnect [OPTIONS] COMMAND [ARGS]...
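
For example, to point the client at a remote Connect cluster and list its active connectors (the hostname here is hypothetical):

kafkaconnect --connect http://connect.example.com:8083 list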

Options

-b, --broker <broker_url>

Kafka Broker URL. Alternatively set via $KAFKA_BROKER_URL env var.

Default:

localhost:9092

-c, --connect <connect_url>

Kafka Connect URL. Alternatively set via $KAFKA_CONNECT_URL env var.

Default:

http://localhost:8083

-u, --username <sasl_plain_username>

Username for SASL authentication.

-p, --password <sasl_plain_password>

Password for SASL authentication.

--version

Show the version and exit.

Environment variables

KAFKA_BROKER_URL

Provide a default for -b

KAFKA_CONNECT_URL

Provide a default for -c

KAFKA_USERNAME

Provide a default for -u

KAFKA_PASSWORD

Provide a default for -p
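
The same defaults can be supplied through the environment, for example (values are hypothetical):

export KAFKA_BROKER_URL=broker.example.com:9092
export KAFKA_CONNECT_URL=http://connect.example.com:8083
kafkaconnect list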

config

Get the connector configuration.

kafkaconnect config [OPTIONS] NAME

Arguments

NAME

Required argument

create

Create a new connector.

Each subcommand creates a different connector.

kafkaconnect create [OPTIONS] COMMAND [ARGS]...

influxdb-sink

Create an instance of the InfluxDB Sink connector.

A list of topics can be specified using the TOPICLIST argument. If none is given, topics are discovered from Kafka. Use the --topic-regex and --excluded_topic_regex options to help select the topics that you want to write to InfluxDB. To check for new topics and update the connector configuration, use the --auto-update and --check-interval options.

kafkaconnect create influxdb-sink [OPTIONS] [TOPICLIST]...
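
A sketch of a typical invocation, assuming hypothetical topic names and an existing mydb database:

kafkaconnect create influxdb-sink \
    --name influxdb-sink-example \
    --influxdb_url http://influxdb.example.com:8086 \
    --database mydb \
    --auto-update \
    --check-interval 30000 \
    example.topicA example.topicB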

Options

-n, --name <name>

Name of the connector. The connector name must be unique across the cluster. Alternatively set via the $KAFKA_CONNECT_NAME env var.

Default:

influxdb-sink

-i, --influxdb_url <connect_influx_url>

InfluxDB connection URL. Alternatively set via the $KAFKA_CONNECT_INFLUXDB_URL env var.

Default:

http://localhost:8086

-d, --database <connect_influx_db>

InfluxDB database name. The database must exist at InfluxDB. Alternatively set via the $KAFKA_CONNECT_DATABASE env var.

Default:

mydb

-t, --tasks-max <tasks_max>

Number of Kafka Connect tasks. Alternatively set via the $KAFKA_CONNECT_TASKS_MAX env var.

Default:

1

-u, --username <connect_influx_username>

InfluxDB username. Alternatively set via the $KAFKA_CONNECT_INFLUXDB_USERNAME env var. Use '-' for unauthenticated users.

Default:

-

-p, --password <connect_influx_password>

InfluxDB password. Alternatively set via the $KAFKA_CONNECT_INFLUXDB_PASSWORD env var.

Default:

(empty)

-r, --topic-regex <topic_regex>

Regex for selecting topics. Alternatively set via the $KAFKA_CONNECT_TOPIC_REGEX env var.

Default:

.*

--dry-run

Show the InfluxDB Sink Connector configuration but do not create the connector.

--auto-update

Check for new topics and update the connector. See also the --check-interval option.

-v, --validate

Validate the connector configuration before creating.

-c, --check-interval <check_interval>

The interval, in milliseconds, to check for new topics and update the connector. Alternatively set via the $KAFKA_CONNECT_CHECK_INTERVAL env var.

Default:

15000

-e, --excluded_topic_regex <excluded_topic_regex>

Regex for excluding topics. Alternatively set via the $KAFKA_CONNECT_EXCLUDED_TOPIC_REGEX env var.

Default:

(empty)

--error-policy <connect_influx_error_policy>

Specifies the action to be taken if an error occurs while inserting the data. There are three available options: NOOP (the error is swallowed), THROW (the error is allowed to propagate), and RETRY (the Kafka message is redelivered up to a maximum number of times specified by the --max-retries option, at an interval specified by the --retry-interval option). Alternatively set via the $KAFKA_CONNECT_ERROR_POLICY env var.

Default:

THROW

Options:

NOOP | THROW | RETRY

--max-retries <connect_influx_max_retries>

The maximum number of times a message is retried. Only valid when the --error-policy option is set to RETRY. Alternatively set via the $KAFKA_CONNECT_MAX_RETRIES env var.

Default:

10

--retry-interval <connect_influx_retry_interval>

The interval, in milliseconds, between retries. Only valid when the --error-policy option is set to RETRY. Alternatively set via the $KAFKA_CONNECT_RETRY_INTERVAL env var.

Default:

60000
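
For example, a sketch that redelivers failed messages up to 5 times, 30 seconds apart (the topic name is hypothetical):

kafkaconnect create influxdb-sink \
    --error-policy RETRY \
    --max-retries 5 \
    --retry-interval 30000 \
    example.topicA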

--progress-enabled <connect_progress_enabled>

Enables output of how many records have been processed. Alternatively set via the $KAFKA_CONNECT_PROGRESS_ENABLED env var.

Default:

false

--timestamp <timestamp>

Timestamp to use as the InfluxDB time.

Default:

sys_time()

--tags <tags>

Fields in the Avro payload that are treated as InfluxDB tags.

Default:

(empty)

--remove-prefix <remove_prefix>

Prefix to remove from the topic name when using it as the InfluxDB measurement name.

Default:

(empty)

Arguments

TOPICLIST

Optional argument(s)

Environment variables

KAFKA_CONNECT_NAME

Provide a default for -n

KAFKA_CONNECT_INFLUXDB_URL

Provide a default for -i

KAFKA_CONNECT_DATABASE

Provide a default for -d

KAFKA_CONNECT_TASKS_MAX

Provide a default for -t

KAFKA_CONNECT_INFLUXDB_USERNAME

Provide a default for -u

KAFKA_CONNECT_INFLUXDB_PASSWORD

Provide a default for -p

KAFKA_CONNECT_TOPIC_REGEX

Provide a default for -r

KAFKA_CONNECT_CHECK_INTERVAL

Provide a default for -c

KAFKA_CONNECT_EXCLUDED_TOPIC_REGEX

Provide a default for -e

KAFKA_CONNECT_ERROR_POLICY

Provide a default for --error-policy

KAFKA_CONNECT_MAX_RETRIES

Provide a default for --max-retries

KAFKA_CONNECT_RETRY_INTERVAL

Provide a default for --retry-interval

KAFKA_CONNECT_PROGRESS_ENABLED

Provide a default for --progress-enabled

KAFKA_CONNECT_INFLUXDB_TIMESTAMP

Provide a default for --timestamp

KAFKA_CONNECT_INFLUXDB_TAGS

Provide a default for --tags

KAFKA_CONNECT_INFLUXDB_REMOVE_PREFIX

Provide a default for --remove-prefix

jdbc-sink

Create an instance of the JDBC Sink connector.

Use the --show-status option to output status.

kafkaconnect create jdbc-sink [OPTIONS] CONFIGFILE
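
CONFIGFILE holds the connector configuration. A minimal sketch, assuming the Confluent JDBC Sink connector plugin is installed and that the file is a flat JSON object of connector properties (file name, topic, and connection URL are hypothetical):

{
  "name": "jdbc-sink-example",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "topics": "example.topicA",
  "connection.url": "jdbc:postgresql://db.example.com:5432/mydb",
  "auto.create": "true",
  "insert.mode": "insert"
}

kafkaconnect create jdbc-sink jdbc-sink.json --show-status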

Options

-n, --name <name>

Name of the JDBC Sink connector. If provided, it overrides the name property value in the configuration file.

--dry-run

Validate the connector configuration without creating the connector.

--show-status

Show connector status in the output. See also the --show-status-interval option.

--show-status-interval <show_status_interval>

The time interval, in milliseconds, at which to output the connector status.

Default:

15000

Arguments

CONFIGFILE

Required argument

mirrormaker2

Create instances of the MirrorMaker 2 connectors.

Create the heartbeat, checkpoint, and mirror-source connectors. Use the --show-status option to output status.

kafkaconnect create mirrormaker2 [OPTIONS]
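
A sketch of a typical invocation, assuming the three configuration files already exist (file names are hypothetical):

kafkaconnect create mirrormaker2 \
    --name replication \
    --heartbeat heartbeat.json \
    --checkpoint checkpoint.json \
    --mirror-source mirror-source.json \
    --dry-run

Remove --dry-run to actually create the heartbeat, checkpoint, and mirror-source connectors.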

Options

-n, --name <name>

Name of the MirrorMaker 2 instance. If provided, it is used as a prefix to name the heartbeat, checkpoint, and mirror-source connectors, and it overrides the name property value in the configuration files.

-h, --heartbeat <heartbeat_configfile>

Required Heartbeat connector configuration file.

-c, --checkpoint <checkpoint_configfile>

Required Checkpoint connector configuration file.

-m, --mirror-source <mirror_source_configfile>

Required MirrorSource connector configuration file.

--dry-run

Validate the connector configuration without creating the connectors.

--show-status

Show connector status in the output. See also the --show-status-interval option.

--show-status-interval <show_status_interval>

The time interval, in milliseconds, at which to output the connector status.

Default:

15000

s3-sink

Create an instance of the S3 Sink connector.

Use the --show-status option to output status.

kafkaconnect create s3-sink [OPTIONS] CONFIGFILE
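
A sketch of a typical invocation, with credentials taken from the environment (values and file name are hypothetical):

export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
kafkaconnect create s3-sink s3-sink.json --show-status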

Options

-n, --name <name>

Name of the S3 Sink connector. If provided, it overrides the name property value in the configuration file.

--aws-access-key-id <aws_access_key_id>

The AWS access key ID used to authenticate personal AWS credentials.

--aws-secret-access-key <aws_secret_access_key>

The secret access key used to authenticate personal AWS credentials.

--dry-run

Validate the connector configuration without creating the connector.

--show-status

Show connector status in the output. See also the --show-status-interval option.

--show-status-interval <show_status_interval>

The time interval, in milliseconds, at which to output the connector status.

Default:

15000

Arguments

CONFIGFILE

Required argument

Environment variables

AWS_ACCESS_KEY_ID

Provide a default for --aws-access-key-id

AWS_SECRET_ACCESS_KEY

Provide a default for --aws-secret-access-key

delete

Delete a connector.

Halt tasks and remove the connector configuration.

kafkaconnect delete [OPTIONS] NAME

Arguments

NAME

Required argument

help

Show help for any command.

kafkaconnect help [OPTIONS] [TOPIC]

Arguments

TOPIC

Optional argument

info

Get information about the connector.

kafkaconnect info [OPTIONS] NAME

Arguments

NAME

Required argument

list

Get a list of active connectors.

kafkaconnect list [OPTIONS]

pause

Pause the connector and its tasks.

kafkaconnect pause [OPTIONS] NAME

Arguments

NAME

Required argument

plugins

Get a list of connector plugins available in the Connect cluster.

kafkaconnect plugins [OPTIONS]

restart

Restart a connector and its tasks.

kafkaconnect restart [OPTIONS] NAME

Arguments

NAME

Required argument

resume

Resume a paused connector.

kafkaconnect resume [OPTIONS] NAME

Arguments

NAME

Required argument

status

Get the connector status.

kafkaconnect status [OPTIONS] NAME

Arguments

NAME

Required argument

tasks

Get a list of tasks currently running for the connector.

kafkaconnect tasks [OPTIONS] NAME

Arguments

NAME

Required argument

topics

Get the list of topic names used by the connector.

kafkaconnect topics [OPTIONS] NAME

Arguments

NAME

Required argument

upload

Upload the connector configuration from a file.

kafkaconnect upload [OPTIONS] CONFIGFILE
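
For example, to validate a configuration file and then upload it (file and connector names are hypothetical):

kafkaconnect upload --name example-connector example-connector.json --dry-run
kafkaconnect upload --name example-connector example-connector.json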

Options

-n, --name <name>

Required Name of the connector.

--dry-run

Validate the connector configuration without uploading.

Arguments

CONFIGFILE

Required argument