Example broker configuration with a dedicated control-plane listener:

```
listeners=INTERNAL://192.1.1.8:9092,EXTERNAL://10.1.1.5:9093,CONTROLLER://192.1.1.8:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL,CONTROLLER:SSL
control.plane.listener.name=CONTROLLER
```
Kafka Properties

**advertised.listeners**

Comma-separated list of URIs to publish to ZooKeeper for clients to use, if different than the `listeners` config property. In IaaS environments (e.g. Docker, Kubernetes, a cloud), this may need to differ from the interface to which the broker binds. Unlike `listeners`, it is not valid to advertise the `0.0.0.0` meta-address. Default: the value of `listeners`.

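As a sketch of how `listeners` and `advertised.listeners` interact (host names below are hypothetical): a broker behind Docker or NAT typically binds to a local interface while advertising a client-reachable address:

```
# Bind to all local interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# Tell clients (via ZooKeeper) to connect to the externally reachable address
advertised.listeners=PLAINTEXT://kafka.example.com:9092
```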
**authorizer.class.name**

Fully-qualified class name of the Authorizer for request authorization. Default: (empty). Supports authorizers that implement the deprecated `kafka.security.auth.Authorizer` trait.

**auto.commit.interval.ms**

How often (in milliseconds) consumer offsets should be auto-committed when `enable.auto.commit` is enabled. Default: 5000.

**auto.leader.rebalance.enable**

Enables auto leader balancing on a Kafka broker. A background thread checks and triggers leader balance if required. Default: true.

**auto.offset.reset**

Reset policy: what to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted). Possible values: `earliest` (automatically reset the offset to the earliest offset), `latest` (automatically reset the offset to the latest offset), `none` (throw an exception to the consumer if no previous offset is found for the consumer's group). Default: latest.

**bootstrap.servers**

A comma-separated list of `host:port` pairs used to establish the initial connection to a Kafka cluster. Default: (empty).

**broker.id**

The broker id of a Kafka broker, used for identification purposes. If unset, a unique broker id is generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from `reserved.broker.max.id + 1`. Default: -1. Use `KafkaConfig.brokerId` to access the current value.

**broker.id.generation.enable**

Enables automatic broker id generation of a Kafka broker. When enabled (true), the value configured for `reserved.broker.max.id` should be reviewed. Default: true.

**check.crcs**

Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption of the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance. Default: true.

**connection.failed.authentication.delay.ms**

Connection close delay on failed authentication: the time (in milliseconds) by which connection close is delayed on authentication failure. Must be configured to be less than `connections.max.idle.ms` to prevent connection timeout. Default: 100. Has to be at least 0.

**control.plane.listener.name**

Name of the listener used for communication between the controller and brokers. Default: (empty). Must be defined in `listener.security.protocol.map` and must be different from `inter.broker.listener.name`. The broker uses the name to locate the endpoint in `listeners`, to listen for connections from the controller. For example, given the broker config `listeners=INTERNAL://192.1.1.8:9092,EXTERNAL://10.1.1.5:9093,CONTROLLER://192.1.1.8:9094` with `listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL,CONTROLLER:SSL` and `control.plane.listener.name=CONTROLLER`, on startup the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On the controller side, when it discovers a broker's published endpoints through ZooKeeper, it uses the name to find the endpoint, which it then uses to establish a connection to the broker. For example, if the broker's published endpoints on ZooKeeper include `CONTROLLER://broker1:9094`, and the controller's config maps `CONTROLLER` to `SSL` and sets `control.plane.listener.name=CONTROLLER`, then the controller will use "broker1:9094" with security protocol "SSL" to connect to the broker.

**enable.auto.commit**

When enabled (true), consumer offsets are committed automatically in the background, every `auto.commit.interval.ms`. When disabled, offsets have to be committed manually: synchronously using `KafkaConsumer.commitSync` or asynchronously using `KafkaConsumer.commitAsync`. On restart, restore the position of a consumer using `KafkaConsumer.seek`. Default: true.

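For at-least-once processing, auto-commit is typically switched off so that offsets are committed only after records have been processed. A minimal consumer configuration sketch (the group name is illustrative):

```
enable.auto.commit=false
auto.offset.reset=earliest
group.id=my-consumer-group
```

The application then calls `KafkaConsumer.commitSync` (or `commitAsync`) after handling each batch returned by `poll`.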
**fetch.max.bytes**

The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). Note that the consumer performs multiple fetches in parallel. Default: 52428800 (50 MB).

**fetch.max.wait.ms**

The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by `fetch.min.bytes`. Default: 500.

**fetch.min.bytes**

The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request waits for that much data to accumulate before answering. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 causes the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency. Default: 1.

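The fetch properties trade latency for throughput. A sketch of a throughput-oriented consumer configuration (the values are illustrative, not recommendations):

```
# Wait for up to 64 KB of data, or 500 ms, whichever comes first
fetch.min.bytes=65536
fetch.max.wait.ms=500
# Cap on the total data returned per fetch across partitions
fetch.max.bytes=52428800
```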
**heartbeat.interval.ms**

The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Default: 3000. Must be set lower than `session.timeout.ms`, typically no higher than one third of it.

**inter.broker.listener.name**

Name of the listener for inter-broker communication (resolved per `listener.security.protocol.map`). Default: the value of `security.inter.broker.protocol`. Must be among `advertised.listeners`. It is an error to set this together with `security.inter.broker.protocol`.

**inter.broker.protocol.version**

Version of the inter-broker protocol. Default: the latest version. Typically bumped after all brokers have been upgraded to a new version.

**interceptor.classes**

Comma-separated list of `ConsumerInterceptor` class names. Default: (empty).

**leader.imbalance.check.interval.seconds**

How often the active KafkaController schedules the auto-leader-rebalance task (aka AutoLeaderRebalance, AutoPreferredReplicaLeaderElection, or auto leader balancing). Default: 300.

**leader.imbalance.per.broker.percentage**

Allowed ratio of leader imbalance per broker. The controller triggers a leader balance if it goes above this value per broker. The value is specified as a percentage. Default: 10.

**listeners**

Comma-separated list of listener names and URIs (host/IP and port) that a Kafka broker listens on. Default: `PLAINTEXT://:9092`.

**listener.security.protocol.map**

Map of listener names and security protocols (key and value are separated by a colon and map entries are separated by commas). Each listener name should only appear once in the map. Default: a map with the `PLAINTEXT`, `SSL`, `SASL_PLAINTEXT` and `SASL_SSL` protocols mapped to themselves. This map must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names `INTERNAL` and `EXTERNAL` and this property as `INTERNAL:SSL,EXTERNAL:SSL`. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the `INTERNAL` listener, a config with the name `listener.name.internal.ssl.keystore.location` would be set.

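Putting the pieces together, a sketch of a broker configuration that separates internal and external SSL traffic and overrides the keystore for the internal listener (addresses and paths are hypothetical):

```
listeners=INTERNAL://192.168.0.10:9092,EXTERNAL://10.0.0.10:9093
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
# Per-listener override: note the lowercased listener name in the prefix
listener.name.internal.ssl.keystore.location=/etc/kafka/internal.keystore.jks
```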
**log.cleaner.enable**

Enables the log cleaner process to run on a Kafka broker (required for topics with the `compact` cleanup policy, including the internal offsets topic). Default: true.

**log.cleanup.policy**

The default cleanup policy for segments beyond the retention window. Default: delete. Included in `copyKafkaConfigToLog` (to set `cleanup.policy` of topics).

**log.dirs**

The directories in which the log data is kept. Default: the value of `log.dir`. Use `KafkaConfig.logDirs` to access the current value.

**log.flush.interval.messages**

The number of messages written to a log partition that are kept in memory before flushing to disk (by forcing an fsync). E.g. if this were set to 1, we would fsync after every message; if it were 5, we would fsync after every five messages. Default: `Long.MaxValue` (effectively disabled). It is recommended not to set this and instead rely on replication for durability, allowing the operating system's background flush capabilities, as it is more efficient. Must be at least 1. Topic-level configuration: `flush.messages`.

**log.flush.interval.ms**

How long (in millis) a message written to a log partition is kept in memory before flushing to disk (by forcing an fsync). If not set, the value in `log.flush.scheduler.interval.ms` is used. Default: (not set). It is recommended not to set this and instead rely on replication for durability, allowing the operating system's background flush capabilities, as it is more efficient. Topic-level configuration: `flush.ms`.

**log.index.size.max.bytes**

Maximum size (in bytes) of the offset index file (that maps offsets to file positions). It is preallocated and shrunk only after log rolls. You generally should not need to change this setting. Default: 10485760 (10 MB).

**log.retention.bytes**

Maximum size of a partition (which consists of log segments) to grow to before discarding old segments to free up space. Default: -1 (no size limit).

**log.retention.check.interval.ms**

How often (in millis) the log cleaner checks whether any log is eligible for deletion. Default: 300000 (5 minutes). Must be at least 1.

**log.retention.ms**

How long (in millis) to keep a log file before deleting it. Takes precedence over `log.retention.minutes` and `log.retention.hours`. Unless set, the value of `log.retention.minutes` is used. Default: (not set).

**log.retention.minutes**

How long (in minutes) to keep a log file before deleting it. Secondary to `log.retention.ms`. Unless set, the value of `log.retention.hours` is used. Default: (not set).

**log.retention.hours**

How long (in hours) to keep a log file before deleting it. Considered last, unless `log.retention.ms` or `log.retention.minutes` were set. Default: 168 (7 days).

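The precedence of the three retention settings can be illustrated with a sketch (values are arbitrary): when all three are set, only the milliseconds value takes effect.

```
log.retention.ms=86400000      # wins: 1 day
log.retention.minutes=2880     # ignored, log.retention.ms is set
log.retention.hours=168        # ignored, log.retention.ms is set
```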
**log.roll.ms**

Time (in millis) after which Kafka forces the log to roll even if the segment file isn't full, to ensure that retention can delete or compact old data. Default: the value of `log.roll.hours`.

**log.segment.bytes**

The maximum size of a segment file of logs. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention. Default: 1073741824 (1 GB). Use `KafkaConfig.logSegmentBytes` to access the current value.

**max.partition.fetch.bytes**

The maximum amount of data per partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). Default: 1048576 (1 MB).

**max.poll.records**

(KafkaConsumer) The maximum number of records returned from a single `KafkaConsumer.poll` call. Default: 500.

**metric.reporters**

The list of fully-qualified class names of the metrics reporters. Default: `JmxReporter`.

**metrics.sample.window.ms**

Time window (in milliseconds) a metrics sample is computed over. Default: 30000.

**min.insync.replicas**

The minimum number of replicas in the ISR needed to commit a produce request with `acks=all` (or `-1`). Default: 1. Must be at least 1. When a Kafka producer sets acks to `all` (or `-1`), this property specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer raises an exception (either `NotEnoughReplicas` or `NotEnoughReplicasAfterAppend`). Used together, `acks` and `min.insync.replicas` allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set `min.insync.replicas` to 2, and produce with acks of `all`; this ensures that the producer raises an exception if a majority of replicas do not receive a write.

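The acks=all commit rule can be modeled in a few lines; this is an illustrative toy model, not Kafka's implementation:

```python
# Toy model of the broker-side check for a produce request with acks=all:
# the write succeeds only if the in-sync replica set is large enough.

def can_commit(isr_size: int, min_insync_replicas: int) -> bool:
    """Return True if a produce request with acks=all can be committed."""
    return isr_size >= min_insync_replicas

# Replication factor 3 with min.insync.replicas=2:
# losing one replica is tolerated, losing two is not.
assert can_commit(isr_size=3, min_insync_replicas=2)      # all replicas in sync
assert can_commit(isr_size=2, min_insync_replicas=2)      # one replica lagging
assert not can_commit(isr_size=1, min_insync_replicas=2)  # NotEnoughReplicas
```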
**num.io.threads**

The number of threads that KafkaServer uses for processing requests, which may include disk I/O. Default: 8. Must be at least 1.

**max.connections.per.ip.overrides**

A comma-separated list of per-IP or per-hostname overrides to the default maximum number of connections, e.g. `hostName:100,127.0.0.1:200`. Default: (empty).

**num.network.threads**

The number of threads that SocketServer uses per endpoint (for receiving requests from the network and sending responses to the network). Default: 3. Must be at least 1.

**num.partitions**

The number of log partitions for auto-created topics. Default: 1. Consider increasing the default value (1), since it is better to over-partition a topic; this leads to better data balancing and aids consumer parallelism.

**num.replica.alter.log.dirs.threads**

The number of threads that can move replicas between log directories, which may include disk I/O. Default: the number of log directories.

**num.replica.fetchers**

The number of fetcher threads that ReplicaFetcherManager uses for replicating messages from a source broker. The higher the value, the higher the degree of I/O parallelism in a follower broker. Default: 1.

**port**

The port a Kafka broker listens on. Default: 9092. Deprecated in favour of `listeners`.

**principal.builder.class**

Fully-qualified name of a `KafkaPrincipalBuilder` implementation used to build the `KafkaPrincipal` object for authorization. Default: (empty). Supports the deprecated `PrincipalBuilder` interface, which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use: for SSL authentication, the principal is derived from the distinguished name of the client certificate; for SASL authentication, the principal is derived using the rules defined by `sasl.kerberos.principal.to.local.rules` (for GSSAPI) or the SASL authentication ID (for other mechanisms); for PLAINTEXT, the principal is `ANONYMOUS`.

**replica.fetch.backoff.ms**

How long (in millis) a fetcher thread sleeps when there are no active partitions (while sending a fetch request) or after a fetch partition error (handlePartitionsWithErrors). Default: 1000. Must be at least 0.

**replica.fetch.max.bytes**

The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum: if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). Default: 1048576 (1 MB).

**replica.fetch.response.max.bytes**

The maximum number of bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via `message.max.bytes` (broker config) or `max.message.bytes` (topic config). Default: 10485760 (10 MB).

**rebalance.timeout.ms**

The maximum allowed time for each worker to join the group once a rebalance has begun. Default: 60000.

**receive.buffer.bytes**

The hint about the size of the TCP network receive buffer (SO_RCVBUF) to use (for a socket) when reading data. If the value is -1, the OS default is used.

**replica.lag.time.max.ms**

How long to wait for a follower to consume up to the leader's log end offset (LEO) before the leader removes the follower from the ISR of a partition.

**replica.socket.timeout.ms**

The socket timeout for network requests of replica fetcher threads. Should always be at least `replica.fetch.wait.max.ms` to prevent unnecessary socket timeouts. Default: 30000.

**request.timeout.ms**

Controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client resends the request if necessary, or fails the request if retries are exhausted.

**reserved.broker.max.id**

The maximum number that can be used for `broker.id`. Has to be at least 0. Default: 1000. Use `KafkaConfig.maxReservedBrokerId` to access the current value.

**retry.backoff.ms**

Time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. Default: 100.

**security.inter.broker.protocol**

Security protocol used for inter-broker communication. Default: PLAINTEXT. Possible values: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error when defined together with `inter.broker.listener.name` (as it then should only be in `listener.security.protocol.map`). It is validated when the inter-broker communication uses a SASL protocol (SASL_PLAINTEXT or SASL_SSL).

**send.buffer.bytes**

The hint about the size of the TCP network send buffer (SO_SNDBUF) to use (for a socket) when sending data. If the value is -1, the OS default is used.

**session.timeout.ms**

The timeout used to detect worker failures. Default: 10000.

**sasl.kerberos.principal.to.local.rules**

A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order, and the first rule that matches a principal name is used to map it to a short name; any later rules in the list are ignored. Default: DEFAULT. This configuration is ignored when a custom `KafkaPrincipalBuilder` is used.

**transaction.max.timeout.ms**

The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, the broker returns an error in `InitProducerIdRequest`. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction. Default: 900000 (15 minutes).

**unclean.leader.election.enable**

Indicates whether to allow replicas that are not in the ISR to be elected leader as a last resort, even though doing so may result in data loss. Default: false. Cluster-wide property: `unclean.leader.election.enable`. Topic-level property: `unclean.leader.election.enable`.

**zookeeper.connect**

Comma-separated list of ZooKeeper hosts (as `host:port` pairs) that a Kafka broker connects to. Default: (empty). ZooKeeper URIs can have an optional chroot path suffix at the end, e.g. `host1:port1,host2:port2/chroot/path`. If the optional chroot path suffix is used, all paths are relative to this path. It is recommended to include all the hosts in a ZooKeeper ensemble (cluster).

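A sketch of a `zookeeper.connect` value using a chroot path (host names are hypothetical); with the chroot suffix, all of the broker's znodes are created under `/kafka` instead of the ZooKeeper root:

```
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
```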
**zookeeper.connection.timeout.ms**

The maximum time the client waits to establish a connection to ZooKeeper. Default: the value of `zookeeper.session.timeout.ms`.

**zookeeper.max.in.flight.requests**

The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. Has to be at least 1. Default: 10.
