Broker configurations, with the default value in each Apache Kafka release from 0.7 through 3.2 (0.7, 0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2). Each entry lists the configuration name, its description, and its default; where the default changed between releases, both values are noted.
advertised.listeners: Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this ... Default: null.
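For instance, a broker behind a NAT gateway or cloud load balancer can bind to an internal address while advertising a reachable one. A minimal sketch (the hostname is hypothetical):

    # server.properties: bind locally, advertise the externally reachable address
    listeners=PLAINTEXT://0.0.0.0:9092
    advertised.listeners=PLAINTEXT://broker1.example.com:9092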
alter.config.policy.class.name: The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.A... Default: null.
alter.log.dirs.replication.quota.window.num: The number of samples to retain in memory for alter log dirs replication quotas. Default: 11.
alter.log.dirs.replication.quota.window.size.seconds: The time span of each sample for alter log dirs replication quotas. Default: 1.
authorizer.class.name: The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the ... No default listed.
auto.create.topics.enable: Enable auto creation of topic on the server. Default: true.
auto.leader.rebalance.enable: Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable... Default: false in the earliest listed release, true thereafter.
background.threads: The number of threads to use for various background processing tasks. Default: 4 in the earliest listed release, 10 thereafter.
broker.heartbeat.interval.ms: The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode. Default: 2000 (2 seconds).
broker.id: The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between zookeeper generated broke... Default: null in the three earliest listed releases, -1 thereafter.
broker.id.generation.enable: Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be review... Default: true.
broker.rack: Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`. Default: null.
broker.session.timeout.ms: The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode. Default: 9000 (9 seconds).
client.quota.callback.class: The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits app... Default: null.
compression.type: Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'... Default: producer.
connection.failed.authentication.delay.ms: Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on a... Default: 100.
connections.max.idle.ms: Close idle connections after the number of milliseconds specified by this config. Default: 600000 (10 minutes).
connections.max.reauth.ms: When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the co... Default: 0.
control.plane.listener.name: Name of the listener used for communication between the controller and brokers. The broker will use the control.plane.listener.name to locate ... Default: null.
controlled.shutdown.enable: Enable controlled shutdown of the server. Default: false in the two earliest listed releases, true thereafter.
controlled.shutdown.max.retries: Controlled shutdown can fail for multiple reasons. This determines the number of retries when such a failure happens. Default: 3.
controlled.shutdown.retry.backoff.ms: Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica... Default: 5000 (5 seconds).
controller.listener.names: A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. When commu... Default: null.
controller.quorum.append.linger.ms: The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk. Default: 25.
controller.quorum.election.backoff.max.ms: Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps pr... Default: 1000 (1 second).
controller.quorum.election.timeout.ms: Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election. Default: 1000 (1 second).
controller.quorum.fetch.timeout.ms: Maximum time without a successful fetch from the current leader before becoming a candidate and triggering an election for voters; ... Default: 2000 (2 seconds).
controller.quorum.request.timeout.ms: The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r... Default: 2000 (2 seconds).
controller.quorum.retry.backoff.ms: The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending ... Default: 20.
controller.quorum.voters: Map of id/endpoint information for the set of voters in a comma-separated list of `{id}@{host}:{port}` entries. For example: `1@lo... No default listed.
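The KRaft-mode settings above (process.roles, node.id, controller.quorum.voters, controller.listener.names, and the broker heartbeat/session timeouts) are used together. A minimal single-node sketch, with an illustrative node id and listener names:

    # server.properties: combined broker and controller in KRaft mode
    process.roles=broker,controller
    node.id=1
    controller.quorum.voters=1@localhost:9093
    listeners=PLAINTEXT://:9092,CONTROLLER://:9093
    controller.listener.names=CONTROLLER
    listener.security.protocol.map=PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT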
controller.quota.window.num: The number of samples to retain in memory for controller mutation quotas. Default: 11.
controller.quota.window.size.seconds: The time span of each sample for controller mutation quotas. Default: 1.
controller.socket.timeout.ms: The socket timeout for controller-to-broker channels. Default: 30000 (30 seconds).
create.topic.policy.class.name: The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.Cr... Default: null.
default.replication.factor: The default replication factor for automatically created topics. Default: 1.
delegation.token.expiry.check.interval.ms: Scan interval to remove expired delegation tokens. Default: 3600000 (1 hour).
delegation.token.expiry.time.ms: The token validity time in milliseconds before the token needs to be renewed. Default: 86400000 (1 day).
delegation.token.master.key: DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config. Default: null.
delegation.token.max.lifetime.ms: The token has a maximum lifetime beyond which it cannot be renewed anymore. Default: 604800000 (7 days).
delegation.token.secret.key: Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not se... Default: null.
delete.records.purgatory.purge.interval.requests: The purge interval (in number of requests) of the delete records request purgatory. Default: 1.
delete.topic.enable: Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off. Default: false in earlier listed releases, true thereafter.
fetch.max.bytes: The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th... Default: 57671680 (55 mebibytes).
fetch.purgatory.purge.interval.requests: The purge interval (in number of requests) of the fetch request purgatory. Default: 10000 in the two earliest listed releases, 1000 thereafter.
group.initial.rebalance.delay.ms: The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A... Default: 3000 (3 seconds).
group.max.session.timeout.ms: The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in betw... Default: 30000 in the earliest listed release, then 300000, then 1800000 (30 minutes).
group.max.size: The maximum number of consumers that a single consumer group can accommodate. Default: 2147483647.
group.min.session.timeout.ms: The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of ... Default: 6000 (6 seconds).
initial.broker.registration.timeout.ms: When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the... Default: 60000 (1 minute).
inter.broker.listener.name: Name of the listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.p... Default: null.
inter.broker.protocol.version: Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new... Default: the latest protocol version of the running release (0.9.0.X in 0.9.0 through 3.2-IV0 in 3.2).
kafka.metrics.polling.interval.secs: The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. Default: 10.
kafka.metrics.reporters: A list of classes to use as Yammer metrics custom reporters. The reporters should implement the kafka.metrics.KafkaMetricsReporter tra... No default listed.
leader.imbalance.check.interval.seconds: The frequency with which the partition rebalance check is triggered by the controller. Default: 300.
leader.imbalance.per.broker.percentage: The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per br... Default: 10.
listener.security.protocol.map: Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than o... Default: SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT in the two earliest listed releases; PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL thereafter.
listeners: Comma-separated list of URIs the broker will listen on, together with their listener names. Specify the hostname as 0.0.0.0 to bind to all interfaces... Default: null in earlier listed releases, PLAINTEXT://:9092 in the three most recent listed releases.
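Named listeners tie listeners, listener.security.protocol.map, and inter.broker.listener.name together; for example, a broker exposing an internal plaintext listener and an external TLS listener might look like this sketch (listener names and ports are illustrative):

    # server.properties: two named listeners mapped to security protocols
    listeners=INTERNAL://:9092,EXTERNAL://:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
    inter.broker.listener.name=INTERNAL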
log.cleaner.backoff.ms: The amount of time to sleep when there are no logs to clean. Default: 15000 (15 seconds).
log.cleaner.dedupe.buffer.size: The total memory used for log deduplication across all cleaner threads. Default: 500*1024*1024 in the two earliest listed releases, 134217728 thereafter.
log.cleaner.delete.retention.ms: The amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in whi... Default: 86400000 (1 day).
log.cleaner.enable: Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including... Default: false in the two earliest listed releases, true thereafter.
log.cleaner.io.buffer.load.factor: Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be ... Default: 0.9.
log.cleaner.io.buffer.size: The total memory used for log cleaner I/O buffers across all cleaner threads. Default: 524288 (written as 512*1024 in the two earliest listed releases).
log.cleaner.io.max.bytes.per.second: The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average. Default: Double.MaxValue (1.7976931348623157E308); shown as None in the earliest listed release.
log.cleaner.max.compaction.lag.ms: The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. Default: 9223372036854775807.
log.cleaner.min.cleanable.ratio: The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the lo... Default: 0.5.
log.cleaner.min.compaction.lag.ms: The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. Default: 0.
log.cleaner.threads: The number of background threads to use for log cleaning. Default: 1.
log.cleanup.policy: The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are:... Default: delete (shown as [delete] in one release).
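The cleaner settings above typically work as a group when compacted topics are in use; a sketch of a compaction-oriented broker default (the specific values are illustrative, and cleanup.policy is more commonly set per topic):

    # server.properties: compact segments once half the log is dirty
    log.cleaner.enable=true
    log.cleanup.policy=compact
    log.cleaner.min.cleanable.ratio=0.5
    log.cleaner.min.compaction.lag.ms=60000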
log.dir: The directory in which the log data is kept (supplemental for the log.dirs property). Default: none in the earliest listed release, /tmp/kafka-logs thereafter.
log.dirs: The directories in which the log data is kept. If not set, the value in log.dir is used. Default: /tmp/kafka-logs in the three earliest listed releases, null thereafter.
log.flush.interval.messages: The number of messages accumulated on a log partition before messages are flushed to disk. Default: 10000 in the earliest listed release, then None/Long.MaxValue (9223372036854775807).
log.flush.interval.ms: The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.sc... Default: 3000 in the earliest listed release, then None/Long.MaxValue, then null.
log.flush.offset.checkpoint.interval.ms: The frequency with which we update the persistent record of the last flush which acts as the log recovery point. Default: 60000 (1 minute).
log.flush.scheduler.interval.ms: The frequency in ms that the log flusher checks whether any log needs to be flushed to disk. Default: 3000 in the two earliest listed releases, then Long.MaxValue (9223372036854775807).
log.flush.start.offset.checkpoint.interval.ms: The frequency with which we update the persistent record of the log start offset. Default: 60000 (1 minute).
log.index.interval.bytes: The interval with which we add an entry to the offset index. Default: 4096 (4 kibibytes).
log.index.size.max.bytes: The maximum size in bytes of the offset index. Default: 10485760 (10 mebibytes; written as 10 * 1024 * 1024 in the three earliest listed releases).
log.message.downconversion.enable: This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, ... Default: true.
log.message.format.version: Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Som... Default: the current release's format version (0.10.0-IV1 through 2.8-IV1), pinned at 3.0-IV1 from 3.0 on.
log.message.timestamp.difference.max.ms: The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. ... Default: 9223372036854775807.
log.message.timestamp.type: Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or ... Default: CreateTime.
log.preallocate: Should the broker pre-allocate the file when creating a new segment? If you are using Kafka on Windows, you probably need to set it to true. Default: false.
log.retention.bytes: The maximum size of the log before deleting it. Default: -1.
log.retention.check.interval.ms: The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion. Default: 300000 (5 minutes).
log.retention.hours: The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property. Default: 168 (shown as 24 * 7 in one early release).
log.retention.minutes: The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the ... Default: null.
log.retention.ms: The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes... Default: null.
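The three retention settings form a precedence chain (ms over minutes over hours), so a broker-wide three-day window can be expressed at any one level; a sketch:

    # server.properties: log.retention.ms takes precedence when set
    log.retention.ms=259200000
    # equivalent alternatives if log.retention.ms is unset:
    # log.retention.minutes=4320
    # log.retention.hours=72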
log.roll.hours: The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property. Default: 168 (shown as 24 * 7 in the three earliest listed releases).
log.roll.jitter.hours: The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property. Default: 0.
log.roll.jitter.ms: The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used. Default: null.
log.roll.ms: The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used. Default: null.
log.segment.bytes: The maximum size of a single log file. Default: 1073741824 (1 gibibyte; written as 1024 * 1024 * 1024 in the three earliest listed releases).
log.segment.delete.delay.ms: The amount of time to wait before deleting a file from the filesystem. Default: 60000 (1 minute).
max.connection.creation.rate: The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing... Default: 2147483647.
max.connections: The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits confi... Default: 2147483647.
max.connections.per.ip: The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.... Default: 2147483647 (shown as Int.MaxValue in the earliest listed release).
max.connections.per.ip.overrides: A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName... Default: null.
max.incremental.fetch.session.cache.slots: The maximum number of incremental fetch sessions that we will maintain. Default: 1000.
message.max.bytes: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are c... Default: 1000000 in the earliest listed releases, then 1000012, then 1048588.
metadata.log.dir: This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is plac... Default: null.
metadata.log.max.record.bytes.between.snapshots: This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new s... Default: 20971520.
metadata.log.segment.bytes: The maximum size of a single metadata log file. Default: 1073741824 (1 gibibyte).
metadata.log.segment.ms: The maximum time before a new metadata log file is rolled out (in milliseconds). Default: 604800000 (7 days).
metadata.max.retention.bytes: The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapsh... Default: -1.
metadata.max.retention.ms: The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist befo... Default: 604800000 (7 days).
metric.reporters: A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p... Default: [] (empty list).
metrics.num.samples: The number of samples maintained to compute metrics. Default: 2.
metrics.recording.level: The highest recording level for metrics. Default: INFO.
metrics.sample.window.ms: The window of time a metrics sample is computed over. Default: 30000 (30 seconds).
min.insync.replicas: When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a ... Default: 1.
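A common durability recipe pairs a replication factor of 3 with min.insync.replicas=2, so that one broker can be down while acks=all writes still succeed; a sketch:

    # server.properties: tolerate one replica failure without losing acked writes
    default.replication.factor=3
    min.insync.replicas=2
    # producers must also send with acks=all for this guarantee to apply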
node.id: The node ID associated with the roles this process is playing when `process.roles` is non-empty. This is required configuration wh... Default: -1.
num.io.threads: The number of threads that the server uses for processing requests, which may include disk I/O. Default: 8.
num.network.threads: The number of threads that the server uses for receiving requests from the network and sending responses to the network. Default: 3.
num.partitions: The default number of log partitions per topic. Default: 1.
num.recovery.threads.per.data.dir: The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. Default: 1.
num.replica.alter.log.dirs.threads: The number of threads that can move replicas between log directories, which may include disk I/O. Default: null.
num.replica.fetchers: Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O pa... Default: 1.
offset.metadata.max.bytes: The maximum size for a metadata entry associated with an offset commit. Default: 1024 in the earliest listed release, 4096 (4 kibibytes) thereafter.
offsets.commit.required.acks: The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. Default: -1.
offsets.commit.timeout.ms: Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is simi... Default: 5000 (5 seconds).
offsets.load.buffer.size: Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too la... Default: 5242880.
offsets.retention.check.interval.ms: Frequency at which to check for stale offsets. Default: 600000 (10 minutes).
offsets.retention.minutes: After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before gett... Default: 1440 in earlier listed releases, 10080 thereafter.
offsets.topic.compression.codec: Compression codec for the offsets topic; compression may be used to achieve "atomic" commits. Default: 0.
offsets.topic.num.partitions: The number of partitions for the offset commit topic (should not change after deployment). Default: 50.
offsets.topic.replication.factor: The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the clus... Default: 3.
offsets.topic.segment.bytes: The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. Default: 104857600 (100 mebibytes).
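Because internal topic creation fails until enough brokers are available, single-broker development setups usually lower the internal replication factors; a sketch:

    # server.properties: internal topics on a one-broker development cluster
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1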
password.encoder.cipher.algorithm: The Cipher algorithm used for encoding dynamically configured passwords. Default: AES/CBC/PKCS5Padding.
password.encoder.iterations: The iteration count used for encoding dynamically configured passwords. Default: 4096.
password.encoder.key.length: The key length used for encoding dynamically configured passwords. Default: 128.
password.encoder.keyfactory.algorithm: The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available an... Default: null.
password.encoder.old.secret: The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If s... Default: null.
password.encoder.secret: The secret used for encoding dynamically configured passwords for this broker. Default: null.
principal.builder.class: The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal... Default: org.apache.kafka.common.security.auth.DefaultPrincipalBuilder in early listed releases, null in intermediate releases, and org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder in the three most recent listed releases.
process.roles: The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applic... No default listed.
producer.purgatory.purge.interval.requests: The purge interval (in number of requests) of the producer request purgatory. Default: 10000 in the two earliest listed releases, 1000 thereafter.
queued.max.request.bytes: The number of queued bytes allowed before no more requests are read. Default: -1.
queued.max.requests: The number of queued requests allowed for the data plane before blocking the network threads. Default: 500.
quota.window.num: The number of samples to retain in memory for client quotas. Default: 11.
quota.window.size.seconds: The time span of each sample for client quotas. Default: 1.
replica.fetch.backoff.ms: The amount of time to sleep when a fetch partition error occurs. Default: 1000 (1 second).
replica.fetch.max.bytes: The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first record batch... Default: 1048576 (1 mebibyte; written as 1024 * 1024 in the three earliest listed releases).
replica.fetch.min.bytes: Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config). Default: 1.
replica.fetch.response.max.bytes: Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first n... Default: 10485760 (10 mebibytes).
replica.fetch.wait.max.ms: The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.... Default: 500.
replica.high.watermark.checkpoint.interval.ms: The frequency with which the high watermark is saved out to disk. Default: 5000 (5 seconds).
replica.lag.time.max.ms: If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leade... Default: 10000 in earlier listed releases, 30000 (30 seconds) thereafter.
replica.selector.class: The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By ... Default: null.
replica.socket.receive.buffer.bytes: The socket receive buffer for network requests. Default: 65536 (64 kibibytes; written as 64 * 1024 in the three earliest listed releases).
replica.socket.timeout.ms: The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. Default: 30000 (30 seconds; written as 30 * 1000 in the three earliest listed releases).
replication.quota.window.num: The number of samples to retain in memory for replication quotas. Default: 11.
replication.quota.window.size.seconds: The time span of each sample for replication quotas. Default: 1.
request.timeout.ms: The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r... Default: 30000 (30 seconds).
reserved.broker.max.id: Max number that can be used for a broker.id. Default: 1000.
sasl.client.callback.handler.class: The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. Default: null.
sasl.enabled.mechanisms: The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is avail... Default: GSSAPI (shown as [GSSAPI] in the two earliest listed releases).
sasl.jaas.config: JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ... Default: null.
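For brokers, JAAS settings are usually supplied per listener and mechanism via a prefixed sasl.jaas.config; a SASL/PLAIN sketch (the listener choice and credentials are placeholders):

    # server.properties: SASL/PLAIN on a SASL_PLAINTEXT listener
    listeners=SASL_PLAINTEXT://:9092
    sasl.enabled.mechanisms=PLAIN
    listener.name.sasl_plaintext.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret";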
sasl.kerberos.kinit.cmd: Kerberos kinit command path. Default: /usr/bin/kinit.
sasl.kerberos.min.time.before.relogin: Login thread sleep time between refresh attempts. Default: 60000.
sasl.kerberos.principal.to.local.rules: A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in... Default: DEFAULT.
sasl.kerberos.service.name: The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. Default: null.
sasl.kerberos.ticket.renew.jitter: Percentage of random jitter added to the renewal time. Default: 0.05.
sasl.kerberos.ticket.renew.window.factor: Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ... Default: 0.8.
sasl.login.callback.handler.class: The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro... Default: null.
sasl.login.class: The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ... Default: null.
sasl.login.connect.timeout.ms: The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB... Default: null.
sasl.login.read.timeout.ms: The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. Default: null.
sasl.login.refresh.buffer.seconds: The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot... Default: 300.
sasl.login.refresh.min.period.seconds: The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between... Default: 60.
sasl.login.refresh.window.factor: Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which... Default: 0.8.
sasl.login.refresh.window.jitter: The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ... Default: 0.05.
sasl.login.retry.backoff.max.ms: The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us... Default: 10000 (10 seconds).
sasl.login.retry.backoff.ms: The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us... Default: 100.
sasl.mechanism.controller.protocol: SASL mechanism used for communication with controllers. Default: GSSAPI.
sasl.mechanism.inter.broker.protocol: SASL mechanism used for inter-broker communication. Default: GSSAPI.
sasl.oauthbearer.clock.skew.seconds: The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. Default: 30.
sasl.oauthbearer.expected.audience: The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ... Default: null.
sasl.oauthbearer.expected.issuer: The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ... Default: null.
sasl.oauthbearer.jwks.endpoint.refresh.ms: The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the... Default: 3600000 (1 hour).
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms: The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern... Default: 10000 (10 seconds).
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms: The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut... Default: 100.
sasl.oauthbearer.jwks.endpoint.url: The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi... Default: null.
sasl.oauthbearer.scope.claim.name: The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop... Default: scope.
sasl.oauthbearer.sub.claim.name: The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj... Default: sub.
sasl.oauthbearer.token.endpoint.url: The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests... Default: null.
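On the validation side, the OAUTHBEARER settings above typically point the broker at the identity provider's JWKS endpoint; a sketch with a hypothetical provider URL and audience:

    # server.properties: validate OAUTHBEARER tokens against a JWKS endpoint
    sasl.enabled.mechanisms=OAUTHBEARER
    sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/oauth2/keys
    sasl.oauthbearer.expected.audience=kafka
    sasl.oauthbearer.expected.issuer=https://idp.example.com/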
sasl.server.callback.handler.class: The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server... Default: null.
security.inter.broker.protocol: Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error ... Default: PLAINTEXT.
security.providers: A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement ... Default: null.
socket.connection.setup.timeout.max.ms: The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc... Default: 127000 (127 seconds) in the earliest listed release, 30000 (30 seconds) thereafter.
socket.connection.setup.timeout.ms: The amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim... Default: 10000 (10 seconds).
socket.listen.backlog.size: The maximum number of pending connections on the socket. In Linux, you may also need to configure `somaxconn` and `tcp_max_syn_bac... Default: 50.
socket.receive.buffer.bytes: The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. Default: 102400 (100 kibibytes; written as 100 * 1024 in the three earliest listed releases).
socket.request.max.bytes: The maximum number of bytes in a socket request. Default: 104857600 (100 mebibytes; written as 100 * 1024 * 1024 in the three earliest listed releases).
socket.send.buffer.bytes: The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. Default: 102400 (100 kibibytes; written as 100 * 1024 in the three earliest listed releases).
ssl.cipher.suites: A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia... Default: null.
ssl.client.auth: Configures the Kafka broker to request client authentication. The following settings are common:... Default: none.
ssl.enabled.protocols: The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' ... Default: TLSv1.2,TLSv1.1,TLSv1 in earlier listed releases, TLSv1.2 in later releases (one release lists TLSv1.2,TLSv1.3; the effective default depends on the Java version).
ssl.endpoint.identification.algorithm: The endpoint identification algorithm to validate server hostname using server certificate. Default: null in earlier listed releases, https thereafter.
ssl.engine.factory.class: The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache... Default: null.
ssl.key.password: The password of the private key in the key store file or the PEM key specified in `ssl.keystore.key`. This is required for clients... Default: null.
ssl.keymanager.algorithm: The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t... Default: SunX509.
ssl.keystore.certificate.chain: Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list ... Default: null.
ssl.keystore.key: Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. ... Default: null.
ssl.keystore.location: The location of the key store file. This is optional for client and can be used for two-way authentication for client. Default: null.
ssl.keystore.password: The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K... Default: null.
ssl.keystore.type: The file format of the key store file. This is optional for client. Default: JKS.
ssl.principal.mapping.rules: A list of rules for mapping from the distinguished name from the client certificate to short name. The rules are evaluated in order an... Default: DEFAULT.
ssl.protocol: The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise... Default: TLS in earlier listed releases, TLSv1.2 in later releases (one release lists TLSv1.3; the effective default depends on the Java version).
ssl.provider: The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. Default: null.
ssl.secure.random.implementation: The SecureRandom PRNG implementation to use for SSL cryptography operations. Default: null.
ssl.trustmanager.algorithm: The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f... Default: PKIX.
ssl.truststore.certificates: Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.... Default: null.
ssl.truststore.location: The location of the trust store file. Default: null.
ssl.truststore.password: The password for the trust store file. If a password is not set, the trust store file configured will still be used, but integrity che... Default: null.
ssl.truststore.type: The file format of the trust store file. Default: JKS.
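Putting the ssl.* settings together, an SSL listener requiring mutual TLS might look like this sketch (paths and passwords are placeholders):

    # server.properties: SSL listener with client authentication required
    listeners=SSL://:9093
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=keystore-secret
    ssl.key.password=key-secret
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=truststore-secret
    ssl.client.auth=required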
transaction.abort.timed.out.transaction.cleanup.interval.ms: The interval at which to roll back transactions that have timed out. Default: 60000 in earlier listed releases, 10000 (10 seconds) thereafter.
transaction.max.timeout.ms: The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an... Default: 900000 (15 minutes).
transaction.remove.expired.transaction.cleanup.interval.ms: The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. Default: 3600000 (1 hour).
transaction.state.log.load.buffer.size: Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, ov... Default: 5242880.
transaction.state.log.min.isr: Overridden min.insync.replicas config for the transaction topic. Default: 2.
transaction.state.log.num.partitions: The number of partitions for the transaction topic (should not change after deployment). Default: 50.
transaction.state.log.replication.factor: The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the ... Default: 3.
transaction.state.log.segment.bytes: The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. Default: 104857600 (100 mebibytes).
transactional.id.expiration.ms: The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transac... Default: 604800000 (7 days).
unclean.leader.election.enable: Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result ... Default: true in earlier listed releases, false thereafter.
zookeeper.clientCnxnSocket: Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value... Default: null.
zookeeper.connect: Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve... Default: null.
zookeeper.connection.timeout.ms: The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms i... Default: 6000 in the three earliest listed releases, null thereafter.
zookeeper.max.in.flight.requests: The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking. Default: 10.
zookeeper.session.timeout.ms: ZooKeeper session timeout. Default: 6000 in earlier listed releases, 18000 (18 seconds) thereafter.
zookeeper.set.acl: Set client to use secure ACLs. Default: false.
zookeeper.ssl.cipher.suites: Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookee... Default: null.
zookeeper.ssl.client.enable: Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure syst... Default: false.
zookeeper.ssl.crl.enable: Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the z... Default: false.
zookeeper.ssl.enabled.protocols: Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabl... Default: null.
zookeeper.ssl.endpoint.identification.algorithm: Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" mean... Default: HTTPS.
zookeeper.ssl.keystore.location: Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th... Default: null.
zookeeper.ssl.keystore.password: Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via th... Default: null.
zookeeper.ssl.keystore.type: Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zo... Default: null.
zookeeper.ssl.ocsp.enable: Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set vi... Default: false.
zookeeper.ssl.protocol: Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zooke... Default: TLSv1.2.
zookeeper.ssl.truststore.location: Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.lo... Default: null.
zookeeper.ssl.truststore.password: Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.pa... Default: null.
zookeeper.ssl.truststore.type: Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type s... Default: null.
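A broker connecting to a TLS-enabled ZooKeeper ensemble combines zookeeper.ssl.client.enable with the Netty client socket and a truststore; a sketch (the path and password are placeholders):

    # server.properties: TLS connectivity to ZooKeeper
    zookeeper.ssl.client.enable=true
    zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    zookeeper.ssl.truststore.location=/var/private/ssl/zk.truststore.jks
    zookeeper.ssl.truststore.password=truststore-secret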
zookeeper.sync.time.msHow far a ZK follower can be behind a ZK leader20002000200020002000200020002000200020002000200020002000200020002000 (2 seconds)2000 (2 seconds)2000 (2 seconds)2000 (2 seconds)2000 (2 seconds)
advertised.host.nameDEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. Hostname to publish to..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
advertised.portDEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. The port to publish to..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
host.nameDEPRECATED: only used when listeners is not set. Use listeners instead. hostname of broker. If this is set, it will only bind to t..nullnullnull
portDEPRECATED: only used when listeners is not set. Use listeners instead. the port to listen and accept connections on6667666790929092909290929092909290929092909290929092909290929092909290929092
quota.consumer.defaultDEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any consumer distinguished by clientId..922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
quota.producer.defaultDEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any producer distinguished by client..922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
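These two defaults are deprecated in favor of dynamic quotas. As a rough illustration of the replacement, the sketch below sets a default consumer byte-rate quota through the Admin API (available from Kafka 2.6 on); the class name, bootstrap address, and the 1 MiB/s rate are illustrative only:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.common.quota.ClientQuotaAlteration;
    import org.apache.kafka.common.quota.ClientQuotaEntity;

    public class DefaultClientQuotaSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
            try (Admin admin = Admin.create(props)) {
                // A null entity value targets the default quota for all client ids.
                ClientQuotaEntity allClients = new ClientQuotaEntity(
                        Collections.singletonMap(ClientQuotaEntity.CLIENT_ID, null));
                ClientQuotaAlteration alteration = new ClientQuotaAlteration(
                        allClients,
                        Collections.singletonList(
                                new ClientQuotaAlteration.Op("consumer_byte_rate", 1048576.0)));
                admin.alterClientQuotas(Collections.singletonList(alteration)).all().get();
            }
        }
    }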
controller.message.queue.sizeThe buffer size for controller-to-broker channels1010Int.MaxValue
log.delete.delay.msThe period of time we hold log files around after they are removed from the in-memory segment index. This period of time allows an..6000060000
log.retention.{ms,minutes,hours}The amount of time to keep a log segment before it is deleted, i.e. the default data retention window for all topics. Note that if..7 days
log.roll.jitter.{ms,hours}The maximum jitter to subtract from logRollTimeMillis.0
log.roll.{ms,hours}This setting will force Kafka to roll a new log segment even if the log.segment.bytes size has not been reached. This setting can ..24 * 7 hours
offsets.topic.retention.minutesOffsets that are older than this age will be marked for deletion. The actual purge will occur when the log cleaner compacts the of..1440
replica.lag.max.messagesIf a replica falls more than this many messages behind the leader, the leader will remove the follower from ISR and treat it as de..400040004000
log.retention.{minutes,hours}The amount of time to keep a log segment before it is deleted, i.e. the default data retention window for all topics. Note that if..7 days
log.flush.interval.ms.per.topicThe per-topic override for log.flush.interval.messages, e.g., topic1:3000,topic2:6000
log.retention.bytes.per.topicA per-topic override for log.retention.bytes.
log.retention.hours.per.topicA per-topic override for log.retention.hours.
log.roll.hours.per.topicThis setting allows overriding log.roll.hours on a per-topic basis.
log.segment.bytes.per.topicThis setting allows overriding log.segment.bytes on a per-topic basis.
brokeridEach broker is uniquely identified by an id. This id serves as the broker's "name", and allows the broker to be moved to a differen..none
enable.zookeeperEnable ZooKeeper registration in the servertrue
log.cleanup.interval.minsControls how often the log cleaner checks logs eligible for deletion. A log file is eligible for deletion if it hasn't been modifi..10
log.default.flush.interval.msControls the maximum time that a message in any topic is kept in memory before flushed to disk. The value only makes sense if it's..log.default.flush.scheduler.interval.ms
log.default.flush.scheduler.interval.msControls the interval at which logs are checked to see if they need to be flushed to disk. A background thread will run at a frequ..3000
log.file.sizeControls the maximum size of a single log file.1*1024*1024*1024
log.flush.intervalControls the number of messages accumulated in each topic (partition) before the data is flushed to disk and made available to con..500
log.retention.sizeThe maximum size of the log before deleting it. This controls how large a log is allowed to grow-1
max.socket.request.bytesThe maximum number of bytes in a socket request104857600
monitoring.period.secsThe interval at which to measure performance statistics600
num.threadsControls the number of worker threads in the broker to serve requests.Runtime.getRuntime().availableProcessors
socket.receive.bufferThe SO_RCVBUF buffer of the socket server sockets102400
socket.send.bufferThe SO_SNDBUF buffer of the socket server sockets102400
topic.flush.intervals.msPer-topic overrides for log.default.flush.interval.ms. Controls the maximum time that a message in selected topics is kept in memo..none
topic.log.retention.hoursTopic-specific retention time that overrides log.retention.hours, e.g., topic1:10,topic2:20none
topic.partition.count.mapOverride parameter to control the number of partitions for selected topics. E.g., topic1:10,topic2:20none
zk.connectFor using the zookeeper based automatic broker discovery, use this config to pass in the zookeeper connection url to the zookeeper..localhost:2182/kafka
zk.connectiontimeout.msSpecifies the max time that the client waits to establish a connection to zookeeper.6000
zk.sessiontimeout.msThe zookeeper session timeout.6000
zk.synctime.msMax time for how far a ZK follower can be behind a ZK leader2000
Consumer configs: property, description, then the default in each release from left to right: 0.7, 0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2 (rows showing fewer values cover only the releases in which the property existed)
allow.auto.create.topicsAllow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automat..truetruetruetruetruetruetruetruetrue
auto.commit.interval.msThe frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.60 * 100060 * 100060 * 100060 * 100060 * 1000500050005000500050005000500050005000500050005000 (5 seconds)5000 (5 seconds)5000 (5 seconds)5000 (5 seconds)5000 (5 seconds)5000 (5 seconds)
auto.offset.resetWhat to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because t..largestlargestlargestlargestlargestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatestlatest
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser..nullnullnullnullnullnull
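To show how bootstrap.servers combines with the neighboring consumer defaults, here is a minimal configuration sketch in Java; the broker addresses, group name, and class name are placeholders, not values from this table:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MinimalConsumerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder addresses
            props.put("group.id", "example-group");                      // hypothetical group
            props.put("enable.auto.commit", "true");                     // the default listed above
            props.put("auto.offset.reset", "latest");                    // the default since 0.9.0
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // The consumer is now configured; subscribe() and poll() as usual.
            }
        }
    }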
check.crcsAutomatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred...truetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetrue
client.dns.lookupControls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe..defaultdefaultdefaultdefaultdefaultuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ips
client.idAn id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond..group id valuegroup id valuegroup id valuegroup id valuegroup id value
client.rackA rack identifier for this client. This can be any string value which indicates where this client is physically located. It corres..
connections.max.idle.msClose idle connections after the number of milliseconds specified by this config.540000540000540000540000540000540000540000540000540000540000540000540000540000540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)
default.api.timeout.msSpecifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operatio..60000600006000060000600006000060000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)
enable.auto.commitIf true the consumer's offset will be periodically committed in the background.truetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetrue
exclude.internal.topicsWhether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitl..truetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetruetrue
fetch.max.bytesThe maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if th..524288005242880052428800524288005242880052428800524288005242880052428800524288005242880052428800 (50 mebibytes)52428800 (50 mebibytes)52428800 (50 mebibytes)52428800 (50 mebibytes)52428800 (50 mebibytes)52428800 (50 mebibytes)
fetch.max.wait.msThe maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately ..500500500500500500500500500500500500500500500500500500500
fetch.min.bytesThe minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait f..1111111111111111111111
group.idA unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
group.instance.idA unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer ..nullnullnullnullnullnullnullnullnull
heartbeat.interval.msThe expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used ..30003000300030003000300030003000300030003000300030003000 (3 seconds)3000 (3 seconds)3000 (3 seconds)3000 (3 seconds)3000 (3 seconds)3000 (3 seconds)
interceptor.classesA list of classes to use as interceptors. Implementing the org.apache.kafka.clients.consumer.ConsumerInterceptor interface allows ..nullnullnullnullnull
isolation.levelControls how to read messages written transactionally. If set to read_committed, consumer.poll() will only return transactional me..read_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommittedread_uncommitted
key.deserializerDeserializer class for key that implements the org.apache.kafka.common.serialization.Deserializer interface.nullnullnullnullnullnullnullnullnullnullnull
max.partition.fetch.bytesThe maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first reco..10485761048576104857610485761048576104857610485761048576104857610485761048576104857610485761048576 (1 mebibyte)1048576 (1 mebibyte)1048576 (1 mebibyte)1048576 (1 mebibyte)1048576 (1 mebibyte)1048576 (1 mebibyte)
max.poll.interval.msThe maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of ..300000300000300000300000300000300000300000300000300000300000300000300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)
max.poll.recordsThe maximum number of records returned in a single call to poll(). Note, that max.poll.records does not impact the underlying fetc..2147483647500500500500500500500500500500500500500500500500500
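Since max.poll.records and max.poll.interval.ms interact (all records handed back by one poll() must be processed before the interval expires), a small poll-loop sketch may help; the topic, group name, and class name are made up:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollLoopSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "poll-loop-example");       // hypothetical group
            props.put("max.poll.records", "100");             // cap the work handed back per poll()
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("events")); // placeholder topic
                while (true) {
                    // Records from one poll() must be processed within
                    // max.poll.interval.ms, or the consumer is removed from the group.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.printf("%s-%d@%d%n", r.topic(), r.partition(), r.offset());
                    }
                }
            }
        }
    }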
metadata.max.age.msThe period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha..300000300000300000300000300000300000300000300000300000300000300000300000300000300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)
metric.reportersA list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p..[][][]
metrics.num.samplesThe number of samples maintained to compute metrics.2222222222222222222
metrics.recording.levelThe highest recording level for metrics.INFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFO
metrics.sample.window.msThe window of time a metrics sample is computed over.3000030000300003000030000300003000030000300003000030000300003000030000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
partition.assignment.strategyA list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use..range (0.9.0 through 0.10.1)class org.apache.kafka.clients.consumer.RangeAssignor (0.10.2 through 2.8)class org.apache.kafka.clients.consumer.RangeAssignor,class org.apache.kafka.clients.consumer.CooperativeStickyAssignor (3.0 through 3.2)
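For example, switching entirely to the cooperative assignor that joined the defaults in 3.0 takes a single override; a fragment continuing a Properties object like the one in the consumer sketch above:

    // assumes a Properties object named "props", as in the consumer sketch above
    props.put("partition.assignment.strategy",
            "org.apache.kafka.clients.consumer.CooperativeStickyAssignor"); // incremental rebalancing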
receive.buffer.bytesThe size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.3276865536655366553665536655366553665536655366553665536655366553665536 (64 kibibytes)65536 (64 kibibytes)65536 (64 kibibytes)65536 (64 kibibytes)65536 (64 kibibytes)65536 (64 kibibytes)
reconnect.backoff.max.msThe maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide..1000100010001000100010001000100010001000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)
reconnect.backoff.msThe base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t..50505050505050505050505050505050505050
request.timeout.msThe configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r..400004000030500030500030500030500030500030000300003000030000300003000030000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
retry.backoff.msThe amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending ..100100100100100100100100100100100100100100100100100100100
sasl.client.callback.handler.classThe fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.nullnullnullnullnullnullnullnullnullnullnullnull
sasl.jaas.configJAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
sasl.kerberos.kinit.cmdKerberos kinit command path./usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit
sasl.kerberos.min.time.before.reloginLogin thread sleep time between refresh attempts.60000600006000060000600006000060000600006000060000600006000060000600006000060000600006000060000
sasl.kerberos.service.nameThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
sasl.kerberos.ticket.renew.jitterPercentage of random jitter added to the renewal time.0.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.05
sasl.kerberos.ticket.renew.window.factorLogin thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ..0.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.8
sasl.login.callback.handler.classThe fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro..nullnullnullnullnullnullnullnullnullnullnullnull
sasl.login.classThe fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ..nullnullnullnullnullnullnullnullnullnullnullnull
sasl.login.connect.timeout.msThe (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB..nullnull
sasl.login.read.timeout.msThe (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.nullnull
sasl.login.refresh.buffer.secondsThe amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot..300300300300300300300300300300300300
sasl.login.refresh.min.period.secondsThe desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between..606060606060606060606060
sasl.login.refresh.window.factorLogin refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which..0.80.80.80.80.80.80.80.80.80.80.80.8
sasl.login.refresh.window.jitterThe maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ..0.050.050.050.050.050.050.050.050.050.050.050.05
sasl.login.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us..10000 (10 seconds)10000 (10 seconds)
sasl.login.retry.backoff.msThe (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us..100100
sasl.mechanismSASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de..GSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPI
sasl.oauthbearer.clock.skew.secondsThe (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.3030
sasl.oauthbearer.expected.audienceThe (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ..nullnull
sasl.oauthbearer.expected.issuerThe (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ..nullnull
sasl.oauthbearer.jwks.endpoint.refresh.msThe (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the..3600000 (1 hour)3600000 (1 hour)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern..10000 (10 seconds)10000 (10 seconds)
sasl.oauthbearer.jwks.endpoint.retry.backoff.msThe (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut..100100
sasl.oauthbearer.jwks.endpoint.urlThe OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi..nullnull
sasl.oauthbearer.scope.claim.nameThe OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop..scopescope
sasl.oauthbearer.sub.claim.nameThe OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj..subsub
sasl.oauthbearer.token.endpoint.urlThe URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests..nullnull
security.protocolProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.PLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXT
security.providersA list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement ..nullnullnullnullnullnullnullnull
send.buffer.bytesThe size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.131072131072131072131072131072131072131072131072131072131072131072131072131072131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)
session.timeout.msThe timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to i..3000030000100001000010000100001000010000100001000010000100001000010000 (10 seconds)10000 (10 seconds)10000 (10 seconds)45000 (45 seconds)45000 (45 seconds)45000 (45 seconds)
socket.connection.setup.timeout.max.msThe maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc..127000 (127 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
socket.connection.setup.timeout.msThe amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim..10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)
ssl.cipher.suitesA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.enabled.protocolsThe list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' ..[TLSv1.2, TLSv1.1, TLSv1][TLSv1.2, TLSv1.1, TLSv1][TLSv1.2, TLSv1.1, TLSv1]TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2,TLSv1.3TLSv1.2
ssl.endpoint.identification.algorithmThe endpoint identification algorithm to validate server hostname using server certificate.nullnullnullnullnullnullnullhttpshttpshttpshttpshttpshttpshttpshttpshttpshttpshttpshttps
ssl.engine.factory.classThe class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache..nullnullnullnullnullnull
ssl.key.passwordThe password of the private key in the key store file or the PEM key specified in `ssl.keystore.key'. This is required for clients..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keymanager.algorithmThe algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t..SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509
ssl.keystore.certificate.chainCertificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list ..nullnullnullnullnull
ssl.keystore.keyPrivate key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. ..nullnullnullnullnull
ssl.keystore.locationThe location of the key store file. This is optional for client and can be used for two-way authentication for client.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keystore.passwordThe store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keystore.typeThe file format of the key store file. This is optional for client.JKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKS
ssl.protocolThe SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise..TLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.3TLSv1.2
ssl.providerThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.secure.random.implementationThe SecureRandom PRNG implementation to use for SSL cryptography operations.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.trustmanager.algorithmThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f..PKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIX
ssl.truststore.certificatesTrusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X...nullnullnullnullnull
ssl.truststore.locationThe location of the trust store file.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.passwordThe password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.typeThe file format of the trust store file.JKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKS
value.deserializerDeserializer class for value that implements the org.apache.kafka.common.serialization.Deserializer interface.nullnullnullnullnullnullnullnullnullnullnull
auto.commit.enableIf true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be us..truetruetruetruetruetruetruetruetruetruetrue
consumer.idGenerated automatically if not set.nullnullnullnullnullnullnullnullnullnullnull
consumer.timeout.msThrow a timeout exception to the consumer if no message is available for consumption after the specified interval-1-1-1-1-1-1-1-1-1-1-1-1
dual.commit.enabledIf you are using "kafka" as offsets.storage, you can dual commit offsets to ZooKeeper (in addition to Kafka). This is required dur..truetruetruetruetruetruetruetruetrue
fetch.message.max.bytesThe number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into ..1024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 10241024 * 1024
fetch.wait.max.msThe maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately ..100100100100100100100100100100100
num.consumer.fetchersThe number of fetcher threads used to fetch data.111111111
offsets.channel.backoff.msThe backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.100010001000100010001000100010001000
offsets.channel.socket.timeout.msSocket timeout when reading responses for offset fetch/commit requests. This timeout is also used for ConsumerMetadata requests th..100001000010000100001000010000100001000010000
offsets.commit.max.retriesRetry the offset commit up to this many times on failure. This retry count only applies to offset commits during shut-down. It doe..555555555
offsets.storageSelect where offsets should be stored (zookeeper or kafka).zookeeperzookeeperzookeeperzookeeperzookeeperzookeeperzookeeperzookeeperzookeeper
queued.max.message.chunksMax number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes.1010222222222
rebalance.backoff.msBackoff time between retries during rebalance. If not set explicitly, the value in zookeeper.sync.time.ms is used.20002000200020002000200020002000200020002000
rebalance.max.retriesWhen a new consumer joins a consumer group the set of consumers attempt to "rebalance" the load to assign partitions to each consu..44444444444
refresh.leader.backoff.msBackoff time to wait before trying to determine the leader of a partition that has just lost its leader.200200200200200200200200200200200
socket.receive.buffer.bytesThe socket receive buffer (SO_RCVBUF) for network requests.64 * 102464 * 102464 * 102464 * 102464 * 102464 * 102464 * 102464 * 102464 * 102464 * 102464 * 1024
socket.timeout.msThe socket timeout for network requests. The actual timeout set will be fetch.wait.max.ms + socket.timeout.ms.3000030 * 100030 * 100030 * 100030 * 100030 * 100030 * 100030 * 100030 * 100030 * 100030 * 100030 * 1000
zookeeper.connectSpecifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve..nullnullnullnullnullnullnullnullnullnullnull
zookeeper.connection.timeout.msThe max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms i..60006000600060006000600060006000600060006000
zookeeper.session.timeout.msZookeeper session timeout60006000600060006000600060006000600060006000
zookeeper.sync.time.msHow far a ZK follower can be behind a ZK leader20002000200020002000200020002000200020002000
autocommit.enableIf set to true, the consumer periodically commits to zookeeper the latest consumed offset of each partition.true
autocommit.interval.msThe frequency at which consumed offsets are committed to zookeeper.10000
autooffset.reset"smallest": automatically reset the offset to the smallest offset available on the broker; "largest": automatically reset the offset..smallest
backoff.increment.msThis parameter avoids repeatedly polling a broker node which has no new data. We will backoff every time we get an empty set from ..1000
fetch.sizeControls the number of bytes of messages to attempt to fetch in one request to the Kafka server300 * 1024
groupidA string that uniquely identifies a set of consumers within the same consumer group.groupid
mirror.consumer.numthreadsThe number of threads to use per topic for the mirroring consumer4
mirror.topics.blacklistTopics to skip mirroring. At most one of whitelist/blacklist may be specified.
mirror.topics.whitelistWhitelist of topics for this mirror's embedded consumer to consume. At most one of whitelist/blacklist may be specified.
queuedchunks.maxThe high-level consumer buffers the messages fetched from the server internally in blocking queues. This parameter controls the si..100
rebalance.retries.maxMax number of retries during rebalance4
socket.buffersizeControls the socket receive buffer for network requests64*1024
Producer configs: property, description, then the default in each release from left to right: 0.7, 0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0, 0.10.1, 0.10.2, 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2
acksThe number of acknowledgments the producer requires the leader to have received before considering a request complete. This contro..111111111111111111allallall
batch.sizeThe producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same parti..200163841638416384163841638416384163841638416384163841638416384163841638416384163841638416384163841638416384
bootstrap.serversA list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser..nullnullnullnullnullnullnullnull
buffer.memoryThe total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than..335544323355443233554432335544323355443233554432335544323355443233554432335544323355443233554432335544323355443233554432335544323355443233554432335544323355443233554432
client.dns.lookupControls how the client uses DNS lookups. If set to use_all_dns_ips, connect to each returned IP address in sequence until a succe..defaultdefaultdefaultdefaultdefaultuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ipsuse_all_dns_ips
client.idAn id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond..
compression.typeSpecify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'..nonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenonenone
connections.max.idle.msClose idle connections after the number of milliseconds specified by this config.540000540000540000540000540000540000540000540000540000540000540000540000540000540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)540000 (9 minutes)
delivery.timeout.msAn upper bound on the time to report success or failure after a call to send() returns. This limits the total time that a record w..120000120000120000120000120000120000 (2 minutes)120000 (2 minutes)120000 (2 minutes)120000 (2 minutes)120000 (2 minutes)120000 (2 minutes)
enable.idempotenceWhen set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer ..falsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsetruetruetrue
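The idempotence and acks rows above are connected: idempotence requires acks=all, and from 3.0 both are the defaults. A minimal producer sketch making the pairing explicit; the broker address, topic, linger value, and class name are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class IdempotentProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("enable.idempotence", "true"); // explicit before 3.0; the default from 3.0 on
            props.put("acks", "all");                // required for idempotence; default since 3.0
            props.put("linger.ms", "5");             // small batching delay; a tuning assumption
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("events", "key", "value")); // placeholder topic
            }
        }
    }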
interceptor.classesA list of classes to use as interceptors. Implementing the org.apache.kafka.clients.producer.ProducerInterceptor interface allows ..nullnullnullnullnull
key.serializerSerializer class for key that implements the org.apache.kafka.common.serialization.Serializer interface.nullnullnullnullnullnullnullnullnullnullnull
linger.msThe producer groups together any records that arrive in between request transmissions into a single batched request. Normally this..000000000000000000000
max.block.msThe configuration controls how long the KafkaProducer's send(), partitionsFor(), initTransactions(), sendOffsetsToTransaction(), c..6000060000600006000060000600006000060000600006000060000600006000060000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)
max.in.flight.requests.per.connectionThe maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this confi..5555555555555555555
max.request.sizeThe maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single re..104857610485761048576104857610485761048576104857610485761048576104857610485761048576104857610485761048576104857610485761048576104857610485761048576
metadata.max.age.msThe period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha..300000300000300000300000300000300000300000300000300000300000300000300000300000300000300000300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)
metadata.max.idle.msControls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to..300000300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)300000 (5 minutes)
metric.reportersA list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p..[][][][][]
metrics.num.samplesThe number of samples maintained to compute metrics.222222222222222222222
metrics.recording.levelThe highest recording level for metrics.INFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFOINFO
metrics.sample.window.msThe window of time a metrics sample is computed over.30000300003000030000300003000030000300003000030000300003000030000300003000030000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
partitioner.classA class used to determine which partition records are sent to when producing.kafka.producer.DefaultPartitioner (0.7 through 0.8.2; partitions by hash(key)%num_partitions and picks a random partition when the key is null)org.apache.kafka.clients.producer.internals.DefaultPartitioner (0.9.0 through 3.2)
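The old hash(key)%num_partitions behaviour can be approximated with a custom partitioner. A sketch under the hypothetical class name KeyHashPartitioner; this is not the shipped DefaultPartitioner:

    import java.util.Arrays;
    import java.util.Map;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;

    public class KeyHashPartitioner implements Partitioner {
        @Override
        public void configure(Map<String, ?> configs) {}

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            int numPartitions = cluster.partitionsForTopic(topic).size();
            if (keyBytes == null) {
                return 0; // simplification: real partitioners spread keyless records around
            }
            return Math.floorMod(Arrays.hashCode(keyBytes), numPartitions);
        }

        @Override
        public void close() {}
    }

It would then be registered with props.put("partitioner.class", KeyHashPartitioner.class.getName()).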
receive.buffer.bytesThe size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.32768327683276832768327683276832768327683276832768327683276832768327683276832768 (32 kibibytes)32768 (32 kibibytes)32768 (32 kibibytes)32768 (32 kibibytes)32768 (32 kibibytes)32768 (32 kibibytes)
reconnect.backoff.max.msThe maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide..1000100010001000100010001000100010001000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)1000 (1 second)
reconnect.backoff.msThe base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t..101050505050505050505050505050505050505050
request.timeout.msThe configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r..1000010000100003000030000300003000030000300003000030000300003000030000300003000030000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
retriesSetting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is..000000000021474836472147483647214748364721474836472147483647214748364721474836472147483647214748364721474836472147483647
retry.backoff.msThe amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending ..100100100100100100100100100100100100100100100100100100100100100100
sasl.client.callback.handler.classThe fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.nullnullnullnullnullnullnullnullnullnullnullnull
sasl.jaas.configJAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format ..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
sasl.kerberos.kinit.cmdKerberos kinit command path./usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit/usr/bin/kinit
sasl.kerberos.min.time.before.reloginLogin thread sleep time between refresh attempts.60000600006000060000600006000060000600006000060000600006000060000600006000060000600006000060000
sasl.kerberos.service.nameThe Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
sasl.kerberos.ticket.renew.jitterPercentage of random jitter added to the renewal time.0.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.050.05
sasl.kerberos.ticket.renew.window.factorLogin thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which ..0.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.80.8
sasl.login.callback.handler.classThe fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For bro..nullnullnullnullnullnullnullnullnullnullnullnull
sasl.login.classThe fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener ..nullnullnullnullnullnullnullnullnullnullnullnull
sasl.login.connect.timeout.msThe (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHB..nullnull
sasl.login.read.timeout.msThe (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER.nullnull
sasl.login.refresh.buffer.secondsThe amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would ot..300300300300300300300300300300300300
sasl.login.refresh.min.period.secondsThe desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between..606060606060606060606060
sasl.login.refresh.window.factorLogin refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which..0.80.80.80.80.80.80.80.80.80.80.80.8
sasl.login.refresh.window.jitterThe maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. ..0.050.050.050.050.050.050.050.050.050.050.050.05
sasl.login.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login us..10000 (10 seconds)10000 (10 seconds)
sasl.login.retry.backoff.msThe (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login us..100100
sasl.mechanismSASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the de..GSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPIGSSAPI
sasl.oauthbearer.clock.skew.secondsThe (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker.3030
sasl.oauthbearer.expected.audienceThe (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. ..nullnull
sasl.oauthbearer.expected.issuerThe (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected ..nullnull
sasl.oauthbearer.jwks.endpoint.refresh.msThe (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the..3600000 (1 hour)3600000 (1 hour)
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.msThe (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the extern..10000 (10 seconds)10000 (10 seconds)
sasl.oauthbearer.jwks.endpoint.retry.backoff.msThe (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external aut..100100
sasl.oauthbearer.jwks.endpoint.urlThe OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or fi..nullnull
sasl.oauthbearer.scope.claim.nameThe OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scop..scopescope
sasl.oauthbearer.sub.claim.nameThe OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subj..subsub
sasl.oauthbearer.token.endpoint.urlThe URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests..nullnull
security.protocolProtocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.PLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXTPLAINTEXT
security.providersA list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement ..nullnullnullnullnullnullnullnull
send.buffer.bytesThe size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.100 * 1024100 * 1024100 * 1024131072131072131072131072131072131072131072131072131072131072131072131072131072131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)131072 (128 kibibytes)
socket.connection.setup.timeout.max.msThe maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will inc..127000 (127 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)30000 (30 seconds)
socket.connection.setup.timeout.msThe amount of time the client will wait for the socket connection to be established. If the connection is not built before the tim..10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)10000 (10 seconds)
ssl.cipher.suitesA list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotia..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.enabled.protocolsThe list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' ..[TLSv1.2, TLSv1.1, TLSv1][TLSv1.2, TLSv1.1, TLSv1][TLSv1.2, TLSv1.1, TLSv1]TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2,TLSv1.1,TLSv1TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2,TLSv1.3TLSv1.2
ssl.endpoint.identification.algorithmThe endpoint identification algorithm to validate server hostname using server certificate.nullnullnullnullnullnullnullhttpshttpshttpshttpshttpshttpshttpshttpshttpshttpshttpshttps
ssl.engine.factory.classThe class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache..nullnullnullnullnullnull
ssl.key.passwordThe password of the private key in the key store file or the PEM key specified in `ssl.keystore.key'. This is required for clients..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keymanager.algorithmThe algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for t..SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509SunX509
ssl.keystore.certificate.chainCertificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list ..nullnullnullnullnull
ssl.keystore.keyPrivate key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. ..nullnullnullnullnull
ssl.keystore.locationThe location of the key store file. This is optional for client and can be used for two-way authentication for client.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keystore.passwordThe store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. K..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.keystore.typeThe file format of the key store file. This is optional for client.JKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKS
ssl.protocolThe SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise..TLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSTLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.2TLSv1.3TLSv1.2
ssl.providerThe name of the security provider used for SSL connections. Default value is the default security provider of the JVM.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.secure.random.implementationThe SecureRandom PRNG implementation to use for SSL cryptography operations.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.trustmanager.algorithmThe algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured f..PKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIXPKIX
ssl.truststore.certificatesTrusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X...nullnullnullnullnull
ssl.truststore.locationThe location of the trust store file.nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.passwordThe password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity che..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
ssl.truststore.typeThe file format of the trust store file.JKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKSJKS
transaction.timeout.msThe maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer befo..60000600006000060000600006000060000600006000060000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)
transactional.idThe TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions si..nullnullnullnullnullnullnullnullnullnullnullnullnullnullnull
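transactional.id, transaction.timeout.ms, and the consumer's isolation.level work together. A transactional-producer sketch under assumed placeholder names (topic "orders", id "orders-processor-1"); error handling is deliberately simplified:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TransactionalProducerSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");    // placeholder
            props.put("transactional.id", "orders-processor-1"); // hypothetical; unique per producer instance
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions(); // fences older producers with the same transactional.id
                producer.beginTransaction();
                try {
                    producer.send(new ProducerRecord<>("orders", "k", "v")); // placeholder topic
                    // Consumers with isolation.level=read_committed see this atomically.
                    producer.commitTransaction();
                } catch (Exception e) {
                    producer.abortTransaction(); // simplified: fencing errors should not be retried
                    throw e;
                }
            }
        }
    }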
value.serializerSerializer class for value that implements the org.apache.kafka.common.serialization.Serializer interface.nullnullnullnullnullnullnullnullnullnullnull
block.on.buffer.fullWhen our memory buffer is exhausted we must either stop accepting new records (block) or throw errors. By default this setting is ..truetruefalsefalsefalsefalse
metadata.fetch.timeout.msThe first time data is sent to a topic we must fetch metadata about that topic to know which servers host the topic's partitions. ..600006000060000600006000060000
timeout.msThe configuration controls the maximum amount of time the server will wait for acknowledgments from followers to meet the acknowle..300003000030000300003000030000
batch.num.messagesThe number of messages to send in one batch when using async mode. The producer will wait until either this number of messages are..200200200
compressed.topicsThis parameter allows you to set whether compression should be turned on for particular topics. If the compression codec is anythi..nullnullnullnull
compression.codecThis parameter allows you to specify the compression codec for all data generated by this producer. Valid values are "none", "gzip..0 (No compression)nonenonenone
key.serializer.classThe serializer class for keys (defaults to the same as for messages if nothing is given).nullnullnull
message.send.max.retriesThis property will cause the producer to automatically retry a failed send request. This property specifies the number of retries ..333
metadata.broker.listThis is for bootstrapping and the producer will only use it for getting metadata (topics, partitions and replicas). The socket con..nullnullnull
producer.typeThis parameter specifies whether the messages are sent asynchronously in a background thread. Valid values are (1) async for async..syncsyncsyncsync
queue.buffering.max.messagesThe maximum number of unsent messages that can be queued up the producer when using async mode before either the producer must be ..100001000010000
queue.buffering.max.msMaximum time to buffer data when using async mode. For example a setting of 100 will try to batch together 100ms of messages to se..500050005000
queue.enqueue.timeout.msThe amount of time to block before dropping messages when running in async mode and the buffer has reached queue.buffering.max.mes..-1-1-1
request.required.acksThis value controls when a produce request is considered completed. Specifically, how many other brokers must have committed the d..000
serializer.classThe serializer class for messages. The default encoder takes a byte[] and returns the same byte[].kafka.serializer.DefaultEncoder (a no-op encoder; serialization of data to Message should be handled outside the producer)kafka.serializer.DefaultEncoderkafka.serializer.DefaultEncoderkafka.serializer.DefaultEncoder
topic.metadata.refresh.interval.msThe producer generally refreshes the topic metadata from brokers when there is a failure (partition missing, leader not available...600 * 1000600 * 1000600 * 1000
broker.listFor bypassing zookeeper based auto partition discovery, use this config to pass in static broker and per-broker partition informat..null. Either this parameter or zk.connect needs to be specified by the user.
buffer.sizeThe socket buffer size, in bytes102400
callback.handlerThe class that implements kafka.producer.async.CallbackHandler<T>, used to inject callbacks at various stages of the kafka.producer..null
callback.handler.propsThe java.util.Properties object used to initialize the custom callback.handler through its init() APInull
connect.timeout.msThe maximum time spent by kafka.producer.SyncProducer trying to connect to the kafka broker. Once it elapses, the producer throws ..5000
event.handlerThe class that implements kafka.producer.async.IEventHandler<T>, used to dispatch a batch of produce requests, using an instance of..kafka.producer.async.EventHandler<T>
event.handler.propsThe java.util.Properties object used to initialize the custom event.handler through its init() APInull
max.message.sizeThe maximum number of bytes that the kafka.producer.SyncProducer can send as a single message payload1000000
queue.sizeThe maximum size of the blocking queue for buffering on the kafka.producer.AsyncProducer10000
queue.timeMaximum time, in milliseconds, for buffering data on the producer queue. After it elapses, the buffered data in the producer queue..5000
reconnect.intervalThe number of produce requests after which kafka.producer.SyncProducer tears down the socket connection to the broker and establis..30000
reconnect.time.interval.msThe amount of time after which kafka.producer.SyncProducer tears down the socket connection to the broker and establishes it again..10 * 1000 * 1000
socket.timeout.msThe socket timeout for network requests.30000
zk.connectFor using the zookeeper based automatic broker discovery, use this config to pass in the zookeeper connection url to the zookeeper..null. Either this parameter or broker.partition.info needs to be specified by the user
zk.read.num.retriesThe producer using the zookeeper software load balancer maintains a ZK cache that gets updated by the zookeeper watcher listeners...3
Topic configs: property, description, then the default in each release from left to right: 0.11.0, 1.0, 1.1, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 3.0, 3.1, 3.2
cleanup.policyA string that is either "delete" or "compact" or both. This string designates the retention policy to use on old log segments. The..deletedeletedeletedeletedeletedeletedeletedeletedeletedeletedeletedeletedeletedeletedelete
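Topic-level settings such as cleanup.policy are usually supplied at creation time. A sketch using the Admin API; the topic name, partition count, and replication factor are placeholders:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateCompactedTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("user-profiles", 6, (short) 3) // placeholder sizing
                        .configs(Map.of("cleanup.policy", "compact"));
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }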
compression.typeSpecify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy'..producerproducerproducerproducerproducerproducerproducerproducerproducerproducerproducerproducerproducerproducerproducer
delete.retention.msThe amount of time to retain delete tombstone markers for log compacted topics. This setting also gives a bound on the time in whi..86400000864000008640000086400000864000008640000086400000864000008640000086400000 (1 day)86400000 (1 day)86400000 (1 day)86400000 (1 day)86400000 (1 day)86400000 (1 day)
file.delete.delay.msThe time to wait before deleting a file from the filesystem60000600006000060000600006000060000600006000060000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)60000 (1 minute)
flush.messagesThis setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set..922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
flush.msThis setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was..922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
follower.replication.throttled.replicasA list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas ..
index.interval.bytesThis setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a me..4096409640964096409640964096409640964096 (4 kibibytes)4096 (4 kibibytes)4096 (4 kibibytes)4096 (4 kibibytes)4096 (4 kibibytes)4096 (4 kibibytes)
leader.replication.throttled.replicasA list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in..
max.compaction.lag.msThe maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
max.message.bytesThe largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are c..100001210000121000012100001210000121000012100001210000121048588104858810485881048588104858810485881048588
message.downconversion.enableThis configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false, ..truetruetruetruetruetruetruetruetruetruetruetrue
message.format.version[DEPRECATED] Specify the message format version the broker will use to append messages to the logs. The value of this config is al..0.11.0-IV21.0-IV01.1-IV02.0-IV12.1-IV22.2-IV12.3-IV12.4-IV12.5-IV02.6-IV02.7-IV22.8-IV13.0-IV13.0-IV13.0-IV1
message.timestamp.difference.max.msThe maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. ..922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807922337203685477580792233720368547758079223372036854775807
message.timestamp.typeDefine whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or ..CreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTimeCreateTime
min.cleanable.dirty.ratioThis configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled). B..0.50.50.50.50.50.50.50.50.50.50.50.50.50.50.5
min.compaction.lag.msThe minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.000000000000000
min.insync.replicasWhen a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a ..111111111111111
preallocateTrue if we should preallocate the file on disk when creating a new log segment.falsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalse
retention.bytesThis configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old l..-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
retention.msThis configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we a..604800000604800000604800000604800000604800000604800000604800000604800000604800000604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)
segment.bytesThis configuration controls the segment file size for the log. Retention and cleaning is always done a file at a time so a larger ..1073741824107374182410737418241073741824107374182410737418241073741824107374182410737418241073741824 (1 gibibyte)1073741824 (1 gibibyte)1073741824 (1 gibibyte)1073741824 (1 gibibyte)1073741824 (1 gibibyte)1073741824 (1 gibibyte)
segment.index.bytesThis configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink i..10485760104857601048576010485760104857601048576010485760104857601048576010485760 (10 mebibytes)10485760 (10 mebibytes)10485760 (10 mebibytes)10485760 (10 mebibytes)10485760 (10 mebibytes)10485760 (10 mebibytes)
segment.jitter.msThe maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling000000000000000
segment.msThis configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to..604800000604800000604800000604800000604800000604800000604800000604800000604800000604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)604800000 (7 days)
unclean.leader.election.enableIndicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result ..falsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalsefalse
local.retention.bytesThe maximum size of local log segments that can grow for a partition before it deletes the old segments. Default value is -2, it r..-2
local.retention.msThe number of milli seconds to keep the local log segment before it gets deleted. Default value is -2, it represents `retention.ms..-2
remote.storage.enableTo enable tier storage for a topic, set `remote.storage.enable` as true. You can not disable this config once it is enabled. It wi..false
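All of these topic-level settings can be overridden per topic at runtime through the Admin API (or the equivalent kafka-configs.sh tool). A minimal sketch using incrementalAlterConfigs; the bootstrap address, topic name, and the two overridden configs are placeholders chosen for illustration:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterTopicConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address

        try (Admin admin = Admin.create(props)) {
            // Target the topic-level ConfigResource; "my-topic" is a placeholder name.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            Collection<AlterConfigOp> ops = Arrays.asList(
                new AlterConfigOp(new ConfigEntry("cleanup.policy", "compact"),
                                  AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("min.cleanable.dirty.ratio", "0.1"),
                                  AlterConfigOp.OpType.SET));
            // SET overrides the defaults listed above; DELETE would revert to them.
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}
```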
| Streams config | Description | Default (0.10.0 – 3.2) |
|---|---|---|
| acceptable.recovery.lag | The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up enough to receive an active tas… | 10000 (since 2.6) |
| application.id | An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-… | null |
| application.server | A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this Ka… | "" (empty) |
| bootstrap.servers | A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all ser… | null |
| buffered.records.per.partition | Maximum number of records to buffer per partition. | 1000 |
| built.in.metrics.version | Version of the built-in metrics to use. | latest (since 2.4) |
| cache.max.bytes.buffering | Maximum number of memory bytes to be used for buffering across all threads | 10485760 (since 0.10.1) |
| client.id | An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern '<client.id>-StreamThread-<threadSequenceNumber>-<consumer \| producer \| restore-consumer>'. | "" (empty) |
| commit.interval.ms | The frequency in milliseconds with which to save the position of the processor. (Note, if processing.guarantee is set to exactly_o… | 30000 (30 seconds) |
| connections.max.idle.ms | Close idle connections after the number of milliseconds specified by this config. | 540000 (9 minutes) |
| default.deserialization.exception.handler | Exception handling class that implements the org.apache.kafka.streams.errors.DeserializationExceptionHandler interface. | org.apache.kafka.streams.errors.LogAndFailExceptionHandler (since 1.0) |
| default.dsl.store | The default state store type used by DSL operators. | rocksDB (since 3.2) |
| default.key.serde | Default serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. Note wh… | org.apache.kafka.common.serialization.Serdes$ByteArraySerde through 2.8; null from 3.0 |
| default.list.key.serde.inner | Default inner class of list serde for key that implements the org.apache.kafka.common.serialization.Serde interface. This configur… | null (since 3.0) |
| default.list.key.serde.type | Default class for key that implements the java.util.List interface. This configuration will be read if and only if default.key.ser… | null (since 3.0) |
| default.list.value.serde.inner | Default inner class of list serde for value that implements the org.apache.kafka.common.serialization.Serde interface. This config… | null (since 3.0) |
| default.list.value.serde.type | Default class for value that implements the java.util.List interface. This configuration will be read if and only if default.value… | null (since 3.0) |
| default.production.exception.handler | Exception handling class that implements the org.apache.kafka.streams.errors.ProductionExceptionHandler interface. | org.apache.kafka.streams.errors.DefaultProductionExceptionHandler (since 1.1) |
| default.timestamp.extractor | Default timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. | org.apache.kafka.streams.processor.FailOnInvalidTimestamp (since 0.11.0) |
| default.value.serde | Default serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. Note… | org.apache.kafka.common.serialization.Serdes$ByteArraySerde through 2.8; null from 3.0 |
| max.task.idle.ms | This config controls whether joins and merges may produce out-of-order results. The config value is the maximum amount of time in… | 0 (since 2.1) |
| max.warmup.replicas | The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the pur… | 2 (since 2.6) |
| metadata.max.age.ms | The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership cha… | 300000 (5 minutes) |
| metric.reporters | A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows p… | [] (empty list) |
| metrics.num.samples | The number of samples maintained to compute metrics. | 2 |
| metrics.recording.level | The highest recording level for metrics. | INFO (since 0.10.2) |
| metrics.sample.window.ms | The window of time a metrics sample is computed over. | 30000 (30 seconds) |
| num.standby.replicas | The number of standby replicas for each task. | 0 |
| num.stream.threads | The number of threads to execute stream processing. | 1 |
| poll.ms | The amount of time in milliseconds to block waiting for input. | 100 |
| probing.rebalance.interval.ms | The maximum time in milliseconds to wait before triggering a rebalance to probe for warmup replicas that have finished warming up… | 600000 (10 minutes) (since 2.6) |
| processing.guarantee | The processing guarantee that should be used. Possible values are at_least_once (default) and exactly_once_v2 (requires brokers ve… | at_least_once (since 0.11.0) |
| rack.aware.assignment.tags | List of client tag keys used to distribute standby replicas across Kafka Streams instances. When configured, Kafka Streams will ma… | "" (empty list) |
| receive.buffer.bytes | The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used. | 32768 (32 kibibytes) |
| reconnect.backoff.max.ms | The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provide… | 1000 (1 second) (since 0.11.0) |
| reconnect.backoff.ms | The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a t… | 50 |
| repartition.purge.interval.ms | The frequency in milliseconds with which to delete fully consumed records from repartition topics. Purging will occur after at lea… | 30000 (30 seconds) (since 3.2) |
| replication.factor | The replication factor for change log topics and repartition topics created by the stream processing application. The default of -… | 1 through 2.8; -1 from 3.0 |
| request.timeout.ms | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not r… | 40000 (40 seconds) |
| retries | Setting a value greater than zero will cause the client to resend any request that fails with a potentially transient error. It is… | 0 |
| retry.backoff.ms | The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending… | 100 |
| rocksdb.config.setter | A RocksDB config setter class or class name that implements the org.apache.kafka.streams.state.RocksDBConfigSetter interface | null |
| security.protocol | Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. | PLAINTEXT |
| send.buffer.bytes | The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used. | 131072 (128 kibibytes) |
| state.cleanup.delay.ms | The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have n… | 60000 in 0.10.x; 600000 (10 minutes) from 0.11.0 |
| state.dir | Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem. | /tmp/kafka-streams (some releases' generated docs instead show the doc build machine's temp directory, e.g. /var/folders/…/T//kafka-streams) |
| task.timeout.ms | The maximum amount of time in milliseconds a task might stall due to internal errors and retries until an error is raised. For a t… | 300000 (5 minutes) (since 2.8) |
| topology.optimization | A configuration telling Kafka Streams if it should optimize the topology, disabled by default | none (since 2.0) |
| upgrade.from | Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2… | null |
| window.size.ms | Sets window size for the deserializer in order to calculate window end times. | null (since 2.8) |
| windowed.inner.class.serde | Default serializer / deserializer for the inner class of a windowed record. Must implement the org.apache.kafka.common… | null (since 3.0) |
| windowstore.changelog.additional.retention.ms | Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day | 86400000 (1 day) |
| default.windowed.key.serde.inner | Default serializer / deserializer for the inner class of a windowed key. Must implement the org.apache.kafka.common.serialization.… | null (since 3.0) |
| default.windowed.value.serde.inner | Default serializer / deserializer for the inner class of a windowed value. Must implement the org.apache.kafka.common.serializatio… | null (since 3.0) |
| partition.grouper | Partition grouper class that implements the org.apache.kafka.streams.processor.PartitionGrouper interface. WARNING: This config is… | org.apache.kafka.streams.processor.DefaultPartitionGrouper |
| key.serde | Serializer / deserializer class for key that implements the org.apache.kafka.common.serialization.Serde interface. This config is… | org.apache.kafka.common.serialization.Serdes$ByteArraySerde in 0.10.x; null in 3.x |
| timestamp.extractor | Timestamp extractor class that implements the org.apache.kafka.streams.processor.TimestampExtractor interface. This config is depr… | org.apache.kafka.streams.processor.ConsumerRecordTimestampExtractor in 0.10.0/0.10.1, then FailOnInvalidTimestamp; null in 3.x |
| value.serde | Serializer / deserializer class for value that implements the org.apache.kafka.common.serialization.Serde interface. This config i… | org.apache.kafka.common.serialization.Serdes$ByteArraySerde in 0.10.x; null in 3.x |
| zookeeper.connect | Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper serve… | "" (empty) |
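In practice these settings are supplied through a java.util.Properties object, usually via the StreamsConfig constants. A minimal sketch; the application id, bootstrap address, and the specific overrides are placeholders chosen for illustration:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // application.id and bootstrap.servers are the two required settings (default null).
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-demo");    // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        // From 3.0 there is no default serde (the table shows null), so set them explicitly.
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Override a few of the defaults listed in the table above.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams"); // instead of /tmp
        // These properties would then be passed to the KafkaStreams constructor
        // together with a Topology.
        System.out.println(props);
    }
}
```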