# Kafka

Connect your Apache Kafka clusters to enable Alex (Cloud Engineer) and Tony (Database Engineer) to monitor topic health, analyze consumer lag, and optimize streaming performance.

## Supported Platforms
| Platform | Support |
|---|---|
| Confluent Cloud | All tiers |
| Self-hosted Kafka | 2.8+ (KRaft mode), 3.x |
## Setup

Select your Kafka platform for specific connection instructions:

- Confluent Cloud
- Self-hosted Kafka
### Step 1: Open Confluent Cloud and pick your environment

Go to confluent.cloud/home, then open Environments. Click the environment you want to connect. The environment ID appears in the URL after you select it (for example, env-xxxxx).

Example navigation:

- Environment list: https://confluent.cloud/environments
- Selected environment URL pattern: https://confluent.cloud/environments/<env-id>/overview
### Step 2: Get Kafka cluster fields

Inside the selected environment, open Clusters and click your target cluster (for example, <cluster-name>). Collect:

- BOOTSTRAP_SERVERS
- KAFKA_REST_ENDPOINT
- KAFKA_CLUSTER_ID
- KAFKA_ENV_ID (the selected environment ID from Step 1)

### Step 3: Create scoped API keys and secrets
Go to confluent.cloud/settings/api-keys and click + Add API Key. Choose Service Account for production workloads, or My Account for development and testing. Select the desired scope in the Confluent onboarding flow, then save the generated API key and API secret pair.

Scopes you may create keys for:
- Kafka cluster
- Schema Registry
- ksqlDB cluster
- Flink region
- Cloud resource management
- Tableflow
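The Kafka-scope key pair created in this step is what client libraries use as SASL/PLAIN credentials against Confluent Cloud. A sketch of the client configuration in librdkafka style (the format used by the confluent-kafka Python package); the environment-variable names follow this page, and all fallback values are placeholders:

```python
import os

def kafka_client_config() -> dict:
    """Build a librdkafka-style config dict from environment variables.

    Confluent Cloud authenticates clients with SASL/PLAIN over TLS,
    using the API key as the username and the API secret as the password.
    Placeholder defaults are used when a variable is unset.
    """
    return {
        "bootstrap.servers": os.environ.get("BOOTSTRAP_SERVERS", "<bootstrap-servers>"),
        "security.protocol": "SASL_SSL",
        "sasl.mechanisms": "PLAIN",
        "sasl.username": os.environ.get("KAFKA_API_KEY", "<kafka-api-key>"),
        "sasl.password": os.environ.get("KAFKA_API_SECRET", "<kafka-api-secret>"),
    }
```

The resulting dict can be passed to a `Producer`, `Consumer`, or `AdminClient` from the confluent-kafka package.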
### Step 4: Get Schema Registry endpoint (optional)

In the selected environment, open Stream Governance → Schema Registry. Collect:

- SCHEMA_REGISTRY_ENDPOINT

Example URL: https://confluent.cloud/environments/<env-id>/stream-governance/schema-registry/overview

### Step 5: Get Flink fields (optional)
In the selected environment, open Flink. Open Compute pools and create a pool with + Add compute pool if needed. Click the target compute pool and collect:

- FLINK_COMPUTE_POOL_ID
- FLINK_ENV_ID (same environment ID from the URL)

Example URL: https://confluent.cloud/environments/<env-id>/flink/pools/<compute-pool-id>/overview

Set FLINK_REST_ENDPOINT from your cloud provider and region (AWS, Azure, or GCP; for example <region-code>).

### Step 6: Get organization ID (optional)
Go to confluent.cloud/settings/organizations/edit and collect:

- FLINK_ORG_ID
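The FLINK_REST_ENDPOINT from Step 5 is derived from your cloud provider and region. Confluent's regional Flink endpoints generally follow the pattern `https://flink.<region>.<cloud>.confluent.cloud`, but verify the exact host in the Confluent Cloud UI; the pattern here is an assumption, and the helper is hypothetical:

```python
def flink_rest_endpoint(cloud: str, region: str) -> str:
    """Assemble a Confluent Cloud Flink REST endpoint.

    Assumes the regional pattern https://flink.<region>.<cloud>.confluent.cloud,
    e.g. cloud="aws", region="us-east-1". Confirm against your environment's
    Flink settings before relying on it.
    """
    return f"https://flink.{region}.{cloud.lower()}.confluent.cloud"
```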
### Step 7: Add connection in CloudThinker

In CloudThinker, navigate to Connections → Kafka. Create a JSON file with the fields for the scopes you enabled (see Connection Field Template below), then upload it in the connection form. Required fields depend on your profile; see Profiles for details. You can leave optional scope fields out and add them later.
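Before uploading, the JSON file can be sanity-checked against the minimal required fields. A sketch, with the required set taken from the Minimal profile on this page (the `missing_fields` helper is illustrative, not part of CloudThinker):

```python
import json

# Kafka-scope fields this page lists as required for the Minimal profile.
REQUIRED = {
    "BOOTSTRAP_SERVERS",
    "KAFKA_API_KEY",
    "KAFKA_API_SECRET",
    "KAFKA_CLUSTER_ID",
    "KAFKA_ENV_ID",
}

def missing_fields(path: str) -> set:
    """Return required fields that are absent or empty in the JSON file."""
    with open(path) as f:
        data = json.load(f)
    return {k for k in REQUIRED if not data.get(k)}
```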
## Scope-Based Credential Model

Confluent Cloud uses scope-based API credentials: each API key and secret pair grants access to a specific resource scope. You can start with Kafka-only fields, then add Schema Registry, Flink, Cloud API, or Tableflow fields later.

| Scope | What It Unlocks | Typical Fields |
|---|---|---|
| Kafka cluster | Manage topics (list, create, delete, configure), produce/consume messages, view cluster metadata | BOOTSTRAP_SERVERS, KAFKA_API_KEY, KAFKA_API_SECRET, KAFKA_CLUSTER_ID, KAFKA_ENV_ID, KAFKA_REST_ENDPOINT |
| Schema Registry | List, inspect, and delete data schemas | SCHEMA_REGISTRY_ENDPOINT, SCHEMA_REGISTRY_API_KEY, SCHEMA_REGISTRY_API_SECRET |
| Flink region | Create and manage Flink SQL statements, explore catalogs/databases/tables, health checks and diagnostics | FLINK_REST_ENDPOINT, FLINK_API_KEY, FLINK_API_SECRET, FLINK_COMPUTE_POOL_ID, FLINK_ENV_ID |
| Cloud resource management | Discover environments and clusters, query operational metrics and billing costs | CONFLUENT_CLOUD_API_KEY, CONFLUENT_CLOUD_API_SECRET |
| Tableflow | Manage Tableflow-enabled topics and catalog integrations (e.g., AWS Glue) | TABLEFLOW_API_KEY, TABLEFLOW_API_SECRET |
| Organization metadata | Organization-level context for Flink resource management | FLINK_ORG_ID |
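Mirroring the table, a small helper can report which scopes a given set of fields unlocks. The scope-to-field mapping is copied from the table; treating a scope as enabled only when all of its fields are present is an assumption (CloudThinker's own validation may differ):

```python
# Scope groups and their fields, as listed in the table above.
SCOPE_FIELDS = {
    "kafka": {"BOOTSTRAP_SERVERS", "KAFKA_API_KEY", "KAFKA_API_SECRET",
              "KAFKA_CLUSTER_ID", "KAFKA_ENV_ID", "KAFKA_REST_ENDPOINT"},
    "schema_registry": {"SCHEMA_REGISTRY_ENDPOINT", "SCHEMA_REGISTRY_API_KEY",
                        "SCHEMA_REGISTRY_API_SECRET"},
    "flink": {"FLINK_REST_ENDPOINT", "FLINK_API_KEY", "FLINK_API_SECRET",
              "FLINK_COMPUTE_POOL_ID", "FLINK_ENV_ID"},
    "cloud": {"CONFLUENT_CLOUD_API_KEY", "CONFLUENT_CLOUD_API_SECRET"},
    "tableflow": {"TABLEFLOW_API_KEY", "TABLEFLOW_API_SECRET"},
}

def enabled_scopes(fields: dict) -> list:
    """List scopes for which every field is present and non-empty."""
    present = {k for k, v in fields.items() if v}
    return [scope for scope, needed in SCOPE_FIELDS.items()
            if needed <= present]
```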
## Profiles
### Minimal (Kafka-only)

Required:

- BOOTSTRAP_SERVERS
- KAFKA_API_KEY
- KAFKA_API_SECRET
- KAFKA_CLUSTER_ID
- KAFKA_ENV_ID
### Standard (Kafka + Schema Registry + Cloud Management)

Add:

- SCHEMA_REGISTRY_ENDPOINT
- SCHEMA_REGISTRY_API_KEY
- SCHEMA_REGISTRY_API_SECRET
- CONFLUENT_CLOUD_API_KEY
- CONFLUENT_CLOUD_API_SECRET
### Advanced (Flink / Tableflow)

Add one or more optional scope groups as needed:

- Flink: FLINK_REST_ENDPOINT, FLINK_API_KEY, FLINK_API_SECRET, FLINK_COMPUTE_POOL_ID, FLINK_ENV_ID
- Tableflow: TABLEFLOW_API_KEY, TABLEFLOW_API_SECRET
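Assembled from the field names above, a full connection file covering every scope might look like the following (every value is a placeholder; drop the keys for scope groups you have not enabled):

```json
{
  "BOOTSTRAP_SERVERS": "<bootstrap-servers>",
  "KAFKA_API_KEY": "<kafka-api-key>",
  "KAFKA_API_SECRET": "<kafka-api-secret>",
  "KAFKA_CLUSTER_ID": "<cluster-id>",
  "KAFKA_ENV_ID": "<env-id>",
  "KAFKA_REST_ENDPOINT": "<kafka-rest-endpoint>",
  "SCHEMA_REGISTRY_ENDPOINT": "<schema-registry-endpoint>",
  "SCHEMA_REGISTRY_API_KEY": "<schema-registry-api-key>",
  "SCHEMA_REGISTRY_API_SECRET": "<schema-registry-api-secret>",
  "CONFLUENT_CLOUD_API_KEY": "<cloud-api-key>",
  "CONFLUENT_CLOUD_API_SECRET": "<cloud-api-secret>",
  "FLINK_REST_ENDPOINT": "<flink-rest-endpoint>",
  "FLINK_API_KEY": "<flink-api-key>",
  "FLINK_API_SECRET": "<flink-api-secret>",
  "FLINK_COMPUTE_POOL_ID": "<compute-pool-id>",
  "FLINK_ENV_ID": "<env-id>",
  "FLINK_ORG_ID": "<org-id>",
  "TABLEFLOW_API_KEY": "<tableflow-api-key>",
  "TABLEFLOW_API_SECRET": "<tableflow-api-secret>"
}
```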
## Connection Field Template

Fill in values only for the scopes you enabled; the field names for each scope are listed in the Scope-Based Credential Model table above.

## Agent Capabilities

Once connected, Alex and Tony can:

| Capability | Description |
|---|---|
| Consumer Lag Monitoring | Track lag per consumer group, identify slow consumers |
| Topic Health Analysis | Check partition distribution, replication factor, under-replicated partitions |
| Throughput Metrics | Monitor bytes in/out, message rates per topic |
| Broker Health | Track broker availability, ISR (In-Sync Replicas) status |
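Consumer lag, the first capability above, is simply the gap between a partition's log-end offset and the consumer group's committed offset. A minimal illustration (the `consumer_lag` helper is hypothetical; real monitoring reads both offsets from the cluster):

```python
def consumer_lag(log_end_offsets: dict, committed_offsets: dict) -> dict:
    """Per-partition lag: how far the group's committed position trails the log end.

    A partition with no committed offset is treated as fully lagging here;
    that is an assumption, since some tools report it as 'unknown' instead.
    """
    return {
        partition: end - committed_offsets.get(partition, 0)
        for partition, end in log_end_offsets.items()
    }

# Example: partition 0 fully caught up, partition 1 lagging by 150 messages.
print(consumer_lag({0: 1000, 1: 500}, {0: 1000, 1: 350}))
# → {0: 0, 1: 150}
```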
## Example Prompts

- "Which consumer groups are lagging, and on which topics?"
- "Check my cluster for under-replicated partitions."
- "Show message throughput per topic for the last hour."
## Troubleshooting

### Connection refused or timeout
- Verify the Kafka broker process is running on <broker-name>.<your-domain>.
- Check that the broker port (default 9092) is open and not blocked by a firewall.
- Verify the bootstrap server address <broker-name>.<your-domain>:9092 is correct and reachable from CloudThinker.
- For local development, ensure Kafka is bound to an accessible IP (not just 127.0.0.1).
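The reachability checks above can be automated with a plain TCP probe. Note this does not speak the Kafka protocol, so it only proves the port accepts connections, not that the broker is healthy:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `is_reachable("<broker-name>.<your-domain>", 9092)` run from the CloudThinker side quickly distinguishes a firewall or binding problem from a broker-level one.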
### Confluent Cloud partial scope fields

When using partial scope onboarding, remove the entire key-value pair for unused scopes. Do not leave empty strings; empty string values cause validation errors.
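For illustration (all values are placeholders), a correct Kafka-only file omits the unused Schema Registry keys entirely:

```json
{
  "BOOTSTRAP_SERVERS": "<bootstrap-servers>",
  "KAFKA_API_KEY": "<kafka-api-key>",
  "KAFKA_API_SECRET": "<kafka-api-secret>",
  "KAFKA_CLUSTER_ID": "<cluster-id>",
  "KAFKA_ENV_ID": "<env-id>"
}
```

An incorrect file leaves the unused keys in place with empty string values, which triggers validation errors:

```json
{
  "BOOTSTRAP_SERVERS": "<bootstrap-servers>",
  "KAFKA_API_KEY": "<kafka-api-key>",
  "KAFKA_API_SECRET": "<kafka-api-secret>",
  "KAFKA_CLUSTER_ID": "<cluster-id>",
  "KAFKA_ENV_ID": "<env-id>",
  "SCHEMA_REGISTRY_ENDPOINT": "",
  "SCHEMA_REGISTRY_API_KEY": "",
  "SCHEMA_REGISTRY_API_SECRET": ""
}
```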
## Security Best Practices

### For Confluent Cloud

- Network restrictions: restrict Kafka access to CloudThinker IPs via security groups.
- Secure credentials: store secrets in a secure manager and rotate keys regularly.

### For Self-hosted Kafka

- Network restrictions: restrict broker access to CloudThinker IPs via firewalls.
- Private networks: keep brokers in private subnets, not exposed to the public internet.
CloudThinker supports partial scope onboarding. If you only provide Kafka scope fields first, you can still create the connection and add Schema Registry, Flink, Cloud API, or Tableflow credentials later.
## Related

- Alex Agent: cloud infrastructure and streaming optimization agent
- AWS Connection: setup instructions for AWS cloud resources