hyperdx/docker-compose.ci.yml
Warren 523443eb7c
fix: aggregator should return 413 (Content Too Large) to make ingestor ARC work properly (#461)
500 errors cause ingestors to slow down due to ARC (Adaptive Request Concurrency, https://vector.dev/docs/reference/configuration/sinks/http/#request.adaptive_concurrency), which is the desired behavior for server-side failures.
However, for client errors like 413 (triggered by the aggregator's payload size limit), ingestors shouldn't slow down and become blocked.
The fix here is to return the original status code so that ARC tunes the concurrency up properly.
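The fix can be sketched as an Express-style global error handler that preserves a client error's status code instead of collapsing everything to 500. This is an illustrative sketch, not the actual aggregator code; the function names are hypothetical:

```javascript
// Map an error to the HTTP status the aggregator should return.
// Body-parser-style errors carry a numeric `status` field (e.g. 413 when
// the payload exceeds the configured size limit). Passing a 4xx through
// tells Vector's ARC the failure is the client's, so it doesn't back off.
function statusForError(err) {
  const status = err && typeof err.status === 'number' ? err.status : 500;
  // Forward well-formed client errors (4xx); treat everything else as 500.
  return status >= 400 && status < 500 ? status : 500;
}

// Express-style global error handler using the mapping above.
// (Express identifies error handlers by their 4-arg signature, so `next`
// must be declared even though it is unused here.)
function errorHandler(err, req, res, next) {
  res.status(statusForError(err)).send(err.message || 'Internal Server Error');
}
```

With this mapping, an oversized payload surfaces as 413 and ARC keeps ramping concurrency, while genuine server faults still return 500 and trigger backoff.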

Before:
The events buffer queues up and ingestors stay blocked (concurrency stays flat)

<img width="1170" alt="Screenshot 2024-07-08 at 4 46 10 PM" src="https://github.com/hyperdxio/hyperdx/assets/5959690/a149092a-b062-4508-b62b-a85116ac74b2">


Now:
The events buffer is cleared out properly (concurrency increases)

<img width="1237" alt="Screenshot 2024-07-08 at 4 43 22 PM" src="https://github.com/hyperdxio/hyperdx/assets/5959690/7871593b-0e2f-4a1e-960c-c27d90869514">

TODO: revisit the global error handler to make sure the proper status code is returned
2024-07-09 00:59:41 +00:00


version: '3'
services:
  ingestor:
    container_name: hdx-ci-ingestor
    build:
      context: ./docker/ingestor
      target: dev
    volumes:
      - ./docker/ingestor:/app
    ports:
      - 28686:8686 # healthcheck
      # - 8002:8002 # http-generic
    environment:
      AGGREGATOR_API_URL: 'http://aggregator:8001'
      ENABLE_GO_PARSER: 'true'
      GO_PARSER_API_URL: 'http://go-parser:7777'
      RUST_BACKTRACE: full
      VECTOR_LOG: ${HYPERDX_LOG_LEVEL}
      VECTOR_OPENSSL_LEGACY_PROVIDER: 'false'
    networks:
      - internal
  otel-collector:
    container_name: hdx-ci-otel-collector
    build:
      context: ./docker/otel-collector
      target: dev
    environment:
      HYPERDX_API_KEY: ${HYPERDX_API_KEY}
      HYPERDX_LOG_LEVEL: ${HYPERDX_LOG_LEVEL}
      INGESTOR_API_URL: 'http://ingestor:8002'
    volumes:
      - ./docker/otel-collector/config.yaml:/etc/otelcol-contrib/config.yaml
    ports:
      - '23133:13133' # health_check extension
      # - '1888:1888' # pprof extension
      # - '24225:24225' # fluentd receiver
      # - '4317:4317' # OTLP gRPC receiver
      # - '4318:4318' # OTLP http receiver
      # - '55679:55679' # zpages extension
      # - '8888:8888' # metrics extension
      # - '9411:9411' # zipkin
    networks:
      - internal
  ch_server:
    container_name: hdx-ci-ch-server
    image: clickhouse/clickhouse-server:23.8.8-alpine
    environment:
      # default settings
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
    volumes:
      - ./docker/clickhouse/local/config.xml:/etc/clickhouse-server/config.xml
      - ./docker/clickhouse/local/users.xml:/etc/clickhouse-server/users.xml
    restart: on-failure
    # ports:
    #   - 8123:8123 # http api
    #   - 9000:9000 # native
    networks:
      - internal
  db:
    container_name: hdx-ci-db
    image: mongo:5.0.14-focal
    command: --port 29999
    # ports:
    #   - 29999:29999
    networks:
      - internal
  redis:
    container_name: hdx-ci-redis
    image: redis:7.0.11-alpine
    # ports:
    #   - 6379:6379
    networks:
      - internal
  api:
    build:
      context: .
      dockerfile: ./packages/api/Dockerfile
      target: dev
    container_name: hdx-ci-api
    image: hyperdx/ci/api
    # ports:
    #   - 9000:9000
    environment:
      AGGREGATOR_PAYLOAD_SIZE_LIMIT: '64mb'
      APP_TYPE: 'api'
      CLICKHOUSE_HOST: http://ch_server:8123
      CLICKHOUSE_PASSWORD: api
      CLICKHOUSE_USER: api
      EXPRESS_SESSION_SECRET: 'hyperdx is cool 👋'
      FRONTEND_URL: http://localhost:9090 # needs to be localhost (CORS)
      MONGO_URI: 'mongodb://db:29999/hyperdx-test'
      NODE_ENV: ci
      PORT: 9000
      REDIS_URL: redis://redis:6379
      SERVER_URL: http://localhost:9000
    volumes:
      - ./packages/api/src:/app/src
    networks:
      - internal
    depends_on:
      - ch_server
      - db
      - ingestor
      - otel-collector
      - redis
networks:
  internal:
    name: 'hyperdx-ci-internal-network'