mirror of
https://github.com/LerianStudio/ring
synced 2026-04-21 13:37:27 +00:00
fix(dev-service-discovery): address PR review findings
Fix S3 upload double-extension by using basename of source path. Add collection prefix to index name derivation in Step 3 cross-reference. Clarify idx_*/uniq_* convention semantics based on real S3 data. Fix SKILL.md filename pattern to match reference spec.

X-Lerian-Ref: 0x1
parent b6e29134bc
commit 0f6bf9e2e4
2 changed files with 9 additions and 6 deletions
@@ -296,7 +296,7 @@ Summary of steps:
 1. **Detect in-code indexes** — scan `EnsureIndexes()` / `IndexModel{}` in MongoDB adapter files. Store keys as flat objects (e.g., `{"tenant_id": 1, "service_name": 1}`) — same format used by migration files
 2. **Detect existing migration files** — scan `scripts/mongodb/*.up.json` and `*.down.json` for existing per-index migration pairs
 3. **Cross-reference** — match in-code indexes against migration files (covered / missing_migration / migration_only)
-4. **Generate missing migration files** — for each missing index, create a `.up.json` and `.down.json` file pair. Each index is an atomic migration (one pair per index, NOT grouped by collection). Naming: `{NNNNNN}_{collection}_{index_name}.up.json` / `.down.json`. Optionally generate convenience `.js` scripts for manual `mongosh` execution (NOT uploaded to S3)
+4. **Generate missing migration files** — for each missing index, create a `.up.json` and `.down.json` file pair. Each index is an atomic migration (one pair per index, NOT grouped by collection). Naming: `{NNNNNN}_{index_name}.up.json` / `.down.json` (index_name already includes collection prefix, e.g., `idx_connection_org_config_name`). Optionally generate convenience `.js` scripts for manual `mongosh` execution (NOT uploaded to S3)
 5. **Upload to S3** — asks which bucket to use, then uploads `.up.json`/`.down.json` pairs following the migrations bucket convention: `s3://{bucket}/{service}/{module}/mongodb/`. The tenant-manager reads these files from S3 and applies them automatically — `.up.json` to create indexes when provisioning tenant databases, `.down.json` to drop indexes when rolling back or deprovisioning. Requires valid AWS credentials (verify with `aws sts get-caller-identity`). S3 upload failures are non-blocking — skill continues to Phase 4 with upload status reported in the HTML report
 
 Store results for Phase 4 report:
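The renamed per-index naming scheme in step 4 can be sketched as a quick shell check. The sequence number and index name here are assumed examples, not real repo data:

```shell
# Hypothetical sketch: {NNNNNN} sequence and index name are assumed examples
# following the {NNNNNN}_{index_name} pattern from step 4.
seq="000007"
index_name="idx_connection_org_config_name"   # collection prefix is part of the name
up_file="${seq}_${index_name}.up.json"
down_file="${seq}_${index_name}.down.json"
echo "$up_file"
echo "$down_file"
```

Because each pair covers exactly one index, a failed index creation can be rolled back without touching sibling indexes in the same collection.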
@@ -104,7 +104,7 @@ Compare in-code indexes vs migration files.
 Both sources use the same flat object format for keys (e.g., {"tenant_id": 1, "service_name": 1}).
 
 For each in-code index:
-- Compute index_name using naming convention (idx_{field}, idx_{f1}_{f2}, etc.)
+- Compute index_name using naming convention (idx_{collection}_{field}, idx_{collection}_{f1}_{f2}, etc.)
 - Find matching migration (same collection + same key fields in same order)
 - If found → status: "covered"
 - If NOT found → status: "missing_migration"
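The collection-prefixed derivation above can be sketched as follows; the collection and field names are assumed examples:

```shell
# Hypothetical sketch: collection and field names are assumed examples.
collection="connection"
fields="org config name"                      # key fields in index order
index_name="idx_${collection}_$(echo "$fields" | tr ' ' '_')"
echo "$index_name"
```

Note that the matching step itself compares collection plus key fields in order, so the derived name is a label for the migration file, not the match criterion.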
@@ -175,8 +175,10 @@ Each `.down.json` contains the drop instruction referencing the index name:
 ### Index Name Conventions
 
 Two naming prefixes are used:
-- `idx_*` — standard indexes (e.g., `idx_connection_org_config_name`)
-- `uniq_*` — unique constraint indexes (e.g., `uniq_job_org_hash_active`)
+- `idx_*` — general-purpose indexes, including unique indexes (e.g., `idx_connection_org_config_name` which has `"unique": true`)
+- `uniq_*` — used when the index's primary purpose is enforcing a uniqueness business constraint (e.g., `uniq_job_org_hash_active` prevents duplicate active jobs per org+hash)
+
+Both prefixes can have `"unique": true` in options. The distinction is semantic: `uniq_*` signals that uniqueness IS the business rule, while `idx_*` with unique signals a query-performance index that happens to enforce uniqueness.
 
 Rules:
 - Compound: concatenate field names with `_` (e.g., `idx_connection_org_product_config`)
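The `uniq_*` semantics above might look like the following hypothetical index definition; the field names are assumed, and the exact migration file envelope around this fragment may differ:

```json
{
  "key": { "org_id": 1, "content_hash": 1, "active": 1 },
  "name": "uniq_job_org_hash_active",
  "unique": true
}
```

An `idx_*` index could carry the same `"unique": true` option; only the prefix communicates whether uniqueness is the business rule or a side effect of a query-performance index.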
@@ -475,11 +477,12 @@ Each module's MongoDB migration files go into `s3://{bucket}/{service}/{module}/
 
 5. Upload each file pair to the correct module path (best-effort, continue on failure):
    - For each (file_pair, module):
+      # Use the local filename directly — it already has the correct .up.json/.down.json extension
       aws s3 cp {up_json_path} \
-        s3://{s3_bucket}/{service_name}/{module}/mongodb/(unknown).up.json \
+        s3://{s3_bucket}/{service_name}/{module}/mongodb/$(basename {up_json_path}) \
         --content-type "application/json"
       aws s3 cp {down_json_path} \
-        s3://{s3_bucket}/{service_name}/{module}/mongodb/(unknown).down.json \
+        s3://{s3_bucket}/{service_name}/{module}/mongodb/$(basename {down_json_path}) \
         --content-type "application/json"
    - If a single upload fails, log the error and continue with remaining files
    - Track: successful_uploads = [], failed_uploads = []
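The basename fix above can be verified with a minimal sketch; the bucket, service, and module below are placeholder values, not real configuration:

```shell
# Hypothetical sketch with placeholder bucket/service/module values.
up_json_path="scripts/mongodb/000007_idx_connection_org_config_name.up.json"
s3_prefix="s3://example-bucket/example-service/example-module/mongodb"
# basename strips the directory, so the existing .up.json extension is kept
# exactly once instead of being appended a second time.
s3_key="${s3_prefix}/$(basename "$up_json_path")"
echo "$s3_key"
```

Building the key from the template with the extension re-appended is what produced the double-extension bug this commit fixes.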