Compare commits

...

245 commits

Author SHA1 Message Date
Aditya Raj
6256abf182
fix(cli): uses DrySource revision for app diff/manifests with sourceHydrator (#23817) (#24670)
Signed-off-by: Aditya Raj <adityaraj10600@gmail.com>
2026-04-21 12:51:39 -04:00
dependabot[bot]
b01aa188fd
chore(deps): bump tj-actions/changed-files from 47.0.5 to 47.0.6 (#27470)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-21 12:12:31 -04:00
dependabot[bot]
a7853eb7b6
chore(deps): bump step-security/harden-runner from 2.18.0 to 2.19.0 (#27471)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 21:41:33 -10:00
CPunia
e0e827dab0
fix: downgrade DiffFromCache log level for cache-miss errors (#26185)
Signed-off-by: CPunia <67651406+cp319391@users.noreply.github.com>
2026-04-20 19:15:35 -04:00
shiiyan
74d1fe0a13
feat(ui): use toggle-auto-sync resource action in app details page (#21564) (#27226)
Signed-off-by: SY <shiiyan79@gmail.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-20 22:17:34 +00:00
Peter Jiang
b74c08ec5c
fix: remove resourceVersion from ssd (#27406)
Signed-off-by: Peter Jiang <peterjiang823@gmail.com>
2026-04-20 13:27:41 -04:00
Alexandre Gaudreault
032d9e1e80
refactor: simplify UpdateRevisionForPaths logic and add early returns (#27190)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-20 10:03:55 -04:00
Alex Recuenco
b37d389f62
chore: Lint change, Prevent class components from being created (#27420)
Signed-off-by: alexrecuenco <26118630+alexrecuenco@users.noreply.github.com>
Signed-off-by: Alex Recuenco <26118630+alexrecuenco@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-20 09:25:29 -04:00
dependabot[bot]
26f71b3159
chore(deps): bump actions/setup-node from 6.3.0 to 6.4.0 (#27452)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 08:27:30 -04:00
dependabot[bot]
99b10b5e29
chore(deps): bump renovatebot/github-action from 46.1.9 to 46.1.10 (#27453)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 08:26:58 -04:00
dependabot[bot]
f54cc0bc61
chore(deps): bump github.com/go-openapi/runtime from 0.29.3 to 0.29.4 (#27457)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 08:26:27 -04:00
Mangaal Meetei
5103112b9a
docs: Promote ApplicationSet in any namespace to stable (#27417)
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
2026-04-20 14:04:25 +03:00
dependabot[bot]
5a4a551478
chore(deps): bump the aws-sdk-v2 group with 6 updates (#27455)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 11:48:05 +02:00
Vilius Puskunalis
611fcb012c
feat: add sync overrun option to sync windows (#25361) (#25510)
Signed-off-by: Vilius Puškunalis <47086537+puskunalis@users.noreply.github.com>
2026-04-20 06:55:13 +00:00
dependabot[bot]
9c8ae9a294
chore(deps): bump github.com/dlclark/regexp2 from 1.11.5 to 1.12.0 (#27456)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 08:21:34 +02:00
dependabot[bot]
68505a81ed
chore(deps): bump goreleaser/goreleaser-action from 7.0.0 to 7.1.0 (#27454)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-20 08:20:54 +02:00
Pasha Kostohrys
3132b0de4f
chore: update Maintainers.md and move Pasha Kostohrys to octopus deploy org (#27450)
Co-authored-by: pasha <pasha.k@fyxt.com>
2026-04-19 20:33:04 -04:00
Michael Crenshaw
25b3037485
fix(ci): pnpm sbom generation (#27337) (#27339)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-04-19 07:53:06 -04:00
Jaewoo Choi
c3af4251d8
docs: add revert prefix to PR title documentation (#27439)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-04-19 07:51:48 -04:00
Kanika Rana
7f3ecfcf42
chore(ci): add revert to title checker config (#27424)
Signed-off-by: Kanika Rana <krana@redhat.com>
2026-04-19 10:09:11 +03:00
github-actions[bot]
8038e0ec96
[Bot] docs: Update Snyk report (#27438)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2026-04-19 09:41:10 +03:00
firas_mosbehi
6c1fd67558
test: add t.Parallel() to util jwt, crypto, and password tests (#27423) (#27432)
Signed-off-by: Firas Mosbehi <firas.mosbehi@insat.ucar.tn>
2026-04-18 18:11:38 -04:00
dependabot[bot]
d017512baa
chore(deps): bump github.com/moby/spdystream from 0.5.0 to 0.5.1 (#27401)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-18 16:13:36 +02:00
Prune Sebastien THOMAS
29fd8db39a
feat(appset): filtering repos by archived status #20736 (#21505)
Signed-off-by: Prune <prune@lecentre.net>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-17 17:32:18 +00:00
renovate[bot]
37e10dba75
chore(deps): update docker.io/library/registry:3.1 docker digest to 8a7c1aa (#27405)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-04-17 08:30:38 -04:00
dependabot[bot]
f3b803f284
chore(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.32.14 to 1.32.15 in the aws-sdk-v2 group (#27412)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-17 08:15:49 +02:00
Alexandre Gaudreault
4bc5d38634
docs: clarify selective sync and ApplyOutOfSyncOnly (#27393)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-16 22:40:35 -04:00
Tsung-Han Chang
4f47dd0afa
fix(rbac): resolve RBAC regression for project-scoped resources in multi-namespace architecture (#25289) (#26573)
Signed-off-by: tcfwbper <pesci861207@gmail.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-16 17:23:46 -04:00
Soumya Ghosh Dastidar
21615be541
fix: avoid stale informer cache in RevisionMetadata handler (#27392)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2026-04-16 12:34:21 -07:00
Soumya Ghosh Dastidar
8fbb72d1eb
fix: revert autosync event message format change (#27387)
Signed-off-by: Soumya Ghosh Dastidar <gdsoumya@gmail.com>
2026-04-16 17:47:23 +00:00
Alexandre Gaudreault
87d79f9392
fix(performance): add cache support for ResolveRevision to reduce Git operations (#27193)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-16 15:42:58 +00:00
Alexandre Gaudreault
4d2b6fa940
fix(hydrator): align dry source validation cache keys with hydrator (#27182)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-16 11:31:32 -04:00
Matthieu MOREL
dce3f6e8a5
chore: enable unnecessary-format rule from revive (#26958)
Signed-off-by: Matthieu MOREL <matthieu.morel35@gmail.com>
2026-04-16 11:21:56 -04:00
Allan Yung
9a19735918
feat: Support Azure Service Principal authentication for Azure DevOps repositories (#25324)
Signed-off-by: Allan Yung <allan.yung@bbdsoftware.com>
Co-authored-by: Dan Garfield <dan.garfield@octopus.com>
2026-04-16 11:16:47 -04:00
Alex Recuenco
6bf97ec1fd
refactor: Move NodeUpdateAnimation to functional from classes (#27382)
Signed-off-by: alexrecuenco <26118630+alexrecuenco@users.noreply.github.com>
2026-04-16 04:22:12 -10:00
dependabot[bot]
e6aa9059dd
chore(deps): bump sigs.k8s.io/structured-merge-diff/v6 from 6.3.2 to 6.4.0 (#27371)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 08:43:29 -04:00
dependabot[bot]
4f8f4d2e21
chore(deps): bump node from 20 to 24 (#23466)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-16 02:34:19 -10:00
Oliver Gondža
2ccc2ea466
chore(docs): Fix godoc in util/db/certificate.go (#27380)
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
2026-04-16 12:01:19 +00:00
argoproj-renovate[bot]
19219e06d2
chore(deps): update group node to v24 (major) (#25096)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-16 13:17:10 +02:00
Nitish Kumar
db7d672f05
feat(webhooks): add webhook support for GHCR (#26462)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-04-16 13:11:31 +02:00
Mangaal Meetei
04fa70c4a4
docs: Update the status of the feature, appset in any namespace, from beta to stable (#27353)
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
2026-04-16 13:38:09 +03:00
dependabot[bot]
3eb5104750
chore(deps): bump library/registry from afcd13f to b0f3668 in /test/container (#27374)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 07:56:30 +03:00
dependabot[bot]
1a195cc04f
chore(deps): bump library/ubuntu from cc925e5 to 5e27572 in /test/container (#27373)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 07:56:07 +03:00
dependabot[bot]
576002fb72
chore(deps): bump github/codeql-action from 4.35.1 to 4.35.2 (#27372)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 07:27:56 +03:00
dependabot[bot]
a216fdb8f4
chore(deps): bump github.com/aws/smithy-go from 1.24.3 to 1.25.0 (#27369)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 07:26:13 +03:00
dependabot[bot]
9cfce1df0e
chore(deps): bump step-security/harden-runner from 2.17.0 to 2.18.0 (#27370)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-16 07:25:11 +03:00
Karim Farid
30efe53bf2
fix(ui): OCI revision metadata never renders due to conflicting guard clause (#26948) (#27097)
Signed-off-by: Karim Zakzouk <karimzakzouk69@gmail.com>
Signed-off-by: Karim Farid <karimzakzouk69@gmail.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-15 22:51:21 +02:00
Atif Ali
706a0370c2
feat(ui): support creating multi-source applications in New App panel [CONTINUED..] (#27095)
Signed-off-by: Dave Canton <dvcanton7@gmail.com>
Signed-off-by: Atif Ali <atali@redhat.com>
Co-authored-by: Dave Canton <dvcanton7@gmail.com>
2026-04-15 10:47:43 -04:00
argoproj-renovate[bot]
ecc178f03e
chore(deps): update docker.io/library/golang:1.26.2 docker digest to 5f3787b (#27343)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-15 07:34:44 -04:00
dependabot[bot]
0a0cd0b687
chore(deps): bump library/golang from fcdb3e4 to 5f3787b in /test/container (#27347)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-15 07:30:00 -04:00
dependabot[bot]
ea3dae667e
chore(deps): bump library/golang from fcdb3e4 to 5f3787b in /test/remote (#27346)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-15 07:29:36 -04:00
Revital Barletz
bfc332d871
docs: clarify prune ordering for sync waves (#27352)
Signed-off-by: Revital Barletz <Revital.barletz@octopus.com>
2026-04-15 15:52:43 +05:30
Mirko Krause
67de02b1c4
docs: fix enumeration line breaks (#27333)
Signed-off-by: Mirko Krause <krause@codebase.one>
2026-04-15 12:17:36 +03:00
Sean Liao
a1af401f5f
docs: add Circle to USERS.md (#27349)
Signed-off-by: Sean Liao <sean.liao@circle.com>
2026-04-15 12:07:01 +03:00
Elton de Boer
3ce32a9880
fix(#25983): update theme default to auto (#25985)
Signed-off-by: Elton de Boer <elton@playgroundtech.io>
2026-04-14 20:31:35 -04:00
Jaewoo Choi
1bd0d48c82
fix(ui): add truncation and tooltip for long sync status branch names (#27260)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-04-14 17:22:49 -04:00
Nitish Kumar
6ba0727217
fix: improve error message when hydrateTo sync path does not exist yet (#27336)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-04-14 13:50:52 +00:00
dependabot[bot]
0c01fc895e
chore(deps): bump renovatebot/github-action from 46.1.8 to 46.1.9 (#27332)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-14 09:38:03 -04:00
renovate[bot]
7308ed98af
chore(deps): update actions/cache action to v5.0.5 (#27334)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-04-14 09:37:22 -04:00
dudinea
1a0f5d4ef2
docs: add AGENTS.md file to the repository (#27315) (#27316)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
Signed-off-by: dudinea <eugene.doudine@octopus.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-04-14 15:02:56 +03:00
dudinea
7445f7ed73
ci: harden-runner: whitelist get.helm.sh and registry.npmjs.org for renovate workflow (#27163) (#27328)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-04-14 07:54:10 +03:00
Michael Crenshaw
daadf868db
feat(health): additional promoter.argoproj.io health checks (#27170)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-04-13 17:09:21 -04:00
renovate[bot]
7accd34f64
chore(deps): update dependency eslint-config-prettier to v9.1.2 (#27323)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-04-13 09:38:20 -04:00
dependabot[bot]
0737418abb
chore(deps): bump library/golang from 2a2b4b5 to fcdb3e4 in /test/remote (#27307)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 15:08:50 +03:00
dependabot[bot]
0dd5e08d64
chore(deps): bump library/golang from 2a2b4b5 to fcdb3e4 in /test/container (#27309)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 14:50:03 +03:00
dependabot[bot]
d65af147d2
chore(deps): bump actions/upload-artifact from 7.0.0 to 7.0.1 (#27310)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 14:47:01 +03:00
dependabot[bot]
c9b2e4b359
chore(deps): bump softprops/action-gh-release from 2.6.1 to 3.0.0 (#27311)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 14:45:58 +03:00
dependabot[bot]
579fbab195
chore(deps): bump docker/build-push-action from 7.0.0 to 7.1.0 (#27312)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 14:24:51 +03:00
dependabot[bot]
c4f3e389a2
chore(deps): bump actions/create-github-app-token from 3.0.0 to 3.1.1 (#27313)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-13 14:24:15 +03:00
Blake Pettersson
bd823728ac
test: fix helm test flake (#27275)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-12 23:18:42 -10:00
AvivGuiser
de9416137d
feat: add action to restart StrimziPodSet (#27266)
Signed-off-by: AvivGuiser <avivguiser@gmail.com>
2026-04-13 09:54:06 +03:00
dependabot[bot]
73962555bb
chore(deps): bump the otel group across 1 directory with 2 updates (#27217)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-12 20:17:56 +03:00
dependabot[bot]
b2a8bc99e4
chore(deps): bump actions/setup-node from 4.4.0 to 6.3.0 (#27244)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-12 10:26:38 -04:00
Bram
cde9db8b29
docs: Add Car & Classic to USERS.md (#27297)
Signed-off-by: Bram <bram@ceulemans.dev>
2026-04-12 10:25:57 -04:00
argoproj-renovate[bot]
85913f797e
chore(deps): update docker.io/library/golang:1.26.2 docker digest to fcdb3e4 (#27296)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-12 17:05:31 +03:00
dependabot[bot]
25e0c38363
chore(deps): bump golang.org/x/net from 0.52.0 to 0.53.0 (#27272)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-04-12 12:43:12 +03:00
github-actions[bot]
28e13c3ec3
[Bot] docs: Update Snyk report (#27294)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2026-04-12 07:35:57 +00:00
dependabot[bot]
9cfbeb72f0
chore(deps): bump github.com/mattn/go-isatty from 0.0.20 to 0.0.21 (#27245)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-12 10:29:29 +03:00
Rishabh Pandey
62422a9c30
docs: Improve wording in contributing guide (#27295)
Signed-off-by: Rishabh Pandey <32699563+allexistence@users.noreply.github.com>
2026-04-12 10:28:13 +03:00
dependabot[bot]
c90b922522
chore(deps): bump golang.org/x/crypto from 0.49.0 to 0.50.0 (#27268)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-12 10:13:20 +03:00
dependabot[bot]
a98eba200e
chore(deps): bump pnpm/action-setup from 4.1.0 to 5.0.0 (#27246)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-12 10:11:25 +03:00
dependabot[bot]
170b89fe7b
chore(deps): bump library/redis from 970b561 to 1f07381 in /test/container (#27273)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 14:51:38 -04:00
Eron Wright
1dd9075a72
fix(settings): only trigger reload for app.kubernetes.io/part-of=argocd secrets (#27213)
Signed-off-by: Eron Wright <eron.wright@akuity.io>
2026-04-10 09:56:43 -07:00
dependabot[bot]
38a3826df8
chore(deps): bump github.com/Azure/kubelogin from 0.2.16 to 0.2.17 (#27269)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 08:13:25 +02:00
dependabot[bot]
cd8a25c195
chore(deps): bump step-security/harden-runner from 2.16.1 to 2.17.0 (#27271)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-10 08:12:53 +02:00
dependabot[bot]
7b5b6a8744
chore(deps-dev): bump @types/dagre from 0.7.42 to 0.7.54 in /ui (#27184)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-09 22:22:54 +02:00
dudinea
3a6083cb2d
chore(ci): Enable harden runner blocking mode for workflows - part 1 (#27163) (#27256)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-04-09 22:29:47 +03:00
dudinea
fb82b16b2d
fix(ci): run yarn install with --frozen-lockfile (#27098) (#27099)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-04-09 19:37:28 +03:00
Nikolaos Astyrakakis
ae10c0c6c3
fix(hook): Fixed hook code issues that caused stuck applications on "Deleting" state (Issues #18355 and #17191) (#26724)
Signed-off-by: Nikolaos Astyrakakis <nastyrakakis@gmail.com>
2026-04-09 05:19:38 -10:00
dependabot[bot]
9e80e058e7
chore(deps): bump library/golang from 1.26.1 to 1.26.2 in /test/container (#27248)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-09 17:52:21 +03:00
Eoin Shaughnessy
4220eddbf3
docs: fix typos (#27254)
Signed-off-by: Eoin Shaughnessy <eoinsh@gmail.com>
2026-04-09 16:21:45 +03:00
Alexander Matyushentsev
422ef230fa
fix: Revert "fix: avoid calling UpdateRevisionForPaths unnecessary (#25151)" (#27241)
2026-04-09 06:39:05 -04:00
renovate[bot]
1fde0d075f
chore(deps): update dependency formidable to v2.1.3 [security] (#27233)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-09 12:14:05 +02:00
dependabot[bot]
f86cd078fc
chore(deps): bump github.com/coreos/go-oidc/v3 from 3.17.0 to 3.18.0 (#27247)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-09 09:52:19 +02:00
Alexander Matyushentsev
c3b498c2ae
fix: cancel log stream goroutines on client disconnect (#27243)
Signed-off-by: Alexander Matyushentsev <alexander@akuity.io>
2026-04-09 12:53:34 +05:30
Alexander Matyushentsev
ad310c2452
feat: replace error message in webhook handler with metrics (#27215)
Signed-off-by: Alexander Matyushentsev <alexander@akuity.io>
2026-04-08 13:59:58 -07:00
Sean Liao
6743cdf9cc
chore: don't read current user git configs in tests (#27172)
Signed-off-by: Sean Liao <sean@liao.dev>
2026-04-08 10:53:50 -04:00
renovate[bot]
19983129f2
chore(deps): update docker.io/library/ubuntu:26.04 docker digest to cc925e5 (#27232)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-04-08 14:19:19 +00:00
renovate[bot]
880433f03b
chore(deps): update docker.io/library/golang:1.26.1 docker digest to cd78d88 (#27231)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-04-08 14:15:30 +00:00
argoproj-renovate[bot]
34b38428e9
chore(deps): update group golang to v1.26.2 (#27224)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-08 09:36:13 -04:00
Blake Pettersson
f5f3bf8a06
chore: migrate to pnpm (#23937)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-08 09:31:19 -04:00
Blake Pettersson
f1388674cc
chore: migrate to cluster informer (#27206)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-08 09:26:21 -04:00
Nitish Kumar
212f51d851
fix(sharding): fix log format verb and document intentional shard-0 fallback (#27222)
Signed-off-by: nitishfy <justnitish06@gmail.com>
Signed-off-by: Nitish Kumar <justnitish06@gmail.com>
Co-authored-by: Soumya Ghosh Dastidar <44349253+gdsoumya@users.noreply.github.com>
2026-04-08 09:25:45 -04:00
argoproj-renovate[bot]
d3b06f113f
chore(deps): update docker.io/library/golang:1.26.1 docker digest to cd78d88 (#27214)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-08 07:02:25 -04:00
dependabot[bot]
86fcb1447f
chore(deps): bump library/golang from 1.26.1 to 1.26.2 in /test/remote (#27216)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-08 07:01:59 -04:00
dependabot[bot]
12b241a56e
chore(deps): bump library/redis from 009cc37 to 970b561 in /test/container (#27218)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-08 07:01:40 -04:00
Blake Pettersson
a2b91ce309
feat: add depth option to ui (#26618)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-08 15:29:08 +05:30
Atif Ali
57dfe55e70
fix: prevent automatic refreshes from informer resync and status updates (#25290)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-04-07 18:25:51 -04:00
Alexandre Gaudreault
8c29202f1c
fix(hydrator): fix race condition in status update with hydrate annotation (#27183)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-07 17:01:04 -04:00
Linghao Su
8e2571fdca
fix(ui): handle 401 error in stream (#26917)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
2026-04-07 12:04:46 -07:00
Michael Crenshaw
21b826e204
docs: Revise vulnerability reporting and remove bounty details (#27212)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-04-07 14:23:42 -04:00
argoproj-renovate[bot]
047c0ae734
chore(deps): update docker.io/library/golang:1.26.1 docker digest to 5e69504 (#27211)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-07 15:14:09 +00:00
Michael Crenshaw
4c42071c7b
fix(ci): openssf scorecard doesn't allow global vars (#27203)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-04-07 11:01:10 -04:00
argoproj-renovate[bot]
6d3e641cca
chore(deps): update docker.io/library/golang:1.26.1 docker digest to 42ebbf7 (#27205)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-04-07 10:14:42 -04:00
dependabot[bot]
884ba71afc
chore(deps): bump library/registry from 3.0 to 3.1 in /test/container (#27201)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 10:14:23 -04:00
Vikas Rao
48f18e2905
feat: add toggle-auto-sync resource action for Application (#21564) (#26477)
Signed-off-by: vikasrao23 <vikasrao23@users.noreply.github.com>
Co-authored-by: vikasrao23 <vikasrao23@users.noreply.github.com>
2026-04-07 06:12:32 -04:00
Vedant Mhatre
b8da88a288
docs: clarify Helm hook delete-policy semantics (#26828)
Signed-off-by: Vedant-Mhatre <vedantmh@gmail.com>
2026-04-07 12:56:58 +03:00
dependabot[bot]
7262e61704
chore(deps): bump step-security/harden-runner from 2.16.0 to 2.16.1 (#27202)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-07 04:14:06 +00:00
dudinea
364bd00647
chore(ci): add Step Security Harden Runner to workflows in audit mode (#27168)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-04-06 23:00:50 -04:00
Regina Voloshin
9a05e0e7f3
fix(ui): placate sonar with adding compare function for repo path sort autocomplete (#26906)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-04-07 01:09:30 +02:00
dependabot[bot]
71da5f64ba
chore(deps): bump actions/create-github-app-token from 1.12.0 to 2.1.1 (#24360)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 21:56:29 +00:00
dependabot[bot]
9f723393e8
chore(deps): bump the otel group across 1 directory with 4 updates (#27174)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 21:53:42 +00:00
dependabot[bot]
ba4d2a2104
chore(deps): bump google.golang.org/grpc from 1.79.3 to 1.80.0 (#27119)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 17:14:07 -04:00
dependabot[bot]
0e4f7c857d
chore(deps): bump github.com/itchyny/gojq from 0.12.18 to 0.12.19 (#27118)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 17:13:20 -04:00
dependabot[bot]
bb66ffe0fa
chore(deps): bump github.com/google/go-jsonnet from 0.21.0 to 0.22.0 (#26992)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 17:12:55 -04:00
Sean Liao
45a32a5c32
fix: use unique names for initial commits (#27171)
Signed-off-by: Sean Liao <sean@liao.dev>
2026-04-06 19:44:31 +00:00
Alexandre Gaudreault
f298f4500f
fix(hydrator): preserve all source type fields in GetDrySource() (#27189)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-04-06 18:46:43 +00:00
dependabot[bot]
86a245c8bc
chore(deps): bump lodash from 4.17.23 to 4.18.1 in /ui (#27135)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 14:44:22 -04:00
dependabot[bot]
3c47518db4
chore(deps): bump lodash-es from 4.17.23 to 4.18.1 in /ui (#27120)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 14:43:59 -04:00
S Kevin Joe Harris
5a11160e9c
chore(deps): Bump create-github-app-token to v3 (#27191)
Signed-off-by: Kevin Joe Harris <kevinjoeharris1@gmail.com>
2026-04-06 13:04:53 -04:00
dependabot[bot]
721a7e722e
chore(deps): bump github.com/ktrysmt/go-bitbucket from 0.9.94 to 0.9.95 (#27175)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 12:14:34 -04:00
S Kevin Joe Harris
7af68d277f
chore(deps): bump github/codeql-action bundle to v4.35.1 (#27068)
Signed-off-by: Kevin Joe Harris <kevinjoeharris1@gmail.com>
2026-04-06 15:14:08 +00:00
dependabot[bot]
54f9cf08e4
chore(deps): bump the aws-sdk-v2 group with 3 updates (#27186)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 16:52:29 +02:00
dependabot[bot]
719ac073d8
chore(deps): bump docker/login-action from 4.0.0 to 4.1.0 (#27138)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 10:29:55 -04:00
dependabot[bot]
fc03869180
chore(deps-dev): bump webpack from 5.94.0 to 5.105.4 in /ui (#27185)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 10:27:54 -04:00
dependabot[bot]
bb2cfd9553
chore(deps): bump renovatebot/github-action from 46.1.7 to 46.1.8 (#27176)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-06 10:26:42 -04:00
Blake Pettersson
43d94f2b55
chore: group aws sdk v2 prs (#27179)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-04-06 16:59:41 +03:00
Regina Voloshin
321153a69e
docs: Update releasing.md with handling a failed release (#26049)
Signed-off-by: Regina Voloshin <regina.voloshin@codefresh.io>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2026-04-06 16:16:14 +05:30
T.dev :)
873f63aa0d
chore: align Go version to 1.26.1 across repository (#27112)
Signed-off-by: T.dev :) <120010745+thev1ndu@users.noreply.github.com>
2026-04-05 10:55:46 -04:00
dependabot[bot]
b018313aec
chore(deps): bump library/ubuntu from 730382b to a072b64 in /test/container (#27137)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-05 10:26:20 -04:00
Oliver Gondža
d449294f03
fix(docs): Fix manifest path in Source Hydrator docs (#27123)
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
2026-04-05 10:14:38 -04:00
github-actions[bot]
908ce7ee49
[Bot] docs: Update Snyk report (#27162)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2026-04-05 01:04:54 +00:00
Rohan Sood
68cbd05e52
fix: Add X-Frame-Options and CSP headers to Swagger UI endpoints (#26521)
Signed-off-by: rohansood10 <rohansood10@users.noreply.github.com>
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: rohansood10 <rohansood10@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-04-04 00:58:32 +00:00
dependabot[bot]
e21d471965
chore(deps): bump picomatch from 2.3.1 to 2.3.2 in /ui (#27017)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-04 01:37:57 +02:00
dependabot[bot]
04e4e080df
chore(deps): bump flatted from 3.3.1 to 3.4.2 in /ui (#26928)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-04 01:37:09 +02:00
dependabot[bot]
0c4946f12f
chore(deps): bump minimatch from 3.1.3 to 3.1.4 in /ui (#26641)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-04 01:33:37 +02:00
dependabot[bot]
88663928f6
chore(deps): bump github.com/go-jose/go-jose/v3 from 3.0.1 to 3.0.5 (#27142)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-03 20:08:58 +02:00
dependabot[bot]
5c03a8b37d
chore(deps): bump github.com/aws/smithy-go from 1.24.2 to 1.24.3 (#27141)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-03 20:08:32 +02:00
Seitaro Fujigaki
490f02116c
docs: fix submit-your-pr rebase target to upstream/master (#27144)
Signed-off-by: seitarof <pyotarou@icloud.com>
2026-04-03 20:07:07 +02:00
Seitaro Fujigaki
82789b7071
refactor: use new(expr) for pointer literals in Go 1.26 (#27143)
Signed-off-by: seitarof <pyotarou@icloud.com>
2026-04-03 10:57:32 -04:00
dependabot[bot]
5fa0045311
chore(deps): bump SonarSource/sonarqube-scan-action from 7.0.0 to 7.1.0 (#27116)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-02 15:47:38 -04:00
dependabot[bot]
44e08631f2
chore(deps): bump library/ubuntu from 91832dc to 730382b in /test/container (#27117)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-02 15:47:21 -04:00
Leonardo Luz Almeida
62670d6595
fix: address SSD applier nil pointer in error cases (#27126)
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2026-04-02 14:38:28 +00:00
Kanika Rana
fabbbbe6ee
test(e2e): add e2e tests for reverse deletionOrder when progressive sync enabled (#26673)
Signed-off-by: Kanika Rana <krana@redhat.com>
2026-04-02 10:23:38 -04:00
Alexy Mantha
3eebbcb33b
feat: use impersonation for server operations (logs, delete, etc) #22996 (#26898)
Signed-off-by: Alexy Mantha <alexy@mantha.dev>
2026-04-02 05:48:29 -04:00
OpenGuidou
4259f467b0
fix(server): Ensure OIDC config is refreshed at server restart (#26913)
Signed-off-by: OpenGuidou <guillaume.doussin@gmail.com>
2026-04-01 17:54:22 -07:00
rumstead
32f23a446f
fix(controller): reduce secret deepcopies and deserialization (#27049)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
2026-04-01 16:48:36 -04:00
Ville Vesilehto
5101db5225
chore(deps): migrate to go.yaml.in/yaml/v3 (#27063)
Signed-off-by: Ville Vesilehto <ville@vesilehto.fi>
2026-04-01 16:34:18 +02:00
dependabot[bot]
a5073f1ecc
chore(deps): bump github.com/go-jose/go-jose/v4 from 4.1.3 to 4.1.4 (#27101)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 10:15:25 -04:00
dependabot[bot]
bd1cccfb9a
chore(deps): bump github.com/yuin/gopher-lua from 1.1.1 to 1.1.2 (#27100)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-04-01 10:14:34 -04:00
Mangaal Meetei
0e729cce34
feat(cli): add appset-namespace for appset command (#27022)
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
2026-04-01 13:37:33 +03:00
Nitish Kumar
fb1b240c9e
docs: add missing content for Automatic Retry with a limit section (#27092)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-03-31 20:04:10 +02:00
Anand Francis Joseph
c52bf66380
fix(appcontroller): application controller in core mode fails to sync when server.secretkey is missing (#26793)
Signed-off-by: anandf <anjoseph@redhat.com>
2026-03-31 13:26:11 -04:00
Jaewoo Choi
e00345bff7
docs: replace resource_hooks links with sync-waves (#26187)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-03-31 19:37:42 +03:00
Oliver Gondža
c3c12c1cad
fix(commitserver): Static analysis fixes (#27085)
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
2026-03-31 15:15:04 +02:00
Dan Garfield
e96063557a
fix(docs): Fix formatting and clarity about requestedScopes in Keycloak integration docs (#27019)
Signed-off-by: Dan Garfield <dan.garfield@octopus.com>
Signed-off-by: Dan Garfield <dan@codefresh.io>
2026-03-31 12:44:29 +03:00
S Kevin Joe Harris
bfe5cfb587
chore: New gif for docs (#27081)
Signed-off-by: Kevin Joe Harris <kevinjoeharris1@gmail.com>
2026-03-31 11:31:20 +03:00
Blake Pettersson
393152ddad
fix: pass repo.insecure flag to helm dependency build (#27078)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-30 15:40:21 -07:00
Max Verbeek
1042e12c6a
fix: force attempt http2 with custom tls config (#26975) (#26976)
Signed-off-by: Max Verbeek <m4xv3rb33k@gmail.com>
2026-03-30 16:39:56 +02:00
dependabot[bot]
0191c1684d
chore(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.32.11 to 1.32.12 (#26844)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 09:05:43 -04:00
dependabot[bot]
ab0070994b
chore(deps): bump renovatebot/github-action from 46.1.6 to 46.1.7 (#27065)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 00:50:19 -04:00
dependabot[bot]
da7a61b75c
chore(deps): bump actions/setup-go from 6.3.0 to 6.4.0 (#27066)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 00:49:15 -04:00
dependabot[bot]
a892317c67
chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.19.12 to 1.19.13 (#27032)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-30 00:02:24 +00:00
dependabot[bot]
303e001b8b
chore(deps): bump codecov/codecov-action from 5.5.4 to 6.0.0 (#27030)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-29 23:38:06 +00:00
dependabot[bot]
d75a6b1523
chore(deps): bump sigstore/cosign-installer from 4.1.0 to 4.1.1 (#27031)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-29 23:15:44 +00:00
Navneet Shahi
1dc2ad04ff
feat: add health check for karpenter.sh/NodeClaim (#26876)
Signed-off-by: Navneet Shahi <navneetshahi345@gmail.com>
2026-03-29 16:04:39 -04:00
renovate[bot]
9ceaf0e8ee
chore(deps): update actions/create-github-app-token action to v2.2.2 (#27034)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-29 15:59:11 -04:00
dependabot[bot]
6a22728fd5
chore(deps): bump github.com/aws/aws-sdk-go-v2/service/codecommit from 1.33.11 to 1.33.12 (#27035)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-29 15:58:44 -04:00
dependabot[bot]
0c02de795e
chore(deps): bump github.com/aws/aws-sdk-go-v2/service/sts from 1.41.9 to 1.41.10 (#27037)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-29 15:58:12 -04:00
renovate[bot]
8e0b6e689a
chore(deps): update codecov/codecov-action action to v5.5.4 (#27038)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-29 15:57:06 -04:00
Regina Voloshin
5aa83735f2
ci: pin images of setup-qemu-action, setup-buildx-action and goreleaser CLI version (#27060)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-03-29 15:56:28 -04:00
dudinea
36f4ff7f35
fix(ci): pin goreman version used in ci-build.yaml (#27062) (#27061)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-03-29 15:55:19 -04:00
dudinea
99c51dfd2c
fix(ci): renovatebot action uses floating image tag (#27023) (#27024)
Signed-off-by: Eugene Doudine <eugene.doudine@octopus.com>
2026-03-29 16:29:54 +03:00
github-actions[bot]
a4c7f82c5b
[Bot] docs: Update Snyk report (#27056)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2026-03-29 10:51:31 +00:00
Atif Ali
759e746e87
fix: invalid URL or protocol not validated consistently by server and UI (#27052)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-03-27 23:15:03 -04:00
Christopher Coco
94d8ba92a8
docs: update cosign install and docs links (#27042)
Signed-off-by: Christopher Coco <ccoco@redhat.com>
2026-03-27 22:55:05 +01:00
Pratik Lawate
b532528a0b
docs: fix README GitHub branding in community section (#27050)
Signed-off-by: Pratik Lawate <39809928+pratik268@users.noreply.github.com>
2026-03-27 21:28:22 +01:00
Alexander Matyushentsev
8705f6965e
Merge pull request #27029 from pierluigilenoci/fix/honor-stderrthreshold
fix: honor stderrthreshold when logtostderr is enabled
2026-03-27 18:23:35 +01:00
Alexander Matyushentsev
4aeca2fbf8
Merge pull request #27001 from leoluz/fix-ssd-npe
fix: address nil pointer when SSD returns error
2026-03-27 18:16:04 +01:00
Jonathan Ogilvie
2bbf91c0cf
fix: improve perf: switch parentUIDToChildren to map of sets, remove cache rebuild (#26863) (#26864)
Signed-off-by: Jonathan Ogilvie <jonathan.ogilvie@sumologic.com>
Signed-off-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
2026-03-27 09:17:36 -04:00
Pierluigi Lenoci
84442e03bc
Honor stderrthreshold when logtostderr is enabled
Opt into the fixed klog behavior by setting legacy_stderr_threshold_behavior=false
after klog.InitFlags(). Ref: kubernetes/klog#212, kubernetes/klog#432

Signed-off-by: Pierluigi Lenoci <pierluigi.lenoci@gmail.com>
2026-03-27 00:18:54 +01:00
Zach Aller
f97e2d2844
fix: wrong installation id returned from cache (#26969)
Signed-off-by: Zach Aller <zach_aller@intuit.com>
2026-03-26 16:57:21 -04:00
dependabot[bot]
e972bfca78
chore(deps): bump yaml from 1.10.2 to 1.10.3 in /ui (#27015)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-26 10:42:29 -04:00
Jaewoo Choi
1b405ce2b5
feat(ui): search filter by target revision (#24038)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-03-26 12:17:33 +02:00
Jaewoo Choi
45b926d796
fix(ui): show clear-all button for annotation-only filters (#26937)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-03-26 12:08:13 +02:00
dependabot[bot]
d4ec3282d4
chore(deps): bump library/redis from 8.6.1 to 8.6.2 in /test/container (#26991)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-25 12:56:16 -04:00
dependabot[bot]
4e3904a554
chore(deps): bump library/busybox from b3255e7 to 1487d0a in /test/e2e/multiarch-container (#26990)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-25 12:54:35 -04:00
Alexander Matyushentsev
8981a5b855
fix: controller incorrectly detecting diff during app normalization (#27002)
Signed-off-by: Alexander Matyushentsev <alexander@akuity.io>
2026-03-25 12:32:07 +00:00
Leonardo Luz Almeida
ab27dd3ccf
fix: address nil pointer when SSD returns error
Signed-off-by: Leonardo Luz Almeida <leonardo_almeida@intuit.com>
2026-03-25 12:34:32 +01:00
Regina Voloshin
269e0b850b
fix: Hook resources not created at PostSync when configured with PreDelete PostDelete hooks (#26996)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-03-25 12:09:14 +02:00
Nolan Emirot
3f15cc6c9e
chore: fix bad indentation (#26989)
Signed-off-by: emirot <emirot.nolan@gmail.com>
2026-03-24 18:58:21 -04:00
Alberto Chiusole
25df43d7a0
fix(ui): Improve message on self-healing disabling panel (#26977) (#26978)
Signed-off-by: Alberto Chiusole <chiusole@seqera.io>
2026-03-24 15:50:31 +02:00
dependabot[bot]
6b35246605
chore(deps): bump library/golang from c7e98cc to 595c784 in /test/container (#26960)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 12:51:02 -04:00
dependabot[bot]
bd7b16cbeb
chore(deps): bump renovatebot/github-action from 46.1.5 to 46.1.6 (#26961)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-23 07:26:22 -04:00
argoproj-renovate[bot]
e1bb509264
chore(deps): update module github.com/golangci/golangci-lint/v2 to v2.11.4 (#26957)
Signed-off-by: renovate[bot] <renovate[bot]@users.noreply.github.com>
Co-authored-by: argoproj-renovate[bot] <161757507+argoproj-renovate[bot]@users.noreply.github.com>
2026-03-22 16:14:59 -04:00
dancer13
3570031fa8
docs: fix typo in metrics (#26951)
Signed-off-by: dancer13 <alfredotic0809@gmail.com>
2026-03-22 03:01:26 -06:00
github-actions[bot]
3eee5e3f52
[Bot] docs: Update Snyk report (#26950)
Signed-off-by: CI <ci@argoproj.com>
Co-authored-by: CI <ci@argoproj.com>
2026-03-22 08:03:59 +00:00
Oliver Gondža
77732d89b3
docs: Formatting and style for source-hydrator.md (#26949)
Signed-off-by: Oliver Gondža <ogondza@gmail.com>
2026-03-21 10:35:47 +01:00
Honarkhah
4aabf526c8
fix: typo in error message for multi-source apps (#26936)
Signed-off-by: Honarkhah <m.honar@gmail.com>
2026-03-20 10:55:47 -04:00
dependabot[bot]
24c3abd8dd
chore(deps): bump library/ubuntu from 5798086 to 91832dc in /test/container (#26930)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-20 09:57:04 -04:00
Linghao Su
91d83d37c4
fix(server): fix find container logic for terminal (#26858)
Signed-off-by: linghaoSu <linghao.su@daocloud.io>
2026-03-19 23:37:39 -10:00
dependabot[bot]
aabe8524ba
chore(deps): bump library/redis from a019c00 to 315270d in /test/container (#26902)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 12:56:40 -04:00
Papapetrou Patroklos
fe30b2c60a
fix: trigger app sync on app-set spec change (#26811)
Signed-off-by: Patroklos Papapetrou <ppapapetrou76@gmail.com>
2026-03-19 10:31:07 +00:00
dependabot[bot]
148c86ad42
chore(deps): bump actions/cache from 5.0.3 to 5.0.4 (#26901)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Regina Voloshin <regina.voloshin@codefresh.io>
2026-03-19 07:31:08 +00:00
dependabot[bot]
30db355197
chore(deps): bump codecov/codecov-action from 5.5.2 to 5.5.3 (#26900)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 08:31:09 +02:00
Andy Lo-A-Foe
442aed496f
fix: prevent panic on nil APIResource in permission validator (#26610)
Signed-off-by: Andy Lo-A-Foe <andy.loafoe@gmail.com>
2026-03-18 14:27:24 -04:00
Blake Pettersson
87ccebc51a
chore(ci): remove cherry-pick branch if already present (#26881)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 14:23:20 -04:00
dependabot[bot]
20439902eb
chore(deps): bump google.golang.org/grpc from 1.79.2 to 1.79.3 (#26886)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 17:43:25 +00:00
Michael Crenshaw
559da44135
chore(deps): bump Helm to 3.20.1 (#26896)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-03-18 13:42:58 -04:00
Blake Pettersson
a87aab146e
chore(ci): attempt to make test less flaky (#26890)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 13:02:42 -04:00
Andrea Matera
d34e83f60c
chore: add Mollie to USERS.md (#26895)
Signed-off-by: Andrea Matera <andrea.matera@mollie.com>
2026-03-18 15:13:44 +00:00
Michael Crenshaw
566c172058
feat(ui): add GitOps Promoter resource icon (#26894)
Signed-off-by: Michael Crenshaw <350466+crenshaw-dev@users.noreply.github.com>
2026-03-18 10:52:20 -04:00
dependabot[bot]
d80a122502
chore(deps): bump library/ubuntu from fed6ddb to 5798086 in /test/container (#26887)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 14:38:29 +00:00
Ekamveer Walia
539c35b295
docs: fix incorrect wording for ApplicationSets in other namespaces (#26893)
2026-03-18 13:44:10 +00:00
Blake Pettersson
45a84dfa38
fix(ci): add .gitkeep to images dir (#26892)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-18 09:37:18 -04:00
Mangaal Meetei
d011b7b508
fix: Bitbucket webhook diffstat does not work with upper case repo slug (#26594)
Signed-off-by: Mangaal <angommeeteimangaal@gmail.com>
2026-03-18 07:50:32 -04:00
Huynh Duc Tran
f1b922765d
chore: add Techcom Securities to USERS.md (#26889)
Signed-off-by: Tran Huynh Duc <duchuynhtran12a1@gmail.com>
Signed-off-by: Duck <duchuynhtran12a1@gmail.com>
2026-03-18 15:22:26 +05:30
Jaewoo Choi
4b4bbc8bb2
fix(ui): include _-prefixed dirs in embedded assets (#26589)
Signed-off-by: choejwoo <jaewoo45@gmail.com>
2026-03-17 16:55:20 -06:00
Atif Ali
c5d1c914bb
fix(UI): show RollingSync step clearly when labels match no step (#26877)
Signed-off-by: Atif Ali <atali@redhat.com>
2026-03-17 23:05:29 +01:00
dependabot[bot]
59aea0476a
chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.19.11 to 1.19.12 (#26840)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-17 23:01:40 +01:00
Nitish Kumar
4cdc650a58
feat(helm): support wildcard glob patterns for valueFiles (#26768)
Signed-off-by: nitishfy <justnitish06@gmail.com>
2026-03-17 21:37:43 +00:00
Blake Pettersson
2b6489828b
chore: allow multiple signoff lines (#26875)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-17 21:06:28 +00:00
Alexander Matyushentsev
92c3ef2559
fix: avoid scanning symlinks in whole repo on each app manifest operation (#26718)
Signed-off-by: Alexander Matyushentsev <AMatyushentsev@gmail.com>
2026-03-17 13:40:16 -07:00
Alexandre Gaudreault
4070b6feea
docs: add warning in orphan resource doc (#26874)
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-03-17 12:54:01 -04:00
Jonathan Ogilvie
67db597810
fix: stack overflow when processing circular ownerrefs in resource graph (#26783) (#26790)
Signed-off-by: Jonathan Ogilvie <jonathan.ogilvie@sumologic.com>
Signed-off-by: Jonathan Ogilvie <679297+jcogilvie@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-03-17 12:03:23 -04:00
rumstead
5b3073986f
feat(appset): add concurrency when managing applications (#26642)
Signed-off-by: rumstead <37445536+rumstead@users.noreply.github.com>
Signed-off-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
Co-authored-by: Alexandre Gaudreault <alexandre_gaudreault@intuit.com>
2026-03-17 15:04:11 +00:00
Kit Dallege
5ceb8354e6
docs: add orphaned resources FAQ entry (#26833)
Signed-off-by: kovan <xaum.io@gmail.com>
2026-03-17 10:53:24 -04:00
S Kevin Joe Harris
79922c06d6
ci: Improve Go build timing with effective caching (#26628)
Signed-off-by: Kevin Joe Harris <kevinjoeharris1@gmail.com>
Co-authored-by: Nitish Kumar <justnitish06@gmail.com>
2026-03-17 20:12:09 +05:30
Sinhyeok Seo
382c507beb
fix(server): Cache glob patterns to improve RBAC evaluation performance (#25759)
Signed-off-by: Sinhyeok Seo <sinhyeok@gmail.com>
Signed-off-by: Sinhyeok Seo <44961659+Sinhyeok@users.noreply.github.com>
2026-03-17 10:22:23 -04:00
dependabot[bot]
8142920ab8
chore(deps): bump library/redis from 1c054d5 to a019c00 in /test/container (#26865)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-17 07:08:27 -04:00
Blake Pettersson
47a0746851
chore(renovate): group aws-sdk-v2-updates (#26848)
Signed-off-by: Blake Pettersson <blake.pettersson@gmail.com>
2026-03-17 11:39:00 +02:00
Regina Voloshin
13cd517470
docs: move releases to Tuesdays (#26859)
Signed-off-by: reggie-k <regina.voloshin@codefresh.io>
2026-03-16 18:27:47 +02:00
Christopher Coco
63a009effa
fix(test): make fail message better for TestAuthReconcileWithMissingNamespace (#26856)
Signed-off-by: Christopher Coco <ccoco@redhat.com>
2026-03-16 03:13:40 -10:00
github-actions[bot]
5a6c83229b
chore: Bump version in master (#26855)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: reggie-k <19544836+reggie-k@users.noreply.github.com>
2026-03-16 14:44:45 +02:00
dependabot[bot]
f409135f17
chore(deps): bump softprops/action-gh-release from 2.5.0 to 2.6.1 (#26838)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 13:00:58 +02:00
493 changed files with 992756 additions and 37970 deletions


@@ -11,6 +11,7 @@ module.exports = {
"github>argoproj/argo-cd//renovate-presets/custom-managers/yaml.json5",
"github>argoproj/argo-cd//renovate-presets/fix/disable-all-updates.json5",
"github>argoproj/argo-cd//renovate-presets/devtool.json5",
"github>argoproj/argo-cd//renovate-presets/docs.json5"
"github>argoproj/argo-cd//renovate-presets/docs.json5",
"group:aws-sdk-go-v2Monorepo"
]
}


@@ -8,6 +8,9 @@ updates:
ignore:
- dependency-name: k8s.io/*
groups:
aws-sdk-v2:
patterns:
- "github.com/aws/aws-sdk-go-v2*"
otel:
patterns:
- "go.opentelemetry.io/*"


@@ -5,7 +5,7 @@
},
"CHECKS": {
"prefixes": ["[Bot] docs: "],
"regexp": "^(refactor|feat|fix|docs|test|ci|chore)!?(\\(.*\\))?!?:.*"
"regexp": "^(refactor|feat|fix|docs|test|ci|chore|revert)!?(\\(.*\\))?!?:.*"
},
"MESSAGES": {
"success": "PR title is valid",
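The regex change above extends the allowed PR title types with `revert`. A minimal sketch of how the pattern behaves, assuming `grep -E` semantics approximate the JS regex the title check uses (the `check_title` helper is hypothetical):

```shell
# Updated title pattern, now including "revert" as a valid type.
pattern='^(refactor|feat|fix|docs|test|ci|chore|revert)!?(\(.*\))?!?:.*'

# Prints "valid" if a title matches the pattern, "invalid" otherwise.
check_title() {
  printf '%s' "$1" | grep -Eq "$pattern" && echo valid || echo invalid
}

check_title 'revert: undo flaky change'   # valid (newly allowed type)
check_title 'feat(ui): add depth option'  # valid (scoped type)
check_title 'update readme'               # invalid (no type prefix)
```

The optional `!?` on either side of the scope means breaking-change markers like `chore!:` also pass.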


@@ -4,6 +4,10 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
prepare-release:
permissions:
@@ -12,6 +16,12 @@ jobs:
name: Automatically update major version
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@@ -37,7 +47,7 @@ jobs:
working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Add ~/go/bin to PATH
@@ -86,4 +96,4 @@ jobs:
- [ ] Add an upgrade guide to the docs for this version
branch: bump-major-version
branch-suffix: random
signoff: true
signoff: true

@ -25,14 +25,24 @@ on:
CHERRYPICK_APP_PRIVATE_KEY:
required: true
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
cherry-pick:
name: Cherry Pick to ${{ inputs.version_number }}
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Generate a token
id: generate-token
uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
uses: actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3.1.1
with:
app-id: ${{ secrets.CHERRYPICK_APP_ID }}
private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
@ -66,6 +76,7 @@ jobs:
# Create new branch for cherry-pick
CHERRY_PICK_BRANCH="cherry-pick-${{ inputs.pr_number }}-to-${TARGET_BRANCH}"
git checkout -b "$CHERRY_PICK_BRANCH" "origin/$TARGET_BRANCH"
# Perform cherry-pick
@ -75,12 +86,17 @@ jobs:
# Extract Signed-off-by from the cherry-pick commit
SIGNOFF=$(git log -1 --pretty=format:"%B" | grep -E '^Signed-off-by:' || echo "")
# Push the new branch
git push origin "$CHERRY_PICK_BRANCH"
# Push the new branch. Force push so that, if the original cherry-pick branch is stale,
# $CHERRY_PICK_BRANCH ends up with the current state of $TARGET_BRANCH plus the cherry-pick.
git push origin -f "$CHERRY_PICK_BRANCH"
# Save data for PR creation
echo "branch_name=$CHERRY_PICK_BRANCH" >> "$GITHUB_OUTPUT"
echo "signoff=$SIGNOFF" >> "$GITHUB_OUTPUT"
{
echo "signoff<<EOF"
echo "$SIGNOFF"
echo "EOF"
} >> "$GITHUB_OUTPUT"
echo "target_branch=$TARGET_BRANCH" >> "$GITHUB_OUTPUT"
else
echo "❌ Cherry-pick failed due to conflicts"

@ -6,6 +6,10 @@ on:
- master
types: ["labeled", "closed"]
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
find-labels:
name: Find Cherry Pick Labels
@ -18,6 +22,12 @@ jobs:
outputs:
labels: ${{ steps.extract-labels.outputs.labels }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Extract cherry-pick labels
id: extract-labels
run: |
@ -50,4 +60,4 @@ jobs:
pr_title: ${{ github.event.pull_request.title }}
secrets:
CHERRYPICK_APP_ID: ${{ vars.CHERRYPICK_APP_ID }}
CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}

@ -14,7 +14,9 @@ on:
env:
# Golang version to use across CI steps
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.26.0'
GOLANG_VERSION: '1.26.2'
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@ -31,8 +33,13 @@ jobs:
frontend: ${{ steps.filter.outputs.frontend_any_changed }}
docs: ${{ steps.filter.outputs.docs_any_changed }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
- uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
id: filter
with:
# Any file that is not under docs/ or ui/ and is not a markdown file is counted as a backend file
@ -54,10 +61,15 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Download all Go modules
@ -74,18 +86,27 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
- name: Download all Go modules
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Download Go modules
run: |
go mod download
- name: Compile all packages
@ -101,17 +122,22 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Run golangci-lint
uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
with:
# renovate: datasource=go packageName=github.com/golangci/golangci-lint/v2 versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
version: v2.11.3
version: v2.11.4
args: --verbose
test-go:
@ -125,6 +151,11 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
@ -132,7 +163,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@ -151,11 +182,15 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@ -167,13 +202,13 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download and vendor all required packages
- name: Download Go modules
run: |
go mod download
- name: Run all unit tests
run: make test-local
- name: Generate test results artifacts
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: test-results
path: test-results
@ -189,6 +224,11 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Create checkout directory
run: mkdir -p ~/go/src/github.com/argoproj
- name: Checkout code
@ -196,7 +236,7 @@ jobs:
- name: Create symlink in GOPATH
run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Install required packages
@ -215,11 +255,15 @@ jobs:
- name: Add /usr/local/bin to PATH
run: |
echo "/usr/local/bin" >> $GITHUB_PATH
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@ -231,13 +275,13 @@ jobs:
run: |
git config --global user.name "John Doe"
git config --global user.email "john.doe@example.com"
- name: Download and vendor all required packages
- name: Download Go modules
run: |
go mod download
- name: Run all unit tests
run: make test-race-local
- name: Generate test results artifacts
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: race-results
path: test-results/
@ -249,10 +293,15 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Create symlink in GOPATH
@ -306,26 +355,31 @@ jobs:
needs:
- changes
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
with:
package_json_file: ui/package.json
- name: Setup NodeJS
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '22.9.0'
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
node-version: '24.14.1'
cache: 'pnpm'
cache-dependency-path: '**/pnpm-lock.yaml'
- name: Install node dependencies
run: |
cd ui && yarn install --frozen-lockfile --ignore-optional --non-interactive
cd ui && pnpm i --frozen-lockfile
- name: Build UI code
run: |
yarn test
yarn build
pnpm test
pnpm build
env:
NODE_ENV: production
NODE_ONLINE_ENV: online
@ -334,7 +388,7 @@ jobs:
CODECOV_TOKEN: ${{ github.ref == 'refs/heads/master' && secrets.CODECOV_TOKEN || '' }}
working-directory: ui/
- name: Run ESLint
run: yarn lint
run: pnpm lint
working-directory: ui/
shellcheck:
@ -359,19 +413,15 @@ jobs:
sonar_secret: ${{ secrets.SONAR_TOKEN }}
codecov_secret: ${{ secrets.CODECOV_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
- name: Restore node dependency cache
id: cache-dependencies
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
with:
path: ui/node_modules
key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
- name: Remove other node_modules directory
run: |
rm -rf ui/node_modules/argo-ui/node_modules
- name: Get e2e code coverage
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
@ -392,7 +442,7 @@ jobs:
- name: Upload code coverage information to codecov.io
# Only run when the workflow is for upstream (PR target or push is in argoproj/argo-cd).
if: github.repository == 'argoproj/argo-cd'
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
files: test-results/full-coverage.out
fail_ci_if_error: true
@ -401,7 +451,7 @@ jobs:
- name: Upload test results to Codecov
# Codecov uploads test results to Codecov.io on upstream master branch.
if: github.repository == 'argoproj/argo-cd' && github.ref == 'refs/heads/master' && github.event_name == 'push'
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
with:
files: test-results/junit.xml
fail_ci_if_error: true
@ -411,7 +461,7 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
uses: SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 # v7.0.0
uses: SonarSource/sonarqube-scan-action@299e4b793aaa83bf2aba7c9c14bedbb485688ec4 # v7.1.0
if: env.sonar_secret != ''
test-e2e:
name: Run end-to-end tests
@ -444,6 +494,11 @@ jobs:
GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Free Disk Space (Ubuntu)
uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be
with:
@ -454,12 +509,21 @@ jobs:
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
- name: Set GOPATH
run: |
echo "GOPATH=$HOME/go" >> $GITHUB_ENV
- name: Setup NodeJS
uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
with:
# renovate: datasource=node-version packageName=node versioning=node
node-version: '24.14.1'
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
with:
package_json_file: ui/package.json
- name: GH actions workaround - Kill XSP4 process
run: |
sudo pkill mono || true
@ -475,11 +539,15 @@ jobs:
sudo chown $(whoami) $HOME/.kube/config
sudo chmod go-r $HOME/.kube/config
kubectl version
- name: Restore go build cache
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
- name: Restore go build and module cache
uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
with:
path: ~/.cache/go-build
key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
path: |
~/.cache/go-build
~/go/pkg/mod
key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-build-v1-
- name: Add ~/go/bin to PATH
run: |
echo "$HOME/go/bin" >> $GITHUB_PATH
@ -489,10 +557,12 @@ jobs:
- name: Add ./dist to PATH
run: |
echo "$(pwd)/dist" >> $GITHUB_PATH
- name: Download Go dependencies
- name: Download Go modules
run: |
go mod download
go install github.com/mattn/goreman@latest
- name: Install goreman
run: |
go install github.com/mattn/goreman@v0.3.17
- name: Install all tools required for building & testing
run: |
make install-test-tools-local
@ -534,13 +604,13 @@ jobs:
goreman run stop-all || echo "goreman trouble"
sleep 30
- name: Upload e2e coverage report
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: e2e-code-coverage
path: /tmp/coverage
if: ${{ matrix.k3s.latest }}
- name: Upload e2e-server logs
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: e2e-server-k8s${{ matrix.k3s.version }}.log
path: /tmp/e2e-server.log
@ -560,6 +630,11 @@ jobs:
- changes
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- run: |
result="${{ needs.test-e2e.result }}"
# mark as successful even if skipped

@ -28,6 +28,10 @@ concurrency:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
CodeQL-Build:
permissions:
@ -39,18 +43,24 @@ jobs:
# CodeQL runs on ubuntu-latest and windows-latest
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version-file: go.mod
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
uses: github/codeql-action/init@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
# Override language selection by uncommenting this and choosing your languages
# with:
# languages: go, javascript, csharp, python, cpp, java
@ -58,7 +68,7 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
uses: github/codeql-action/autobuild@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
@ -72,4 +82,4 @@ jobs:
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
uses: github/codeql-action/analyze@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2

@ -45,6 +45,10 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
publish:
permissions:
@ -55,6 +59,12 @@ jobs:
outputs:
image-digest: ${{ steps.image.outputs.digest }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@ -67,16 +77,26 @@ jobs:
if: ${{ github.ref_type != 'tag'}}
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ inputs.go-version }}
cache: false
- name: Install cosign
uses: sigstore/cosign-installer@ba7bc0a3fef59531c69a25acd34668d6d3fe6f22 # v4.1.0
uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1
- uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
- uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Setup QEMU
uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
with:
image: tonistiigi/binfmt@sha256:d3b963f787999e6c0219a48dba02978769286ff61a5f4d26245cb6a6e5567ea3 #qemu-v10.0.4
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
with:
# buildkit v0.28.1
driver-opts: |
image=moby/buildkit@sha256:a82d1ab899cda51aade6fe818d71e4b58c4079e047a0cf29dbb93b2b0465ea69
- name: Setup tags for container image as a CSV type
run: |
@ -103,7 +123,7 @@ jobs:
echo 'EOF' >> $GITHUB_ENV
- name: Login to Quay.io
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: quay.io
username: ${{ secrets.quay_username }}
@ -111,7 +131,7 @@ jobs:
if: ${{ inputs.quay_image_name && inputs.push }}
- name: Login to GitHub Container Registry
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
registry: ghcr.io
username: ${{ secrets.ghcr_username }}
@ -119,7 +139,7 @@ jobs:
if: ${{ inputs.ghcr_image_name && inputs.push }}
- name: Login to dockerhub Container Registry
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
with:
username: ${{ secrets.docker_username }}
password: ${{ secrets.docker_password }}
@ -142,7 +162,7 @@ jobs:
- name: Build and push container image
id: image
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 #v7.0.0
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f #v7.1.0
with:
context: .
platforms: ${{ inputs.platforms }}

@ -15,6 +15,10 @@ concurrency:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
set-vars:
permissions:
@ -31,6 +35,12 @@ jobs:
ghcr_provenance_image: ${{ steps.image.outputs.ghcr_provenance_image }}
allow_ghcr_publish: ${{ steps.image.outputs.allow_ghcr_publish }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Set image tag and names
@ -86,7 +96,7 @@ jobs:
with:
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.0
go-version: 1.26.2
platforms: ${{ needs.set-vars.outputs.platforms }}
push: false
@ -103,7 +113,7 @@ jobs:
ghcr_image_name: ${{ needs.set-vars.outputs.ghcr_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.0
go-version: 1.26.2
platforms: ${{ needs.set-vars.outputs.platforms }}
push: true
secrets:

@ -14,6 +14,10 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
prepare-release:
permissions:
@ -28,6 +32,12 @@ jobs:
IMAGE_NAMESPACE: ${{ vars.IMAGE_NAMESPACE || 'argoproj' }}
IMAGE_REPOSITORY: ${{ vars.IMAGE_REPOSITORY || 'argocd' }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:

@ -6,6 +6,10 @@ on:
permissions: {}
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
# PR updates can happen in quick succession, leading to this
# workflow being triggered a number of times. This limits it
# to one run per PR.
@ -21,6 +25,12 @@ jobs:
name: Validate PR Title
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- uses: thehanimo/pr-title-checker@7fbfe05602bdd86f926d3fb3bccb6f3aed43bc70 # v1.4.3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@ -11,8 +11,10 @@ permissions: {}
env:
# renovate: datasource=golang-version packageName=golang
GOLANG_VERSION: '1.26.0' # Note: go-version must also be set in job argocd-image.with.go-version
GOLANG_VERSION: '1.26.2' # Note: go-version must also be set in job argocd-image.with.go-version
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
argocd-image:
needs: [setup-variables]
@ -26,7 +28,7 @@ jobs:
quay_image_name: ${{ needs.setup-variables.outputs.quay_image_name }}
# Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
# renovate: datasource=golang-version packageName=golang
go-version: 1.26.0
go-version: 1.26.2
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
secrets:
@ -47,6 +49,11 @@ jobs:
provenance_image: ${{ steps.var.outputs.provenance_image }}
allow_fork_release: ${{ steps.var.outputs.allow_fork_release }}
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@ -133,7 +140,7 @@ jobs:
run: git fetch --force --tags
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@ -159,10 +166,10 @@ jobs:
tool-cache: false
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0
uses: goreleaser/goreleaser-action@e24998b8b67b290c2fa8b7c14fcfa7de2c5c9b8c # v7.1.0
id: run-goreleaser
with:
version: latest
version: v2.14.3
args: release --clean --timeout 55m
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@ -218,8 +225,13 @@ jobs:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
with:
package_json_file: ui/package.json
- name: Setup Golang
uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
with:
go-version: ${{ env.GOLANG_VERSION }}
cache: false
@ -231,28 +243,37 @@ jobs:
SPDX_GEN_VERSION: v0.0.13
# defines the sigs.k8s.io/bom version to use.
SIGS_BOM_VERSION: v0.2.1
# comma delimited list of project relative folders to inspect for package
# managers (gomod, yarn, npm).
PROJECT_FOLDERS: '.,./ui'
# full qualified name of the docker image to be inspected
DOCKER_IMAGE: ${{ needs.setup-variables.outputs.quay_image_name }}
run: |
yarn install --cwd ./ui
set -euo pipefail
pnpm install --dir ./ui --frozen-lockfile
go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
go install sigs.k8s.io/bom/cmd/bom@$SIGS_BOM_VERSION
# Generate SPDX for project dependencies analyzing package managers
for folder in $(echo $PROJECT_FOLDERS | sed "s/,/ /g")
do
generator -p $folder -o /tmp
done
generator -p . -o /tmp
# Generate SPDX for binaries analyzing the docker image
if [[ ! -z $DOCKER_IMAGE ]]; then
bom generate -o /tmp/bom-docker-image.spdx -i $DOCKER_IMAGE
# Once ui/ can use in-repo pnpm for `pnpm sbom` (pnpm 11+):
# 1. In ui/package.json set "packageManager" to a pnpm 11+ release (e.g. pnpm@11.0.0), then from ./ui run
# `pnpm install` and commit the resulting ui/pnpm-lock.yaml so release CI's pnpm/action-setup matches.
# 2. Delete hack/generate-ui-pnpm-sbom.sh and remove the ./hack/generate-ui-pnpm-sbom.sh line below.
# 3. Uncomment:
# pnpm --dir ./ui sbom --sbom-format spdx --prod > /tmp/bom-ui-pnpm.spdx.json
./hack/generate-ui-pnpm-sbom.sh --write /tmp/bom-ui-pnpm.spdx.json
if [[ -n "${DOCKER_IMAGE:-}" ]]; then
bom generate -o /tmp/bom-docker-image.spdx -i "${DOCKER_IMAGE}"
fi
cd /tmp && tar -zcf sbom.tar.gz *.spdx
cd /tmp
shopt -s nullglob
spdx_files=( *.spdx )
shopt -u nullglob
if [[ ${#spdx_files[@]} -eq 0 ]]; then
echo "No .spdx files produced under /tmp"
exit 1
fi
tar -zcf sbom.tar.gz "${spdx_files[@]}" bom-ui-pnpm.spdx.json
- name: Generate SBOM hash
shell: bash
@ -264,7 +285,7 @@ jobs:
echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"
- name: Upload SBOM
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2.5.0
uses: softprops/action-gh-release@b4309332981a82ec1c5618f44dd2e27cc8bfbfda # v3.0.0
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
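The nullglob guard in the SBOM hunk above is worth isolating. This sketch (the temp directory is a hypothetical stand-in for the workflow's /tmp) shows why the guard matters: without nullglob, an unmatched `*.spdx` glob stays literal, the array holds one bogus entry, and tar would be handed a nonexistent file instead of the script failing fast.

```shell
workdir=$(mktemp -d)
cd "$workdir"

# With nullglob set, an unmatched glob expands to nothing,
# so the emptiness check below actually fires.
shopt -s nullglob
spdx_files=( *.spdx )
shopt -u nullglob

if [ "${#spdx_files[@]}" -eq 0 ]; then
  echo "No .spdx files produced under $workdir"
fi
```

Toggling the option off again right after the expansion, as the hunk does, keeps the nullglob behavior scoped to the one glob that needs it.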

@ -7,14 +7,38 @@ on:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
renovate:
runs-on: ubuntu-24.04
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Harden the runner (Block unknown outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: block
disable-sudo-and-containers: "false" # renovatebot runs in `docker run`
allowed-endpoints: >
github.com:443
api.github.com:443
raw.githubusercontent.com:443
release-assets.githubusercontent.com:443
ghcr.io:443
pkg-containers.githubusercontent.com:443
hub.docker.com:443
proxy.golang.org:443
nodejs.org:443
pypi.org:443
get.helm.sh
registry.npmjs.org
- name: Get token
id: get_token
uses: actions/create-github-app-token@d72941d797fd3113feb6b93fd0dec494b13a2547 # v1
uses: actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3
with:
app-id: ${{ vars.RENOVATE_APP_ID }}
private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}
@ -22,11 +46,17 @@ jobs:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # 6.0.2
# Renovate does not pin its docker image versions to SHA, so
# when bumping the renovate action version please check whether the renovate image
# has been updated (see its numeric version in action.yaml)
# and update the `renovate-version` parameter accordingly
- name: Self-hosted Renovate
uses: renovatebot/github-action@abd08c7549b2a864af5df4a2e369c43f035a6a9d #46.1.5
uses: renovatebot/github-action@83ec54fee49ab67d9cd201084c1ff325b4b462e4 #46.1.10
with:
configurationFile: .github/configs/renovate-config.js
token: '${{ steps.get_token.outputs.token }}'
renovate-image: "ghcr.io/renovatebot/renovate@sha256"
renovate-version: "5dfeab680f40edd2713b8fcae574824e60d2c831b8d89cc965e51621894c7084" #43
env:
LOG_LEVEL: 'debug'
RENOVATE_REPOSITORIES: '${{ github.repository }}'


@ -29,6 +29,12 @@ jobs:
if: github.repository == 'argoproj/argo-cd'
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
- name: "Checkout code"
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
@ -54,7 +60,7 @@ jobs:
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
with:
name: SARIF file
path: results.sarif
@ -62,6 +68,6 @@ jobs:
# Upload the results to GitHub's code scanning dashboard.
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
uses: github/codeql-action/upload-sarif@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
with:
sarif_file: results.sarif


@ -8,10 +8,23 @@ permissions:
issues: write
pull-requests: write
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
stale:
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Block unknown outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: block
disable-sudo-and-containers: "true"
allowed-endpoints: >
api.github.com:443
- uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}


@ -7,6 +7,10 @@ on:
permissions:
contents: read
env:
# a workaround to disable harden runner
STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
jobs:
snyk-report:
permissions:
@ -16,6 +20,12 @@ jobs:
name: Update Snyk report in the docs directory
runs-on: ubuntu-24.04
steps:
- name: Harden the runner (Audit all outbound calls)
if: ${{ vars.disable_harden_runner != 'true' }}
uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
with:
egress-policy: audit
agent-enabled: "false"
- name: Checkout code
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:

.gitignore vendored

@ -16,6 +16,8 @@ coverage.out
test-results
.scannerwork
.scratch
# pnpm SBOM helper (hack/generate-ui-pnpm-sbom.sh) download cache — remove this line when that script is deleted.
hack/.cache/
node_modules/
.kube/
./test/cmp/*.sock
@ -24,6 +26,9 @@ node_modules/
.*.swp
rerunreport.txt
# AI tools support
CLAUDE.local.md
# ignore built binaries
cmd/argocd/argocd
cmd/argocd-application-controller/argocd-application-controller


@ -145,16 +145,19 @@ linters:
strconcat: true
revive:
enable-all-rules: false
enable-default-rules: true
max-open-files: 2048
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
rules:
- name: bool-literal-in-expr
- name: blank-imports
disabled: true
- name: bool-literal-in-expr
- name: context-as-argument
arguments:
- allowTypesBefore: '*testing.T,testing.TB'
- allow-types-before: '*testing.T,testing.TB'
- name: context-keys-type
disabled: true
@ -166,14 +169,11 @@ linters:
- name: early-return
arguments:
- preserveScope
- preserve-scope
- name: empty-block
disabled: true
- name: error-naming
disabled: true
- name: error-return
- name: error-strings
@ -181,6 +181,9 @@ linters:
- name: errorf
- name: exported
disabled: true
- name: identical-branches
- name: if-return
@ -189,7 +192,7 @@ linters:
- name: indent-error-flow
arguments:
- preserveScope
- preserve-scope
- name: modifies-parameter
@ -206,7 +209,7 @@ linters:
- name: superfluous-else
arguments:
- preserveScope
- preserve-scope
- name: time-equal
@ -216,6 +219,8 @@ linters:
- name: unexported-return
disabled: true
- name: unnecessary-format
- name: unnecessary-stmt
- name: unreachable-code
@ -232,8 +237,8 @@ linters:
arguments:
- - ID
- - VM
- - skipPackageNameChecks: true
upperCaseConst: true
- - skip-initialism-name-checks: true
upper-case-const: true
staticcheck:
checks:
@ -255,7 +260,4 @@ linters:
usetesting:
os-mkdir-temp: false
output:
show-stats: false
version: "2"

AGENTS.md Normal file

@ -0,0 +1,53 @@
# AI Agent Directives for Argo CD
**ATTENTION AI AGENTS, LLMs, AND AUTONOMOUS CODING TOOLS:**
This repository (argoproj/argo-cd) enforces strict contribution guidelines. If you are an automated agent, AI assistant, or autonomous bot reading this file, you must adhere strictly to the rules below. Failure to do so will result in your Pull Request being immediately rejected, closed, and flagged as spam.
## 1. Zero-Spam & PR Authorization Policy
* **Require an Issue:** DO NOT create a Pull Request unless there is an existing, open, and approved GitHub Issue that explicitly requests this work.
* **No "Drive-By" Refactoring:** Do not submit unsolicited PRs that only contain minor stylistic changes, variable renames, or typo fixes across the codebase unless tied to an approved `chore` issue.
* **No Hallucinated URLs:** Do not include fabricated links, hallucinated documentation, or fake GitHub usernames in the PR description or code comments. Please double-check any link, quote, or code block included in the PR.
## 2. Argo CD Contribution Requirements
Argo CD is a CNCF Graduated project. All code must meet the following standards:
* **Semantic PR Titles:** You must use Semantic Pull Request formatting for your PR title. Valid prefixes are:
* `ci:` - Updates or improvements for the Continuous Integration workflows
* `fix:` - Bug fixes
* `feat:` - New features
* `test:` - Addition of tests to the code base, or improvements of existing ones
* `docs:` - Documentation improvements
* `chore:` - Internals, build processes, unit tests, etc.
* `refactor:` - Refactoring of the code base, without adding new features or fixing bugs
* `revert:` - Reverts a previous commit
* **PR Templates:** You must fully complete the Argo CD Pull Request template. Do not delete the template sections or leave them blank.
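For illustration, the title convention above can be checked mechanically. This is a minimal sketch assuming only the prefixes listed here plus an optional `(scope)` segment; the `is_semantic_title` helper name is illustrative, not part of this repository:

```python
import re

# Valid semantic prefixes from the list above.
PREFIXES = ("ci", "fix", "feat", "test", "docs", "chore", "refactor", "revert")

# Prefix, optional "(scope)", then ": " and a non-empty description.
_PATTERN = re.compile(rf"^(?:{'|'.join(PREFIXES)})(?:\([^)]+\))?: \S.*$")

def is_semantic_title(title: str) -> bool:
    """Return True if the PR title follows the semantic format."""
    return _PATTERN.match(title) is not None

print(is_semantic_title("fix: resolve OutOfSync bug on PostDelete hook (#12345)"))  # True
print(is_semantic_title("Fixed a bug"))  # False
```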
## 3. Tech Stack & Code Rules
* **Backend (Go):** The backend is written in Go. The minimum supported Go version is strictly enforced. You must use `go modules` for dependency management.
* **UI (React/TypeScript):** The frontend is written in React and TypeScript.
* **Kubernetes Manifests:** Argo CD heavily relies on Kubernetes manifests and CRDs. If you modify API structs, you MUST regenerate the manifests and API glue code.
* **Tests:** Argo CD relies on automated tests. If your PR adds new functionality or modifies program behaviour in any way, please add or update the relevant unit and e2e tests. When that is not feasible or possible, please document the reasons in the PR description.
## 4. Required Local Checks (Do This Before Committing)
Do not finalize your code or suggest a commit to your user without ensuring the following `make` targets pass successfully. Argo CD uses a heavy CI pipeline, and failing these basic checks wastes project resources:
1. **Build the Code:** `make build`
2. **Generate API Code & Manifests:** `make codegen` *(CRITICAL: Must be run if any API structs are changed)*
3. **Linting:** `make lint` and `make lint-ui`
4. **Testing:** `make test`
5. **CLI Build:** `make cli`
If any of these commands fail, you must fix the errors before proceeding.
## 5. Documentation (`docs/`)
If you are modifying or adding a feature, you must also update the corresponding documentation.
* Write in clear, direct English.
* Use GitHub style admonition blocks (e.g., `> [!NOTE]`, `> [!WARNING]`) compatible with MkDocs Material.
* Code examples in documentation must be complete, accurate, and include the language identifier for syntax highlighting (e.g. `` ```yaml ``).
## Summary of Agent Workflow
1. Verify an open issue exists.
2. Write code matching Argo CD's Go/React standards.
3. Run `make codegen`, `make lint`, and `make test`.
4. Format the PR title properly (e.g., `fix: resolve OutOfSync bug on PostDelete hook (#12345)`).

CLAUDE.md Normal file

@ -0,0 +1 @@
@AGENTS.md


@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:25.10@sha256:4a9232cc47bf99defcc8860ef62
# Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
# Also used as the image in CI jobs so needs all dependencies
####################################################################################################
FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS builder
FROM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36 AS builder
WORKDIR /tmp
@ -92,25 +92,24 @@ WORKDIR /home/argocd
####################################################################################################
# Argo CD UI stage
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:9d09fa506f5b8465c5221cbd6f980e29ae0ce9a3119e2b9bc0842e6a3f37bb59 AS argocd-ui
FROM --platform=$BUILDPLATFORM docker.io/library/node:24.14.1@sha256:80fc934952c8f1b2b4d39907af7211f8a9fff1a4c2cf673fb49099292c251cec AS argocd-ui
WORKDIR /src
COPY ["ui/package.json", "ui/yarn.lock", "./"]
COPY ["ui/package.json", "ui/pnpm-lock.yaml", "./"]
RUN yarn install --network-timeout 200000 && \
yarn cache clean
RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install --frozen-lockfile
COPY ["ui/", "."]
ARG ARGO_VERSION=latest
ENV ARGO_VERSION=$ARGO_VERSION
ARG TARGETARCH
RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 yarn build
RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 pnpm build
####################################################################################################
# Argo CD Build stage which performs the actual build of Argo CD binaries
####################################################################################################
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS argocd-build
FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36 AS argocd-build
WORKDIR /go/src/github.com/argoproj/argo-cd


@ -1,4 +1,4 @@
FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84
FROM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36
ENV DEBIAN_FRONTEND=noninteractive


@ -1,9 +1,10 @@
FROM node:20
FROM node:24.14.1@sha256:80fc934952c8f1b2b4d39907af7211f8a9fff1a4c2cf673fb49099292c251cec
WORKDIR /app/ui
COPY ui /app/ui
RUN yarn install
RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install --frozen-lockfile
ENTRYPOINT ["pnpm", "start"]
ENTRYPOINT ["yarn", "start"]


@ -20,7 +20,7 @@ This document lists the maintainers of the Argo CD project.
| Christian Hernandez | [christianh814](https://github.com/christianh814) | Reviewer(docs) | [Akuity](https://akuity.io/) |
| Peter Jiang | [pjiang-dev](https://github.com/pjiang-dev) | Approver(docs) | [Intuit](https://www.intuit.com/) |
| Andrii Korotkov | [andrii-korotkov](https://github.com/andrii-korotkov) | Reviewer | [Verkada](https://www.verkada.com/) |
| Pasha Kostohrys | [pasha-codefresh](https://github.com/pasha-codefresh) | Approver | [Codefresh](https://www.github.com/codefresh/) |
| Pasha Kostohrys | [pasha-codefresh](https://github.com/pasha-codefresh) | Approver | [Octopus Deploy](https://octopus.com/) |
| Nitish Kumar | [nitishfy](https://github.com/nitishfy) | Approver(cli,docs) | [Akuity](https://akuity.io/) |
| Justin Marquis | [34fathombelow](https://github.com/34fathombelow) | Approver(docs/ci) | [Akuity](https://akuity.io/) |
| Alexander Matyushentsev | [alexmt](https://github.com/alexmt) | Lead | [Akuity](https://akuity.io/) |


@ -74,7 +74,7 @@ ARGOCD_E2E_APISERVER_PORT?=8080
ARGOCD_E2E_REPOSERVER_PORT?=8081
ARGOCD_E2E_REDIS_PORT?=6379
ARGOCD_E2E_DEX_PORT?=5556
ARGOCD_E2E_YARN_HOST?=localhost
ARGOCD_E2E_JS_HOST?=localhost
ARGOCD_E2E_DISABLE_AUTH?=
ARGOCD_E2E_DIR?=/tmp/argo-e2e
@ -113,7 +113,7 @@ define run-in-test-server
-e GOCACHE=/tmp/go-build-cache \
-e ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
-e ARGOCD_E2E_TEST=$(ARGOCD_E2E_TEST) \
-e ARGOCD_E2E_YARN_HOST=$(ARGOCD_E2E_YARN_HOST) \
-e ARGOCD_E2E_JS_HOST=$(ARGOCD_E2E_JS_HOST) \
-e ARGOCD_E2E_DISABLE_AUTH=$(ARGOCD_E2E_DISABLE_AUTH) \
-e ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} \
-e ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} \
@ -419,7 +419,7 @@ lint-ui: test-tools-image
.PHONY: lint-ui-local
lint-ui-local:
cd ui && yarn lint
cd ui && pnpm lint
# Build all Go code
.PHONY: build
@ -487,7 +487,7 @@ test-e2e:
test-e2e-local: cli-local
# NO_PROXY ensures all tests don't go out through a proxy if one is configured on the test system
export GO111MODULE=off
DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=$${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
# Spawns a shell in the test server container for debugging purposes
debug-test-server: test-tools-image
@ -662,8 +662,17 @@ install-go-tools-local:
dep-ui: test-tools-image
$(call run-in-test-client,make dep-ui-local)
.PHONY: dep-ui-local
dep-ui-local:
cd ui && yarn install
cd ui && pnpm install --frozen-lockfile
.PHONY: run-pnpm
run-pnpm: test-tools-image
$(call run-in-test-client,make 'PNPM_COMMAND=$(PNPM_COMMAND)' run-pnpm-local)
.PHONY: run-pnpm-local
run-pnpm-local:
cd ui && pnpm $(PNPM_COMMAND)
start-test-k8s:
go run ./hack/k8s


@ -2,13 +2,13 @@ controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v3/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex:v2.45.0" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
redis: hack/start-redis-with-password.sh
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=./dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=\$(pwd)/dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
ui: sh -c 'cd ui && ${ARGOCD_E2E_PNPM_CMD:-pnpm} start'
git-server: test/fixture/testrepos/start-git.sh
helm-registry: test/fixture/testrepos/start-helm-registry.sh
oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"


@ -19,7 +19,7 @@
## What is Argo CD?
Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
Argo CD is a declarative GitOps continuous delivery tool for Kubernetes.
![Argo CD UI](docs/assets/argocd-ui.gif)
@ -45,7 +45,7 @@ Check live demo at https://cd.apps.argoproj.io/.
You can reach the Argo CD community and developers via the following channels:
* Q & A : [Github Discussions](https://github.com/argoproj/argo-cd/discussions)
* Q & A : [GitHub Discussions](https://github.com/argoproj/argo-cd/discussions)
* Chat : [The #argo-cd Slack channel](https://argoproj.github.io/community/join-slack)
* Contributors Office Hours: [Every Thursday](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
* User Community meeting: [First Wednesday of the month](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1ttgw98MO45Dq7ZUHpIiOIEfbyeitKHNfMjbY5dLLMKQ)


@ -3,9 +3,9 @@ header:
expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
last-updated: '2023-10-27'
last-reviewed: '2023-10-27'
commit-hash: 814db444c36503851dc3d45cf9c44394821ca1a4
commit-hash: d91a2ab3bf1b1143fb273fa06f54073fc78f41f1
project-url: https://github.com/argoproj/argo-cd
project-release: v3.4.0
project-release: v3.5.0
changelog: https://github.com/argoproj/argo-cd/releases
license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
project-lifecycle:


@ -80,24 +80,7 @@ We will publish security advisories using the
feature to keep our community well-informed, and will credit you for your
findings (unless you prefer to stay anonymous, of course).
There are two ways to report a vulnerability to the Argo CD team:
* By opening a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new
* By e-mail to the following address: cncf-argo-security@lists.cncf.io
## Internet Bug Bounty collaboration
We're happy to announce that the Argo project is collaborating with the great
folks over at
[Hacker One](https://hackerone.com/) and their
[Internet Bug Bounty program](https://hackerone.com/ibb)
to reward the awesome people who find security vulnerabilities in the four
main Argo projects (CD, Events, Rollouts and Workflows) and then work with
us to fix and disclose them in a responsible manner.
If you report a vulnerability to us as outlined in this security policy, we
will work together with you to find out whether your finding is eligible for
claiming a bounty, and also on how to claim it.
To report a vulnerability to the Argo CD team, open a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new
## Securing your Argo CD Instance


@ -257,11 +257,11 @@ k8s_resource(
# ui dependencies
local_resource(
'node-modules',
'yarn',
'pnpm install',
dir='ui',
deps = [
'ui/package.json',
'ui/yarn.lock',
'ui/pnpm-lock.yaml',
],
allow_parallel=True,
)
@ -271,11 +271,11 @@ docker_build(
'argocd-ui',
context='.',
dockerfile='Dockerfile.ui.tilt',
entrypoint=['sh', '-c', 'cd /app/ui && yarn start'],
entrypoint=['sh', '-c', 'cd /app/ui && pnpm start'],
only=['ui'],
live_update=[
sync('ui', '/app/ui'),
run('sh -c "cd /app/ui && yarn install"', trigger=['/app/ui/package.json', '/app/ui/yarn.lock']),
run('sh -c "cd /app/ui && pnpm install --frozen-lockfile"', trigger=['/app/ui/package.json', '/app/ui/pnpm-lock.yaml']),
],
)


@ -65,6 +65,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Candis](https://www.candis.io)
1. [Capital One](https://www.capitalone.com)
1. [Capptain LTD](https://capptain.co/)
1. [Car & Classic](https://www.carandclassic.com)
1. [CARFAX Europe](https://www.carfax.eu)
1. [CARFAX](https://www.carfax.com)
1. [Carrefour Group](https://www.carrefour.com)
@ -76,6 +77,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Chime](https://www.chime.com)
1. [Chronicle Labs](https://chroniclelabs.org)
1. [C.H.Robinson ](https://www.chrobinson.com)
1. [Circle](https://circle.com/)
1. [Cisco ET&I](https://eti.cisco.com/)
1. [Close](https://www.close.com/)
1. [Cloud Posse](https://www.cloudposse.com/)
@ -240,6 +242,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Mission Lane](https://missionlane.com)
1. [mixi Group](https://mixi.co.jp/)
1. [Moengage](https://www.moengage.com/)
1. [Mollie](https://www.mollie.com/)
1. [Money Forward](https://corp.moneyforward.com/en/)
1. [MongoDB](https://www.mongodb.com/)
1. [MOO Print](https://www.moo.com/)
@ -296,6 +299,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Pismo](https://pismo.io/)
1. [PITS Globale Datenrettungsdienste](https://www.pitsdatenrettung.de/)
1. [Platform9 Systems](https://platform9.com/)
1. [Playground Tech](https://playgroundgroup.io)
1. [Polarpoint.io](https://polarpoint.io)
1. [Pollinate](https://www.pollinate.global)
1. [PostFinance](https://github.com/postfinance)
@ -380,6 +384,7 @@ Currently, the following organizations are **officially** using Argo CD:
1. [Tailor Brands](https://www.tailorbrands.com)
1. [Tamkeen Technologies](https://tamkeentech.sa/)
1. [TBC Bank](https://tbcbank.ge/)
1. [Techcom Securities](https://www.tcbs.com.vn/)
1. [Techcombank](https://www.techcombank.com.vn/trang-chu)
1. [Technacy](https://www.technacy.it/)
1. [Telavita](https://www.telavita.com.br/)


@ -1 +1 @@
3.4.0
3.5.0


@ -24,11 +24,13 @@ import (
"sort"
"strconv"
"strings"
"sync"
"time"
"github.com/google/go-cmp/cmp"
"github.com/google/go-cmp/cmp/cmpopts"
log "github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@ -74,6 +76,9 @@ const (
ReconcileRequeueOnValidationError = time.Minute * 3
ReverseDeletionOrder = "Reverse"
AllAtOnceDeletionOrder = "AllAtOnce"
revisionAndSpecChangedMsg = "Application has pending changes (revision and spec differ), setting status to Waiting"
revisionChangedMsg = "Application has pending changes, setting status to Waiting"
specChangedMsg = "Application has pending changes (spec differs), setting status to Waiting"
)
var defaultPreservedFinalizers = []string{
@ -103,15 +108,16 @@ type ApplicationSetReconciler struct {
Policy argov1alpha1.ApplicationsSyncPolicy
EnablePolicyOverride bool
utils.Renderer
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
ArgoCDNamespace string
ApplicationSetNamespaces []string
EnableProgressiveSyncs bool
SCMRootCAPath string
GlobalPreservedAnnotations []string
GlobalPreservedLabels []string
Metrics *metrics.ApplicationsetMetrics
MaxResourcesStatusCount int
ClusterInformer *settings.ClusterInformer
ConcurrentApplicationUpdates int
}
// +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@ -688,108 +694,133 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
// - For existing application, it will call update
// The function also adds owner reference to all applications, and uses it to delete them.
func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
var firstError error
// Creates or updates the application in appList
for _, generatedApp := range desiredApplications {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
// Build the diff config once per reconcile.
// Diff config is per applicationset, so generate it once for all applications
diffConfig, err := utils.BuildIgnoreDiffConfig(applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{})
if err != nil {
return fmt.Errorf("failed to build ignore diff config: %w", err)
}
g, ctx := errgroup.WithContext(ctx)
concurrency := r.concurrency()
g.SetLimit(concurrency)
var appErrorsMu sync.Mutex
appErrors := map[string]error{}
for _, generatedApp := range desiredApplications {
// Normalize to avoid fighting with the application controller.
generatedApp.Spec = *argoutil.NormalizeApplicationSpec(&generatedApp.Spec)
g.Go(func() error {
appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
found := &argov1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: generatedApp.Name,
Namespace: generatedApp.Namespace,
},
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
}
action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{}, found, func() error {
// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
found.Spec = generatedApp.Spec
// allow setting the Operation field to trigger a sync operation on an Application
if generatedApp.Operation != nil {
found.Operation = generatedApp.Operation
}
preservedAnnotations := make([]string, 0)
preservedLabels := make([]string, 0)
if applicationSet.Spec.PreservedFields != nil {
preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
}
if len(r.GlobalPreservedAnnotations) > 0 {
preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
}
if len(r.GlobalPreservedLabels) > 0 {
preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
}
// Preserve specially treated argo cd annotations:
// * https://github.com/argoproj/applicationset/issues/180
// * https://github.com/argoproj/argo-cd/issues/10500
preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)
for _, key := range preservedAnnotations {
if state, exists := found.Annotations[key]; exists {
if generatedApp.Annotations == nil {
generatedApp.Annotations = map[string]string{}
}
generatedApp.Annotations[key] = state
}
}
for _, key := range preservedLabels {
if state, exists := found.Labels[key]; exists {
if generatedApp.Labels == nil {
generatedApp.Labels = map[string]string{}
}
generatedApp.Labels[key] = state
}
}
// Preserve deleting finalizers and avoid diff conflicts
for _, finalizer := range defaultPreservedFinalizers {
for _, f := range found.Finalizers {
// For finalizers, use prefix matching in case it contains "/" stages
if strings.HasPrefix(f, finalizer) {
generatedApp.Finalizers = append(generatedApp.Finalizers, f)
}
}
}
found.Annotations = generatedApp.Annotations
found.Labels = generatedApp.Labels
found.Finalizers = generatedApp.Finalizers
return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
})
if err != nil {
appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
// For backwards compatibility with sequential behavior: continue processing other applications
// but record the error keyed by app name so we can deterministically return the error from
// the lexicographically first failing app, regardless of goroutine scheduling order.
appErrorsMu.Lock()
appErrors[generatedApp.Name] = err
appErrorsMu.Unlock()
return nil
}
if action != controllerutil.OperationResultNone {
// Don't pollute etcd with "unchanged Application" events
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
appLog.Logf(log.InfoLevel, "%s Application", action)
} else {
// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
// Or enable debug logging
appLog.Logf(log.DebugLevel, "%s Application", action)
}
return nil
})
}
if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
return firstAppError(appErrors)
}
// createInCluster will filter from the desiredApplications only the application that needs to be created
@ -849,36 +880,84 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
m[app.Name] = true
}
g, ctx := errgroup.WithContext(ctx)
concurrency := r.concurrency()
g.SetLimit(concurrency)
var appErrorsMu sync.Mutex
appErrors := map[string]error{}
// Delete apps that are not in m[string]bool
for _, app := range current {
_, exists := m[app.Name]
if exists {
continue
}
appLogCtx := logCtx.WithFields(applog.GetAppLogFields(&app))
g.Go(func() error {
// Removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, appLogCtx)
if err != nil {
appLogCtx.WithError(err).Error("failed to update Application")
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
// For backwards compatibility with sequential behavior: continue processing other applications
// but record the error keyed by app name so we can deterministically return the error from
// the lexicographically first failing app, regardless of goroutine scheduling order.
appErrorsMu.Lock()
appErrors[app.Name] = err
appErrorsMu.Unlock()
return nil
}
err = r.Delete(ctx, &app)
if err != nil {
appLogCtx.WithError(err).Error("failed to delete Application")
// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
appErrorsMu.Lock()
appErrors[app.Name] = err
appErrorsMu.Unlock()
return nil
}
r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, "Deleted", "Deleted Application %q", app.Name)
appLogCtx.Log(log.InfoLevel, "Deleted application")
return nil
})
}
if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return err
}
return firstAppError(appErrors)
}
// concurrency returns the configured number of concurrent application updates, defaulting to 1.
func (r *ApplicationSetReconciler) concurrency() int {
if r.ConcurrentApplicationUpdates <= 0 {
return 1
}
return r.ConcurrentApplicationUpdates
}
// firstAppError returns the error associated with the lexicographically smallest application name
// in the provided map. This gives a deterministic result when multiple goroutines may have
// recorded errors concurrently, matching the behavior of the original sequential loop where the
// first application in iteration order would determine the returned error.
func firstAppError(appErrors map[string]error) error {
if len(appErrors) == 0 {
return nil
}
names := make([]string, 0, len(appErrors))
for name := range appErrors {
names = append(names, name)
}
sort.Strings(names)
return appErrors[names[0]]
}
// removeFinalizerOnInvalidDestination removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
@ -967,7 +1046,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)
_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, desiredApplications, appStepMap)
if err != nil {
return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
}
@ -1144,10 +1223,16 @@ func getAppStep(appName string, appStepMap map[string]int) int {
}
// check the status of each Application's status and promote Applications to the next status if needed
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
now := metav1.Now()
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
// Build a map of desired applications for quick lookup
desiredAppsMap := make(map[string]*argov1alpha1.Application)
for i := range desiredApplications {
desiredAppsMap[desiredApplications[i].Name] = &desiredApplications[i]
}
for _, app := range applications {
appHealthStatus := app.Status.Health.Status
appSyncStatus := app.Status.Sync.Status
@ -1182,10 +1267,27 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
newAppStatus := currentAppStatus.DeepCopy()
newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
revisionsChanged := !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions())
// Check if the desired Application spec differs from the current Application spec
specChanged := false
if desiredApp, ok := desiredAppsMap[app.Name]; ok {
// Compare the desired spec with the current spec to detect non-Git changes
// This will catch changes to generator parameters like image tags, helm values, etc.
specChanged = !cmp.Equal(desiredApp.Spec, app.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{}))
}
if revisionsChanged || specChanged {
newAppStatus.TargetRevisions = app.Status.GetRevisions()
switch {
case revisionsChanged && specChanged:
newAppStatus.Message = revisionAndSpecChangedMsg
case revisionsChanged:
newAppStatus.Message = revisionChangedMsg
default:
newAppStatus.Message = specChangedMsg
}
newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
newAppStatus.LastTransitionTime = &now
}


@ -25,6 +25,7 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
crtclient "sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
"sigs.k8s.io/controller-runtime/pkg/event"
@ -1077,6 +1078,70 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
},
},
},
{
name: "Ensure that unnormalized live spec does not cause a spurious patch",
appSet: v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSetSpec{
Template: v1alpha1.ApplicationSetTemplate{
Spec: v1alpha1.ApplicationSpec{
Project: "project",
},
},
},
},
existingApps: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
// Without normalizing the live object, the equality check
// sees &SyncPolicy{} vs nil and issues an unnecessary patch.
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: nil,
},
},
},
expected: []v1alpha1.Application{
{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "2",
},
Spec: v1alpha1.ApplicationSpec{
Project: "project",
SyncPolicy: &v1alpha1.SyncPolicy{},
},
},
},
},
{
name: "Ensure that argocd pre-delete and post-delete finalizers are preserved from an existing app",
appSet: v1alpha1.ApplicationSet{
@ -1186,6 +1251,374 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
}
}
func TestCreateOrUpdateInCluster_Concurrent(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
t.Run("all apps are created correctly with concurrency > 1", func(t *testing.T) {
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.NoError(t, err)
for _, desired := range desiredApps {
got := &v1alpha1.Application{}
require.NoError(t, fakeClient.Get(t.Context(), crtclient.ObjectKey{Namespace: desired.Namespace, Name: desired.Name}, got))
assert.Equal(t, desired.Spec.Project, got.Spec.Project)
}
})
t.Run("non-context errors from concurrent goroutines are collected and one is returned", func(t *testing.T) {
existingApps := make([]v1alpha1.Application, 5)
initObjs := []crtclient.Object{&appSet}
for i := range existingApps {
existingApps[i] = v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
app := existingApps[i].DeepCopy()
require.NoError(t, controllerutil.SetControllerReference(&appSet, app, scheme))
initObjs = append(initObjs, app)
}
desiredApps := make([]v1alpha1.Application, 5)
for i := range desiredApps {
desiredApps[i] = v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("app%d", i),
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
}
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
ConcurrentApplicationUpdates: 5,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
require.ErrorIs(t, err, patchErr)
})
}
func TestCreateOrUpdateInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "old"},
}
desiredApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
Namespace: "namespace",
},
Spec: v1alpha1.ApplicationSpec{Project: "new"},
}
t.Run("context canceled on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on patch is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return context.DeadlineExceeded
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context error is collected and returned after all goroutines finish", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
initObjs = append(initObjs, app)
patchErr := errors.New("some patch error")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
return patchErr
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
require.ErrorIs(t, err, patchErr)
})
t.Run("context canceled on create is returned directly", func(t *testing.T) {
initObjs := []crtclient.Object{&appSet}
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(initObjs...).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Create: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.CreateOption) error {
return context.Canceled
},
}).
Build()
metrics := appsetmetrics.NewFakeAppsetMetrics()
r := ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
Metrics: metrics,
}
newApp := v1alpha1.Application{
ObjectMeta: metav1.ObjectMeta{Name: "newapp", Namespace: "namespace"},
Spec: v1alpha1.ApplicationSpec{Project: "default"},
}
err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{newApp})
require.ErrorIs(t, err, context.Canceled)
})
}
func TestDeleteInCluster_ContextCancellation(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
require.NoError(t, err)
err = corev1.AddToScheme(scheme)
require.NoError(t, err)
appSet := v1alpha1.ApplicationSet{
ObjectMeta: metav1.ObjectMeta{
Name: "name",
Namespace: "namespace",
},
}
existingApp := v1alpha1.Application{
TypeMeta: metav1.TypeMeta{
Kind: application.ApplicationKind,
APIVersion: "argoproj.io/v1alpha1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "delete-me",
Namespace: "namespace",
ResourceVersion: "1",
},
Spec: v1alpha1.ApplicationSpec{Project: "project"},
}
makeReconciler := func(t *testing.T, fakeClient crtclient.Client) ApplicationSetReconciler {
t.Helper()
kubeclientset := kubefake.NewClientset()
clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
require.NoError(t, err)
cancel := startAndSyncInformer(t, clusterInformer)
t.Cleanup(cancel)
return ApplicationSetReconciler{
Client: fakeClient,
Scheme: scheme,
Recorder: record.NewFakeRecorder(10),
KubeClientset: kubeclientset,
Metrics: appsetmetrics.NewFakeAppsetMetrics(),
ClusterInformer: clusterInformer,
}
}
t.Run("context canceled on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.Canceled
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.Canceled)
})
t.Run("context deadline exceeded on delete is returned directly", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return context.DeadlineExceeded
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, context.DeadlineExceeded)
})
t.Run("non-context delete error is collected and returned", func(t *testing.T) {
app := existingApp.DeepCopy()
err = controllerutil.SetControllerReference(&appSet, app, scheme)
require.NoError(t, err)
deleteErr := errors.New("delete failed")
fakeClient := fake.NewClientBuilder().
WithScheme(scheme).
WithObjects(&appSet, app).
WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
WithInterceptorFuncs(interceptor.Funcs{
Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
return deleteErr
},
}).
Build()
r := makeReconciler(t, fakeClient)
err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
require.ErrorIs(t, err, deleteErr)
})
}
func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
scheme := runtime.NewScheme()
err := v1alpha1.AddToScheme(scheme)
@ -4799,6 +5232,12 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
}
}
newAppWithSpec := func(name string, health health.HealthStatusCode, sync v1alpha1.SyncStatusCode, revision string, opState *v1alpha1.OperationState, spec v1alpha1.ApplicationSpec) v1alpha1.Application {
app := newApp(name, health, sync, revision, opState)
app.Spec = spec
return app
}
newOperationState := func(phase common.OperationPhase) *v1alpha1.OperationState {
finishedAt := &metav1.Time{Time: time.Now().Add(-1 * time.Second)}
if !phase.Completed() {
@ -4815,6 +5254,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
name string
appSet v1alpha1.ApplicationSet
apps []v1alpha1.Application
desiredApps []v1alpha1.Application
appStepMap map[string]int
expectedAppStatus []v1alpha1.ApplicationSetApplicationStatus
}{
@ -4968,14 +5408,14 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: revisionChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"next"},
},
{
Application: "app2-multisource",
Message: revisionChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"next"},
@ -5415,6 +5855,191 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
},
},
},
{
name: "detects spec changes when image tag changes in generator (same Git revision)",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
desiredApps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v2.0.0"}, // Different value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: specChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"abc123"},
},
},
},
{
name: "does not detect changes when spec is identical (same Git revision)",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeSynced, "abc123", nil,
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
// Desired apps have identical spec
desiredApps: []v1alpha1.Application{
{
ObjectMeta: metav1.ObjectMeta{
Name: "app1",
},
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"}, // Same value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
},
},
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"},
},
},
},
{
name: "detects both spec and revision changes",
appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: "",
Status: v1alpha1.ProgressiveSyncHealthy,
Step: "1",
TargetRevisions: []string{"abc123"}, // OLD revision in status
},
}),
apps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil, // NEW revision, but OutOfSync
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v1.0.0"},
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
desiredApps: []v1alpha1.Application{
newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil,
v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: "https://example.com/repo.git",
TargetRevision: "master",
Helm: &v1alpha1.ApplicationSourceHelm{
Parameters: []v1alpha1.HelmParameter{
{Name: "image.tag", Value: "v2.0.0"}, // Changed value
},
},
},
Destination: v1alpha1.ApplicationDestination{
Server: "https://kubernetes.default.svc",
Namespace: "default",
},
}),
},
appStepMap: map[string]int{
"app1": 0,
},
expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
{
Application: "app1",
Message: revisionAndSpecChangedMsg,
Status: v1alpha1.ProgressiveSyncWaiting,
Step: "1",
TargetRevisions: []string{"def456"},
},
},
},
} {
t.Run(cc.name, func(t *testing.T) {
kubeclientset := kubefake.NewClientset([]runtime.Object{}...)
@ -5434,7 +6059,11 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
Metrics: metrics,
}
desiredApps := cc.desiredApps
if desiredApps == nil {
desiredApps = cc.apps
}
appStatuses, err := r.updateApplicationSetApplicationStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps, desiredApps, cc.appStepMap)
// opt out of testing the LastTransitionTime is accurate
for i := range appStatuses {
@@ -7321,6 +7950,40 @@ func TestIsRollingSyncStrategy(t *testing.T) {
}
}
func TestFirstAppError(t *testing.T) {
errA := errors.New("error from app-a")
errB := errors.New("error from app-b")
errC := errors.New("error from app-c")
t.Run("returns nil for empty map", func(t *testing.T) {
assert.NoError(t, firstAppError(map[string]error{}))
})
t.Run("returns the single error", func(t *testing.T) {
assert.ErrorIs(t, firstAppError(map[string]error{"app-a": errA}), errA)
})
t.Run("returns error from lexicographically first app name", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
assert.ErrorIs(t, firstAppError(appErrors), errA)
})
t.Run("result is stable across multiple calls with same input", func(t *testing.T) {
appErrors := map[string]error{
"app-c": errC,
"app-a": errA,
"app-b": errB,
}
for range 10 {
assert.ErrorIs(t, firstAppError(appErrors), errA, "firstAppError must return the same error on every call")
}
})
}
func TestSyncApplication(t *testing.T) {
tests := []struct {
name string

@@ -164,7 +164,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
if err != nil {
return nil, fmt.Errorf("error fetching Gitlab token: %w", err)
}
provider, err = scm_provider.NewGitlabProvider(providerConfig.Group, token, providerConfig.API, providerConfig.AllBranches, providerConfig.IncludeSubgroups, providerConfig.WillIncludeSharedProjects(), providerConfig.Insecure, g.scmRootCAPath, providerConfig.Topic, caCerts)
provider, err = scm_provider.NewGitlabProvider(providerConfig.Group, token, providerConfig.API, providerConfig.AllBranches, providerConfig.IncludeSubgroups, providerConfig.WillIncludeSharedProjects(), providerConfig.IncludeArchivedRepos, providerConfig.Insecure, g.scmRootCAPath, providerConfig.Topic, caCerts)
if err != nil {
return nil, fmt.Errorf("error initializing Gitlab service: %w", err)
}
@@ -173,7 +173,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
if err != nil {
return nil, fmt.Errorf("error fetching Gitea token: %w", err)
}
provider, err = scm_provider.NewGiteaProvider(providerConfig.Gitea.Owner, token, providerConfig.Gitea.API, providerConfig.Gitea.AllBranches, providerConfig.Gitea.Insecure)
provider, err = scm_provider.NewGiteaProvider(providerConfig.Gitea.Owner, token, providerConfig.Gitea.API, providerConfig.Gitea.AllBranches, providerConfig.Gitea.Insecure, providerConfig.Gitea.ExcludeArchivedRepos)
if err != nil {
return nil, fmt.Errorf("error initializing Gitea service: %w", err)
}
@@ -289,9 +289,9 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
}
if g.enableGitHubAPIMetrics {
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, httpClient)
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, github.ExcludeArchivedRepos, httpClient)
}
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches)
return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, github.ExcludeArchivedRepos)
}
token, err := utils.GetSecretRef(ctx, g.client, github.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)
@@ -300,7 +300,7 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
}
if g.enableGitHubAPIMetrics {
return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, httpClient)
return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, github.ExcludeArchivedRepos, httpClient)
}
return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches)
return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, github.ExcludeArchivedRepos)
}

@@ -12,14 +12,15 @@ import (
)
type GiteaProvider struct {
client *gitea.Client
owner string
allBranches bool
client *gitea.Client
owner string
allBranches bool
excludeArchivedRepos bool
}
var _ SCMProviderService = &GiteaProvider{}
func NewGiteaProvider(owner, token, url string, allBranches, insecure bool) (*GiteaProvider, error) {
func NewGiteaProvider(owner, token, url string, allBranches, insecure, excludeArchivedRepos bool) (*GiteaProvider, error) {
if token == "" {
token = os.Getenv("GITEA_TOKEN")
}
@@ -40,9 +41,10 @@ func NewGiteaProvider(owner, token, url string, allBranches, insecure bool) (*Gi
return nil, fmt.Errorf("error creating a new gitea client: %w", err)
}
return &GiteaProvider{
client: client,
owner: owner,
allBranches: allBranches,
client: client,
owner: owner,
allBranches: allBranches,
excludeArchivedRepos: excludeArchivedRepos,
}, nil
}
@@ -114,6 +116,11 @@ func (g *GiteaProvider) ListRepos(_ context.Context, cloneProtocol string) ([]*R
for _, label := range giteaLabels {
labels = append(labels, label.Name)
}
if g.excludeArchivedRepos && repo.Archived {
continue
}
repos = append(repos, &Repository{
Organization: g.owner,
Repository: repo.Name,

@@ -100,17 +100,96 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
"mirror_interval": "",
"mirror_updated": "0001-01-01T00:00:00Z",
"repo_transfer": null
}]`)
},
{
"id": 21619,
"owner": {
"id": 31480,
"login": "test-argocd",
"full_name": "",
"email": "",
"avatar_url": "https://gitea.com/avatars/22d1b1d3f61abf95951c4a958731d848",
"language": "",
"is_admin": false,
"last_login": "0001-01-01T00:00:00Z",
"created": "2022-04-06T02:28:06+08:00",
"restricted": false,
"active": false,
"prohibit_login": false,
"location": "",
"website": "",
"description": "",
"visibility": "public",
"followers_count": 0,
"following_count": 0,
"starred_repos_count": 0,
"username": "test-argocd"
},
"name": "another-repo",
"full_name": "test-argocd/another-repo",
"description": "",
"empty": false,
"private": false,
"fork": false,
"template": false,
"parent": null,
"mirror": false,
"size": 28,
"language": "",
"languages_url": "https://gitea.com/api/v1/repos/test-argocd/another-repo/languages",
"html_url": "https://gitea.com/test-argocd/another-repo",
"ssh_url": "git@gitea.com:test-argocd/another-repo.git",
"clone_url": "https://gitea.com/test-argocd/another-repo.git",
"original_url": "",
"website": "",
"stars_count": 0,
"forks_count": 0,
"watchers_count": 1,
"open_issues_count": 0,
"open_pr_counter": 1,
"release_counter": 0,
"default_branch": "main",
"archived": true,
"created_at": "2022-04-06T02:32:09+08:00",
"updated_at": "2022-04-06T02:33:12+08:00",
"permissions": {
"admin": false,
"push": false,
"pull": true
},
"has_issues": true,
"internal_tracker": {
"enable_time_tracker": true,
"allow_only_contributors_to_track_time": true,
"enable_issue_dependencies": true
},
"has_wiki": true,
"has_pull_requests": true,
"has_projects": true,
"ignore_whitespace_conflicts": false,
"allow_merge_commits": true,
"allow_rebase": true,
"allow_rebase_explicit": true,
"allow_squash_merge": true,
"default_merge_style": "merge",
"avatar_url": "",
"internal": false,
"mirror_interval": "",
"mirror_updated": "0001-01-01T00:00:00Z",
"repo_transfer": null
}
]`)
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/pr-test/branches/main":
case "/api/v1/repos/test-argocd/another-repo/branches/main":
_, err := io.WriteString(w, `{
"name": "main",
"commit": {
"id": "72687815ccba81ef014a96201cc2e846a68789d8",
"id": "1fa33898cf84e89836863e3a5e76eee45777b4b0",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/72687815ccba81ef014a96201cc2e846a68789d8",
"url": "https://gitea.com/test-argocd/pr-test/commit/1fa33898cf84e89836863e3a5e76eee45777b4b0",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
@@ -144,13 +223,209 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/pr-test/branches/test":
_, err := io.WriteString(w, `{
"name": "test",
"commit": {
"id": "28c3b329933f6fefd9b55225535123bbffec5a46",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/28c3b329933f6fefd9b55225535123bbffec5a46",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"committer": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"verification": {
"verified": false,
"reason": "gpg.error.no_gpg_keys_found",
"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
"signer": null,
"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
},
"timestamp": "2022-04-05T14:29:51-04:00",
"added": null,
"removed": null,
"modified": null
},
"protected": false,
"required_approvals": 0,
"enable_status_check": false,
"status_check_contexts": [],
"user_can_push": false,
"user_can_merge": false,
"effective_branch_protection_name": ""
}`)
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/another-repo/branches/test":
_, err := io.WriteString(w, `{
"name": "test",
"commit": {
"id": "32cdcf613b259a9439ceabd4d1745d43f163ea70",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/another-repo/commit/32cdcf613b259a9439ceabd4d1745d43f163ea70",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"committer": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"verification": {
"verified": false,
"reason": "gpg.error.no_gpg_keys_found",
"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
"signer": null,
"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
},
"timestamp": "2022-04-05T14:29:51-04:00",
"added": null,
"removed": null,
"modified": null
},
"protected": false,
"required_approvals": 0,
"enable_status_check": false,
"status_check_contexts": [],
"user_can_push": false,
"user_can_merge": false,
"effective_branch_protection_name": ""
}`)
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/pr-test/branches/main":
_, err := io.WriteString(w, `{
"name": "main",
"commit": {
"id": "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/75f6fceff80f6aaf12b65a2cf6a89190b866625b",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"committer": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"verification": {
"verified": false,
"reason": "gpg.error.no_gpg_keys_found",
"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
"signer": null,
"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
},
"timestamp": "2022-04-05T14:29:51-04:00",
"added": null,
"removed": null,
"modified": null
},
"protected": false,
"required_approvals": 0,
"enable_status_check": false,
"status_check_contexts": [],
"user_can_push": false,
"user_can_merge": false,
"effective_branch_protection_name": ""
}`)
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/another-repo/branches?limit=0&page=1":
_, err := io.WriteString(w, `[{
"name": "main",
"commit": {
"id": "1fa33898cf84e89836863e3a5e76eee45777b4b0",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/1fa33898cf84e89836863e3a5e76eee45777b4b0",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"committer": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"verification": {
"verified": false,
"reason": "gpg.error.no_gpg_keys_found",
"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
"signer": null,
"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
},
"timestamp": "2022-04-05T14:29:51-04:00",
"added": null,
"removed": null,
"modified": null
},
"protected": false,
"required_approvals": 0,
"enable_status_check": false,
"status_check_contexts": [],
"user_can_push": false,
"user_can_merge": false,
"effective_branch_protection_name": ""
},
{
"name": "test",
"commit": {
"id": "32cdcf613b259a9439ceabd4d1745d43f163ea70",
"message": "add an empty file\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/32cdcf613b259a9439ceabd4d1745d43f163ea70",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"committer": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
"username": "graytshirt"
},
"verification": {
"verified": false,
"reason": "gpg.error.no_gpg_keys_found",
"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiugACgkQlgCr7m50\nzBN+7wgAkCHD3KfX3Ffkqv2qPwqgHNYM1bA6Hmffzhv0YeD9jWCI3tp0JulP4iFZ\ncQ7jqx9xP9tCQMSFCaijLRHaE6Js1xrVtf0OKRkbpdlvkyrIM3sQhqyQgAsISrDG\nLzSqeoQQjglzeWESYh2Tjn1CgqQNKjI6LLepSwvF1pIeV4pJpJobaEbIfTgStdzM\nWEk8o0I+EZaYqK0C0vU9N0LK/LR/jnlaHsb4OUjvk+S7lRjZwBkrsg7P/QsqtCVd\nw5nkxDiCx1J58zKMnQ7ZinJEK9A5WYdnMYc6aBn7ARgZrblXPPBkkKUhEv3ZSPeW\nKv9i4GQy838xkVSTFkHNj1+a5o6zEA==\n=JiFw\n-----END PGP SIGNATURE-----\n",
"signer": null,
"payload": "tree cdddf3e1d6a8a7e6899a044d0e1bc73bf798e2f5\nparent 72687815ccba81ef014a96201cc2e846a68789d8\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183458 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183458 -0400\n\nadd an empty file\n"
},
"timestamp": "2022-04-05T14:30:58-04:00",
"added": null,
"removed": null,
"modified": null
},
"protected": false,
"required_approvals": 0,
"enable_status_check": false,
"status_check_contexts": [],
"user_can_push": false,
"user_can_merge": false,
"effective_branch_protection_name": ""
}]`)
if err != nil {
t.Fail()
}
case "/api/v1/repos/test-argocd/pr-test/branches?limit=0&page=1":
_, err := io.WriteString(w, `[{
"name": "main",
"commit": {
"id": "72687815ccba81ef014a96201cc2e846a68789d8",
"id": "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
"message": "initial commit\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/72687815ccba81ef014a96201cc2e846a68789d8",
"url": "https://gitea.com/test-argocd/pr-test/commit/75f6fceff80f6aaf12b65a2cf6a89190b866625b",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
@@ -183,9 +458,9 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
}, {
"name": "test",
"commit": {
"id": "7bbaf62d92ddfafd9cc8b340c619abaec32bc09f",
"id": "28c3b329933f6fefd9b55225535123bbffec5a46",
"message": "add an empty file\n",
"url": "https://gitea.com/test-argocd/pr-test/commit/7bbaf62d92ddfafd9cc8b340c619abaec32bc09f",
"url": "https://gitea.com/test-argocd/pr-test/commit/28c3b329933f6fefd9b55225535123bbffec5a46",
"author": {
"name": "Dan Molik",
"email": "dan@danmolik.com",
@@ -261,40 +536,270 @@ func TestGiteaListRepos(t *testing.T) {
func TestGiteaListRepos(t *testing.T) {
cases := []struct {
name, proto, url string
name, proto string
hasError, allBranches, includeSubgroups bool
excludeArchivedRepos bool
branches []string
expectedRepos []*Repository
filters []v1alpha1.SCMProviderGeneratorFilter
}{
{
name: "blank protocol",
allBranches: false,
url: "git@gitea.com:test-argocd/pr-test.git",
branches: []string{"main"},
name: "blank protocol",
allBranches: false,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"main"},
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
RepositoryId: 21618,
Labels: []string{},
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "main",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "1fa33898cf84e89836863e3a5e76eee45777b4b0",
RepositoryId: 21619,
Labels: []string{},
},
},
},
{
name: "ssh protocol",
allBranches: false,
proto: "ssh",
url: "git@gitea.com:test-argocd/pr-test.git",
name: "ssh protocol",
allBranches: false,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
proto: "ssh",
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
RepositoryId: 21618,
Labels: []string{},
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "main",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "1fa33898cf84e89836863e3a5e76eee45777b4b0",
RepositoryId: 21619,
Labels: []string{},
},
},
},
{
name: "https protocol",
allBranches: false,
proto: "https",
url: "https://gitea.com/test-argocd/pr-test",
name: "https protocol",
allBranches: false,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
proto: "https",
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "https://gitea.com/test-argocd/pr-test",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
RepositoryId: 21618,
Labels: []string{},
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "main",
URL: "https://gitea.com/test-argocd/another-repo",
SHA: "1fa33898cf84e89836863e3a5e76eee45777b4b0",
RepositoryId: 21619,
Labels: []string{},
},
},
},
{
name: "other protocol",
allBranches: false,
proto: "other",
hasError: true,
name: "other protocol",
allBranches: false,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
proto: "other",
hasError: true,
expectedRepos: []*Repository{},
},
{
name: "all branches",
allBranches: true,
url: "git@gitea.com:test-argocd/pr-test.git",
branches: []string{"main"},
name: "all branches including archived repos",
allBranches: true,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
Labels: []string{},
RepositoryId: 21618,
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "main",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "1fa33898cf84e89836863e3a5e76eee45777b4b0",
Labels: []string{},
RepositoryId: 21619,
},
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "test",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "28c3b329933f6fefd9b55225535123bbffec5a46",
Labels: []string{},
RepositoryId: 21618,
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "test",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "32cdcf613b259a9439ceabd4d1745d43f163ea70",
Labels: []string{},
RepositoryId: 21619,
},
},
},
{
name: "all branches",
allBranches: true,
excludeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
Labels: []string{},
RepositoryId: 21618,
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "main",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "1fa33898cf84e89836863e3a5e76eee45777b4b0",
Labels: []string{},
RepositoryId: 21619,
},
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "test",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "28c3b329933f6fefd9b55225535123bbffec5a46",
Labels: []string{},
RepositoryId: 21618,
},
{
Organization: "test-argocd",
Repository: "another-repo",
Branch: "test",
URL: "git@gitea.com:test-argocd/another-repo.git",
SHA: "32cdcf613b259a9439ceabd4d1745d43f163ea70",
Labels: []string{},
RepositoryId: 21619,
},
},
},
{
name: "all branches with no archived repos",
allBranches: true,
excludeArchivedRepos: true,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"main"},
expectedRepos: []*Repository{
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "main",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
Labels: []string{},
RepositoryId: 21618,
},
{
Organization: "test-argocd",
Repository: "pr-test",
Branch: "test",
URL: "git@gitea.com:test-argocd/pr-test.git",
SHA: "28c3b329933f6fefd9b55225535123bbffec5a46",
Labels: []string{},
RepositoryId: 21618,
},
},
},
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -303,26 +808,19 @@ func TestGiteaListRepos(t *testing.T) {
defer ts.Close()
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
provider, _ := NewGiteaProvider("test-argocd", "", ts.URL, c.allBranches, false)
provider, _ := NewGiteaProvider("test-argocd", "", ts.URL, c.allBranches, false, c.excludeArchivedRepos)
rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)
if c.hasError {
require.Error(t, err)
} else {
require.NoError(t, err)
// Just check that this one project shows up. Not a great test but better than nothing?
repos := []*Repository{}
branches := []string{}
for _, r := range rawRepos {
if r.Repository == "pr-test" {
repos = append(repos, r)
branches = append(branches, r.Branch)
}
}
repos = append(rawRepos, repos...)
assert.NotEmpty(t, repos)
assert.Equal(t, c.url, repos[0].URL)
for _, b := range c.branches {
assert.Contains(t, branches, b)
}
assert.Len(t, repos, len(c.expectedRepos))
assert.ElementsMatch(t, c.expectedRepos, repos)
}
})
}
@@ -333,7 +831,7 @@ func TestGiteaHasPath(t *testing.T) {
giteaMockHandler(t)(w, r)
}))
defer ts.Close()
host, _ := NewGiteaProvider("gitea", "", ts.URL, false, false)
host, _ := NewGiteaProvider("gitea", "", ts.URL, false, false, false)
repo := &Repository{
Organization: "gitea",
Repository: "go-sdk",

@@ -12,14 +12,15 @@ import (
)
type GithubProvider struct {
client *github.Client
organization string
allBranches bool
client *github.Client
organization string
allBranches bool
excludeArchivedRepos bool
}
var _ SCMProviderService = &GithubProvider{}
func NewGithubProvider(organization string, token string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
func NewGithubProvider(organization string, token string, url string, allBranches bool, excludeArchivedRepos bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
// Undocumented environment variable to set a default token, to be used in testing to dodge anonymous rate limits.
if token == "" {
token = os.Getenv("GITHUB_TOKEN")
@@ -45,7 +46,7 @@ func NewGithubProvider(organization string, token string, url string, allBranche
return nil, err
}
}
return &GithubProvider{client: client, organization: organization, allBranches: allBranches}, nil
return &GithubProvider{client: client, organization: organization, allBranches: allBranches, excludeArchivedRepos: excludeArchivedRepos}, nil
}
func (g *GithubProvider) GetBranches(ctx context.Context, repo *Repository) ([]*Repository, error) {
@@ -90,6 +91,11 @@ func (g *GithubProvider) ListRepos(ctx context.Context, cloneProtocol string) ([
default:
return nil, fmt.Errorf("unknown clone protocol for GitHub %v", cloneProtocol)
}
if g.excludeArchivedRepos && githubRepo.GetArchived() {
continue
}
repos = append(repos, &Repository{
Organization: githubRepo.Owner.GetLogin(),
Repository: githubRepo.GetName(),

@@ -9,11 +9,11 @@ import (
appsetutils "github.com/argoproj/argo-cd/v3/applicationset/utils"
)
func NewGithubAppProviderFor(ctx context.Context, g github_app_auth.Authentication, organization string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
func NewGithubAppProviderFor(ctx context.Context, g github_app_auth.Authentication, organization string, url string, allBranches bool, excludeArchivedRepos bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
httpClient := appsetutils.GetOptionalHTTPClient(optionalHTTPClient...)
client, err := github_app.Client(ctx, g, url, organization, httpClient)
if err != nil {
return nil, err
}
return &GithubProvider{client: client, organization: organization, allBranches: allBranches}, nil
return &GithubProvider{client: client, organization: organization, allBranches: allBranches, excludeArchivedRepos: excludeArchivedRepos}, nil
}

@@ -122,6 +122,110 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
"pull": true
},
"template_repository": null
},
{
"id": 1296270,
"node_id": "MDEwOlJlcGsddRvcnkxMjk2MjY5",
"name": "another-repo",
"full_name": "argoproj/another-repo",
"owner": {
"login": "argoproj",
"id": 1,
"node_id": "MDQ6VXNlcjE=",
"avatar_url": "https://github.com/images/error/argoproj_happy.gif",
"gravatar_id": "",
"url": "https://api.github.com/users/argoproj",
"html_url": "https://github.com/argoproj",
"followers_url": "https://api.github.com/users/argoproj/followers",
"following_url": "https://api.github.com/users/argoproj/following{/other_user}",
"gists_url": "https://api.github.com/users/argoproj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/argoproj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/argoproj/subscriptions",
"organizations_url": "https://api.github.com/users/argoproj/orgs",
"repos_url": "https://api.github.com/users/argoproj/repos",
"events_url": "https://api.github.com/users/argoproj/events{/privacy}",
"received_events_url": "https://api.github.com/users/argoproj/received_events",
"type": "User",
"site_admin": false
},
"private": false,
"html_url": "https://github.com/argoproj/another-repo",
"description": "This your first repo!",
"fork": false,
"url": "https://api.github.com/repos/argoproj/another-repo",
"archive_url": "https://api.github.com/repos/argoproj/another-repo/{archive_format}{/ref}",
"assignees_url": "https://api.github.com/repos/argoproj/another-repo/assignees{/user}",
"blobs_url": "https://api.github.com/repos/argoproj/another-repo/git/blobs{/sha}",
"branches_url": "https://api.github.com/repos/argoproj/another-repo/branches{/branch}",
"collaborators_url": "https://api.github.com/repos/argoproj/another-repo/collaborators{/collaborator}",
"comments_url": "https://api.github.com/repos/argoproj/another-repo/comments{/number}",
"commits_url": "https://api.github.com/repos/argoproj/another-repo/commits{/sha}",
"compare_url": "https://api.github.com/repos/argoproj/another-repo/compare/{base}...{head}",
"contents_url": "https://api.github.com/repos/argoproj/another-repo/contents/{path}",
"contributors_url": "https://api.github.com/repos/argoproj/another-repo/contributors",
"deployments_url": "https://api.github.com/repos/argoproj/another-repo/deployments",
"downloads_url": "https://api.github.com/repos/argoproj/another-repo/downloads",
"events_url": "https://api.github.com/repos/argoproj/another-repo/events",
"forks_url": "https://api.github.com/repos/argoproj/another-repo/forks",
"git_commits_url": "https://api.github.com/repos/argoproj/another-repo/git/commits{/sha}",
"git_refs_url": "https://api.github.com/repos/argoproj/another-repo/git/refs{/sha}",
"git_tags_url": "https://api.github.com/repos/argoproj/another-repo/git/tags{/sha}",
"git_url": "git:github.com/argoproj/another-repo.git",
"issue_comment_url": "https://api.github.com/repos/argoproj/another-repo/issues/comments{/number}",
"issue_events_url": "https://api.github.com/repos/argoproj/another-repo/issues/events{/number}",
"issues_url": "https://api.github.com/repos/argoproj/another-repo/issues{/number}",
"keys_url": "https://api.github.com/repos/argoproj/another-repo/keys{/key_id}",
"labels_url": "https://api.github.com/repos/argoproj/another-repo/labels{/name}",
"languages_url": "https://api.github.com/repos/argoproj/another-repo/languages",
"merges_url": "https://api.github.com/repos/argoproj/another-repo/merges",
"milestones_url": "https://api.github.com/repos/argoproj/another-repo/milestones{/number}",
"notifications_url": "https://api.github.com/repos/argoproj/another-repo/notifications{?since,all,participating}",
"pulls_url": "https://api.github.com/repos/argoproj/another-repo/pulls{/number}",
"releases_url": "https://api.github.com/repos/argoproj/another-repo/releases{/id}",
"ssh_url": "git@github.com:argoproj/another-repo.git",
"stargazers_url": "https://api.github.com/repos/argoproj/another-repo/stargazers",
"statuses_url": "https://api.github.com/repos/argoproj/another-repo/statuses/{sha}",
"subscribers_url": "https://api.github.com/repos/argoproj/another-repo/subscribers",
"subscription_url": "https://api.github.com/repos/argoproj/another-repo/subscription",
"tags_url": "https://api.github.com/repos/argoproj/another-repo/tags",
"teams_url": "https://api.github.com/repos/argoproj/another-repo/teams",
"trees_url": "https://api.github.com/repos/argoproj/another-repo/git/trees{/sha}",
"clone_url": "https://github.com/argoproj/another-repo.git",
"mirror_url": "git:git.example.com/argoproj/another-repo",
"hooks_url": "https://api.github.com/repos/argoproj/another-repo/hooks",
"svn_url": "https://svn.github.com/argoproj/another-repo",
"homepage": "https://github.com",
"language": null,
"forks_count": 9,
"stargazers_count": 80,
"watchers_count": 80,
"size": 108,
"default_branch": "master",
"open_issues_count": 0,
"is_template": false,
"topics": [
"argoproj",
"atom",
"electron",
"api"
],
"has_issues": true,
"has_projects": true,
"has_wiki": true,
"has_pages": false,
"has_downloads": true,
"archived": true,
"disabled": false,
"visibility": "public",
"pushed_at": "2011-01-26T19:06:43Z",
"created_at": "2011-01-26T19:01:12Z",
"updated_at": "2011-01-26T19:14:43Z",
"permissions": {
"admin": false,
"push": false,
"pull": true
},
"template_repository": null
}
]`)
if err != nil {
@ -146,12 +250,55 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
}
},
"protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
},
{
"name": "test",
"commit": {
"sha": "80a6e93f16e8093e24091b03c614362df3fb9b92",
"url": "https://api.github.com/repos/argoproj/argo-cd/commits/80a6e93f16e8093e24091b03c614362df3fb9b92"
},
"protected": true,
"protection": {
"required_status_checks": {
"enforcement_level": "non_admins",
"contexts": [
"ci-test",
"linter"
]
}
},
"protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
}
]
`)
if err != nil {
t.Fail()
}
case "/api/v3/repos/argoproj/another-repo/branches?per_page=100":
_, err := io.WriteString(w, `[
{
"name": "main",
"commit": {
"sha": "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
"url": "https://api.github.com/repos/argoproj/another-repo/commits/19b016818bc0e0a44ddeaab345838a2a6c97fa67"
},
"protected": true,
"protection": {
"required_status_checks": {
"enforcement_level": "non_admins",
"contexts": [
"ci-test",
"linter"
]
}
},
"protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
}
]
`)
if err != nil {
t.Fail()
}
case "/api/v3/repos/argoproj/argo-cd/contents/pkg?ref=master":
_, err := io.WriteString(w, `{
"type": "file",
@ -196,6 +343,50 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v3/repos/argoproj/argo-cd/branches/test":
_, err := io.WriteString(w, `{
"name": "test",
"commit": {
"sha": "80a6e93f16e8093e24091b03c614362df3fb9b92",
"url": "https://api.github.com/repos/octocat/Hello-World/commits/80a6e93f16e8093e24091b03c614362df3fb9b92"
},
"protected": true,
"protection": {
"required_status_checks": {
"enforcement_level": "non_admins",
"contexts": [
"ci-test",
"linter"
]
}
},
"protection_url": "https://api.github.com/repos/octocat/hello-world/branches/test/protection"
}`)
if err != nil {
t.Fail()
}
case "/api/v3/repos/argoproj/another-repo/branches/main":
_, err := io.WriteString(w, `{
"name": "main",
"commit": {
"sha": "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
"url": "https://api.github.com/repos/octocat/Hello-World/commits/c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc"
},
"protected": true,
"protection": {
"required_status_checks": {
"enforcement_level": "non_admins",
"contexts": [
"ci-test",
"linter"
]
}
},
"protection_url": "https://api.github.com/repos/octocat/hello-world/branches/master/protection"
}`)
if err != nil {
t.Fail()
}
default:
w.WriteHeader(http.StatusNotFound)
}
@ -203,37 +394,276 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
}
func TestGithubListRepos(t *testing.T) {
idptr := func(i int64) *int64 {
return &i
}
// Test cases for ListRepos
cases := []struct {
name, proto, url string
name, proto string
hasError, allBranches bool
branches []string
excludeArchivedRepos bool
expectedRepos []*Repository
filters []v1alpha1.SCMProviderGeneratorFilter
}{
{
name: "blank protocol",
url: "git@github.com:argoproj/argo-cd.git",
branches: []string{"master"},
name: "blank protocol",
allBranches: true,
excludeArchivedRepos: false,
expectedRepos: []*Repository{
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "master",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "test",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "80a6e93f16e8093e24091b03c614362df3fb9b92",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "another-repo",
Branch: "main",
URL: "git@github.com:argoproj/another-repo.git",
SHA: "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296270),
},
},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
{
name: "ssh protocol",
proto: "ssh",
url: "git@github.com:argoproj/argo-cd.git",
name: "ssh protocol",
proto: "ssh",
allBranches: true,
excludeArchivedRepos: false,
expectedRepos: []*Repository{
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "master",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "test",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "80a6e93f16e8093e24091b03c614362df3fb9b92",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "another-repo",
Branch: "main",
URL: "git@github.com:argoproj/another-repo.git",
SHA: "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296270),
},
},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
{
name: "https protocol",
proto: "https",
url: "https://github.com/argoproj/argo-cd.git",
name: "https protocol",
proto: "https",
allBranches: true,
excludeArchivedRepos: false,
expectedRepos: []*Repository{
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "master",
URL: "https://github.com/argoproj/argo-cd.git",
SHA: "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "test",
URL: "https://github.com/argoproj/argo-cd.git",
SHA: "80a6e93f16e8093e24091b03c614362df3fb9b92",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "another-repo",
Branch: "main",
URL: "https://github.com/argoproj/another-repo.git",
SHA: "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296270),
},
},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
{
name: "other protocol",
proto: "other",
hasError: true,
name: "other protocol",
proto: "other",
hasError: true,
excludeArchivedRepos: false,
expectedRepos: []*Repository{},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
{
name: "all branches",
allBranches: true,
url: "git@github.com:argoproj/argo-cd.git",
branches: []string{"master"},
name: "all branches with archived repos",
allBranches: true,
proto: "ssh",
excludeArchivedRepos: false,
expectedRepos: []*Repository{
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "master",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "test",
URL: "git@github.com:argoproj/argo-cd.git",
SHA: "80a6e93f16e8093e24091b03c614362df3fb9b92",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "another-repo",
Branch: "main",
URL: "git@github.com:argoproj/another-repo.git",
SHA: "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296270),
},
},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
{
name: "test repo all branches without archived repos",
allBranches: true,
excludeArchivedRepos: true,
proto: "https",
expectedRepos: []*Repository{
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "master",
URL: "https://github.com/argoproj/argo-cd.git",
SHA: "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
{
Organization: "argoproj",
Repository: "argo-cd",
Branch: "test",
URL: "https://github.com/argoproj/argo-cd.git",
SHA: "80a6e93f16e8093e24091b03c614362df3fb9b92",
Labels: []string{
"argoproj",
"atom",
"electron",
"api",
},
RepositoryId: idptr(1296269),
},
},
filters: []v1alpha1.SCMProviderGeneratorFilter{
{},
},
},
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -242,26 +672,18 @@ func TestGithubListRepos(t *testing.T) {
defer ts.Close()
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
provider, _ := NewGithubProvider("argoproj", "", ts.URL, c.allBranches)
provider, _ := NewGithubProvider("argoproj", "", ts.URL, c.allBranches, c.excludeArchivedRepos)
rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)
if c.hasError {
require.Error(t, err)
} else {
require.NoError(t, err)
// Just check that this one project shows up. Not a great test but better than nothing?
repos := []*Repository{}
branches := []string{}
for _, r := range rawRepos {
if r.Repository == "argo-cd" {
repos = append(repos, r)
branches = append(branches, r.Branch)
}
}
repos = append(rawRepos, repos...)
assert.NotEmpty(t, repos)
assert.Equal(t, c.url, repos[0].URL)
for _, b := range c.branches {
assert.Contains(t, branches, b)
}
assert.Len(t, repos, len(c.expectedRepos))
assert.ElementsMatch(t, c.expectedRepos, repos)
}
})
}
@ -280,7 +702,7 @@ func TestGithubHasPath(t *testing.T) {
githubMockHandler(t)(w, r)
}))
defer ts.Close()
host, _ := NewGithubProvider("argoproj", "", ts.URL, false)
host, _ := NewGithubProvider("argoproj", "", ts.URL, false, false)
repo := &Repository{
Organization: "argoproj",
Repository: "argo-cd",
@ -300,7 +722,7 @@ func TestGithubGetBranches(t *testing.T) {
githubMockHandler(t)(w, r)
}))
defer ts.Close()
host, _ := NewGithubProvider("argoproj", "", ts.URL, false)
host, _ := NewGithubProvider("argoproj", "", ts.URL, false, false)
repo := &Repository{
Organization: "argoproj",
Repository: "argo-cd",
@ -328,6 +750,6 @@ func TestGithubGetBranches(t *testing.T) {
require.NoError(t, err)
} else {
// considering master branch to exist.
assert.Len(t, repos, 1)
assert.Len(t, repos, 2)
}
}


@ -19,12 +19,13 @@ type GitlabProvider struct {
allBranches bool
includeSubgroups bool
includeSharedProjects bool
includeArchivedRepos bool
topic string
}
var _ SCMProviderService = &GitlabProvider{}
func NewGitlabProvider(organization string, token string, url string, allBranches, includeSubgroups, includeSharedProjects, insecure bool, scmRootCAPath, topic string, caCerts []byte) (*GitlabProvider, error) {
func NewGitlabProvider(organization string, token string, url string, allBranches, includeSubgroups, includeSharedProjects, includeArchivedRepos, insecure bool, scmRootCAPath, topic string, caCerts []byte) (*GitlabProvider, error) {
// Undocumented environment variable to set a default token, to be used in testing to dodge anonymous rate limits.
if token == "" {
token = os.Getenv("GITLAB_TOKEN")
@ -51,7 +52,15 @@ func NewGitlabProvider(organization string, token string, url string, allBranche
}
}
return &GitlabProvider{client: client, organization: organization, allBranches: allBranches, includeSubgroups: includeSubgroups, includeSharedProjects: includeSharedProjects, topic: topic}, nil
return &GitlabProvider{
client: client,
organization: organization,
allBranches: allBranches,
includeSubgroups: includeSubgroups,
includeSharedProjects: includeSharedProjects,
includeArchivedRepos: includeArchivedRepos,
topic: topic,
}, nil
}
func (g *GitlabProvider) GetBranches(ctx context.Context, repo *Repository) ([]*Repository, error) {
@ -88,6 +97,11 @@ func (g *GitlabProvider) ListRepos(_ context.Context, cloneProtocol string) ([]*
Topic: &g.topic,
}
	// GitLab does not include archived repos by default
if g.includeArchivedRepos {
opt.Archived = gitlab.Ptr(true)
}
repos := []*Repository{}
for {
gitlabRepos, resp, err := g.client.Groups.ListGroupProjects(g.organization, opt)


@ -3,7 +3,6 @@ package scm_provider
import (
"crypto/x509"
"encoding/pem"
"fmt"
"io"
"net/http"
"net/http/httptest"
@ -19,12 +18,9 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Helper()
return func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
fmt.Println(r.RequestURI)
switch r.RequestURI {
case "/api/v4":
fmt.Println("here1")
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=&with_shared=false":
fmt.Println("here")
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=&with_shared=false", "/api/v4/groups/test-argocd-proton/projects?archived=false&include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?archived=false&include_subgroups=false&per_page=100&topic=&with_shared=false":
_, err := io.WriteString(w, `[{
"id": 27084533,
"description": "",
@ -151,8 +147,253 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v4/groups/test-argocd-proton/projects?archived=true&include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?archived=true&include_subgroups=false&per_page=100&topic=&with_shared=false":
_, err := io.WriteString(w, `[{
"id": 27084533,
"description": "",
"name": "argocd",
"name_with_namespace": "test argocd proton / argocd",
"path": "argocd",
"path_with_namespace": "test-argocd-proton/argocd",
"created_at": "2021-06-01T17:30:44.724Z",
"default_branch": "master",
"tag_list": [],
"topics": [],
"ssh_url_to_repo": "git@gitlab.com:test-argocd-proton/argocd.git",
"http_url_to_repo": "https://gitlab.com/test-argocd-proton/argocd.git",
"web_url": "https://gitlab.com/test-argocd-proton/argocd",
"readme_url": null,
"avatar_url": null,
"forks_count": 0,
"star_count": 0,
"last_activity_at": "2021-06-04T08:19:51.656Z",
"namespace": {
"id": 12258515,
"name": "test argocd proton",
"path": "test-argocd-proton",
"kind": "gro* Connection #0 to host gitlab.com left intact up ",
"full_path ": "test - argocd - proton ",
"parent_id ": null,
"avatar_url ": null,
"web_url ": "https: //gitlab.com/groups/test-argocd-proton"
},
"container_registry_image_prefix": "registry.gitlab.com/test-argocd-proton/argocd",
"_links": {
"self": "https://gitlab.com/api/v4/projects/27084533",
"issues": "https://gitlab.com/api/v4/projects/27084533/issues",
"merge_requests": "https://gitlab.com/api/v4/projects/27084533/merge_requests",
"repo_branches": "https://gitlab.com/api/v4/projects/27084533/repository/branches",
"labels": "https://gitlab.com/api/v4/projects/27084533/labels",
"events": "https://gitlab.com/api/v4/projects/27084533/events",
"members": "https://gitlab.com/api/v4/projects/27084533/members",
"cluster_agents": "https://gitlab.com/api/v4/projects/27084533/cluster_agents"
},
"packages_enabled": true,
"empty_repo": false,
"archived": false,
"visibility": "public",
"resolve_outdated_diff_discussions": false,
"container_expiration_policy": {
"cadence": "1d",
"enabled": false,
"keep_n": 10,
"older_than": "90d",
"name_regex": ".*",
"name_regex_keep": null,
"next_run_at": "2021-06-02T17:30:44.740Z"
},
"issues_enabled": true,
"merge_requests_enabled": true,
"wiki_enabled": true,
"jobs_enabled": true,
"snippets_enabled": true,
"container_registry_enabled": true,
"service_desk_enabled": true,
"can_create_merge_request_in": false,
"issues_access_level": "enabled",
"repository_access_level": "enabled",
"merge_requests_access_level": "enabled",
"forking_access_level": "enabled",
"wiki_access_level": "enabled",
"builds_access_level": "enabled",
"snippets_access_level": "enabled",
"pages_access_level": "enabled",
"operations_access_level": "enabled",
"analytics_access_level": "enabled",
"container_registry_access_level": "enabled",
"security_and_compliance_access_level": "private",
"emails_disabled": null,
"shared_runners_enabled": true,
"lfs_enabled": true,
"creator_id": 2378866,
"import_status": "none",
"open_issues_count": 0,
"ci_default_git_depth": 50,
"ci_forward_deployment_enabled": true,
"ci_job_token_scope_enabled": false,
"public_jobs": true,
"build_timeout": 3600,
"auto_cancel_pending_pipelines": "enabled",
"ci_config_path": "",
"shared_with_groups": [],
"only_allow_merge_if_pipeline_succeeds": false,
"allow_merge_on_skipped_pipeline": null,
"restrict_user_defined_variables": false,
"request_access_enabled": true,
"only_allow_merge_if_all_discussions_are_resolved": false,
"remove_source_branch_after_merge": true,
"printing_merge_request_link_enabled": true,
"merge_method": "merge",
"squash_option": "default_off",
"suggestion_commit_message": null,
"merge_commit_template": null,
"squash_commit_template": null,
"auto_devops_enabled": false,
"auto_devops_deploy_strategy": "continuous",
"autoclose_referenced_issues": true,
"keep_latest_artifact": true,
"runner_token_expiration_interval": null,
"approvals_before_merge": 0,
"mirror": false,
"external_authorization_classification_label": "",
"marked_for_deletion_at": null,
"marked_for_deletion_on": null,
"requirements_enabled": true,
"requirements_access_level": "enabled",
"security_and_compliance_enabled": false,
"compliance_frameworks": [],
"issues_template": null,
"merge_requests_template": null,
"merge_pipelines_enabled": false,
"merge_trains_enabled": false
},
{
"id": 56522142,
"description": "",
"name": "another-repo",
"name_with_namespace": "test argocd proton / another-repo",
"path": "another-repo",
"path_with_namespace": "test-argocd-proton/another-repo",
"created_at": "2022-09-13T12:10:14.722Z",
"default_branch": "master",
"tag_list": [
"test-topic"
],
"topics": [
"test-topic"
],
"ssh_url_to_repo": "git@gitlab.com:test-argocd-proton/another-repo.git",
"http_url_to_repo": "https://gitlab.com/test-argocd-proton/another-repo.git",
"web_url": "https://gitlab.com/test-argocd-proton/another-repo",
"readme_url": null,
"avatar_url": null,
"forks_count": 0,
"star_count": 0,
"last_activity_at": "2021-06-04T08:19:51.656Z",
"namespace": {
"id": 12258515,
"name": "test argocd proton",
"path": "test-argocd-proton",
"kind": "gro* Connection #0 to host gitlab.com left intact up ",
"full_path ": "test - argocd - proton ",
"parent_id ": null,
"avatar_url ": null,
"web_url ": "https: //gitlab.com/groups/test-argocd-proton"
},
"container_registry_image_prefix": "registry.gitlab.com/test-argocd-proton/another-repo",
"_links": {
"self": "https://gitlab.com/api/v4/projects/56522142",
"issues": "https://gitlab.com/api/v4/projects/56522142/issues",
"merge_requests": "https://gitlab.com/api/v4/projects/56522142/merge_requests",
"repo_branches": "https://gitlab.com/api/v4/projects/56522142/repository/branches",
"labels": "https://gitlab.com/api/v4/projects/56522142/labels",
"events": "https://gitlab.com/api/v4/projects/56522142/events",
"members": "https://gitlab.com/api/v4/projects/56522142/members",
"cluster_agents": "https://gitlab.com/api/v4/projects/56522142/cluster_agents"
},
"packages_enabled": true,
"empty_repo": false,
"archived": true,
"visibility": "public",
"resolve_outdated_diff_discussions": false,
"container_expiration_policy": {
"cadence": "1d",
"enabled": false,
"keep_n": 10,
"older_than": "90d",
"name_regex": ".*",
"name_regex_keep": null,
"next_run_at": "2021-06-02T17:30:44.740Z"
},
"issues_enabled": true,
"merge_requests_enabled": true,
"wiki_enabled": true,
"jobs_enabled": true,
"snippets_enabled": true,
"container_registry_enabled": true,
"service_desk_enabled": true,
"can_create_merge_request_in": false,
"issues_access_level": "enabled",
"repository_access_level": "enabled",
"merge_requests_access_level": "enabled",
"forking_access_level": "enabled",
"wiki_access_level": "enabled",
"builds_access_level": "enabled",
"snippets_access_level": "enabled",
"pages_access_level": "enabled",
"operations_access_level": "enabled",
"analytics_access_level": "enabled",
"container_registry_access_level": "enabled",
"security_and_compliance_access_level": "private",
"emails_disabled": null,
"shared_runners_enabled": true,
"lfs_enabled": true,
"creator_id": 2378866,
"import_status": "none",
"open_issues_count": 0,
"ci_default_git_depth": 50,
"ci_forward_deployment_enabled": true,
"ci_job_token_scope_enabled": false,
"public_jobs": true,
"build_timeout": 3600,
"auto_cancel_pending_pipelines": "enabled",
"ci_config_path": "",
"shared_with_groups": [],
"only_allow_merge_if_pipeline_succeeds": false,
"allow_merge_on_skipped_pipeline": null,
"restrict_user_defined_variables": false,
"request_access_enabled": true,
"only_allow_merge_if_all_discussions_are_resolved": false,
"remove_source_branch_after_merge": true,
"printing_merge_request_link_enabled": true,
"merge_method": "merge",
"squash_option": "default_off",
"suggestion_commit_message": null,
"merge_commit_template": null,
"squash_commit_template": null,
"auto_devops_enabled": false,
"auto_devops_deploy_strategy": "continuous",
"autoclose_referenced_issues": true,
"keep_latest_artifact": true,
"runner_token_expiration_interval": null,
"approvals_before_merge": 0,
"mirror": false,
"external_authorization_classification_label": "",
"marked_for_deletion_at": null,
"marked_for_deletion_on": null,
"requirements_enabled": true,
"requirements_access_level": "enabled",
"security_and_compliance_enabled": false,
"compliance_frameworks": [],
"issues_template": null,
"merge_requests_template": null,
"merge_pipelines_enabled": false,
"merge_trains_enabled": false
}]`)
if err != nil {
t.Fail()
}
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=true&per_page=100&topic=&with_shared=false":
fmt.Println("here")
_, err := io.WriteString(w, `[{
"id": 27084533,
"description": "",
@ -406,7 +647,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Fail()
}
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=specific-topic&with_shared=false":
fmt.Println("here")
_, err := io.WriteString(w, `[{
"id": 27084533,
"description": "",
@ -537,7 +777,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Fail()
}
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=true&per_page=100&topic=&with_shared=true":
fmt.Println("here")
_, err := io.WriteString(w, `[{
"id": 27084533,
"description": "",
@ -796,7 +1035,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
t.Fail()
}
case "/api/v4/projects/27084533/repository/branches/master":
fmt.Println("returning")
_, err := io.WriteString(w, `{
"name": "master",
"commit": {
@ -826,6 +1064,36 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v4/projects/56522142/repository/branches/master":
_, err := io.WriteString(w, `{
"name": "master",
"commit": {
"id": "9998d7999fc99dd0fd578650b58b244fc63f6b53",
"short_id": "9998d799",
"created_at": "2023-08-04T08:14:14.000+00:00",
"parent_ids": ["5d9d50be1ef949ad28674e238c7e12a17b1e9706", "99482e001731640b4123cf177e51c696f08a3005"],
"title": "Merge branch 'pipeline-4547911429' into 'master'",
"message": "Merge branch 'pipeline-4547911429' into 'master'\n\n[testapp-ci] manifests/demo/test-app.yaml: release v1.2.0\n\nSee merge request test-argocd-proton/argocd!3",
"author_name": "Martin Vozník",
"author_email": "martin@voznik.cz",
"authored_date": "2023-08-04T08:14:14.000+00:00",
"committer_name": "Martin Vozník",
"committer_email": "martin@voznik.cz",
"committed_date": "2023-08-04T08:14:14.000+00:00",
"trailers": {},
"web_url": "https://gitlab.com/test-argocd-proton/argocd/-/commit/9998d7999fc99dd0fd578650b58b244fc63f6b53"
},
"merged": false,
"protected": true,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": true,
"web_url": "https://gitlab.com/test-argocd-proton/argocd/-/tree/master"
}`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/27084533/repository/branches?per_page=100":
_, err := io.WriteString(w, `[{
"name": "master",
@ -991,8 +1259,62 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v4/projects/56522142/repository/branches?per_page=100":
_, err := io.WriteString(w, `[{
"name": "master",
"commit": {
"id": "8898d8889fc99dd0fd578650b58b244fc63f6b58",
"short_id": "8898d801",
"created_at": "2021-06-04T08:24:44.000+00:00",
"parent_ids": null,
"title": "Merge branch 'pipeline-1317911429' into 'master'",
"message": "Merge branch 'pipeline-1317911429' into 'master'",
"author_name": "Martin Vozník",
"author_email": "martin@voznik.cz",
"authored_date": "2021-06-04T08:24:44.000+00:00",
"committer_name": "Martin Vozník",
"committer_email": "martin@voznik.cz",
"committed_date": "2021-06-04T08:24:44.000+00:00",
"trailers": null,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/commit/8898d7999fc99dd0fd578650b58b244fc63f6b53"
},
"merged": false,
"protected": true,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": true,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/tree/master"
}, {
"name": "pipeline-2310077506",
"commit": {
"id": "0f92540e5f396ba960adea4ed0aa905baf3f73d1",
"short_id": "0f92540e",
"created_at": "2021-06-01T18:39:59.000+00:00",
"parent_ids": null,
"title": "[testapp-ci] manifests/demo/test-app.yaml: release v1.0.1",
"message": "[testapp-ci] manifests/demo/test-app.yaml: release v1.0.1",
"author_name": "ci-test-app",
"author_email": "mvoznik+cicd@protonmail.com",
"authored_date": "2021-06-01T18:39:59.000+00:00",
"committer_name": "ci-test-app",
"committer_email": "mvoznik+cicd@protonmail.com",
"committed_date": "2021-06-01T18:39:59.000+00:00",
"trailers": null,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/commit/0f92540e5f396ba960adea4ed0aa905baf3f73d1"
},
"merged": false,
"protected": false,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": false,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/tree/pipeline-1310077506"
}]`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/test-argocd-proton%2Fargocd":
fmt.Println("auct")
_, err := io.WriteString(w, `{
"id": 27084533,
"description": "",
@ -1079,35 +1401,94 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
func TestGitlabListRepos(t *testing.T) {
cases := []struct {
name, proto, url, topic string
hasError, allBranches, includeSubgroups, includeSharedProjects, insecure bool
branches []string
filters []v1alpha1.SCMProviderGeneratorFilter
name, proto, topic string
hasError, allBranches, includeSubgroups, includeSharedProjects, includeArchivedRepos, insecure bool
branches []string
expectedRepos []*Repository
filters []v1alpha1.SCMProviderGeneratorFilter
}{
{
name: "blank protocol",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "blank protocol",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "ssh protocol",
proto: "ssh",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "ssh protocol",
proto: "ssh",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "labelmatch",
proto: "ssh",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "https protocol",
proto: "https",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "https://gitlab.com/test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "labelmatch",
proto: "ssh",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{
{
LabelMatch: new("test-topic"),
},
},
},
{
name: "https protocol",
proto: "https",
url: "https://gitlab.com/test-argocd-proton/argocd.git",
branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "other protocol",
@ -1115,34 +1496,133 @@ func TestGitlabListRepos(t *testing.T) {
hasError: true,
},
{
name: "all branches",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
name: "all branches",
allBranches: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "all subgroups",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSharedProjects: false,
includeSubgroups: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic", "specific-topic"},
},
{
Organization: "",
Repository: "argocd-subgroup",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/subgroup/argocd-subgroup.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b58",
RepositoryId: int64(27084538),
Labels: []string{"test-topic"},
},
},
},
{
name: "all subgroups and shared projects",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSharedProjects: true,
includeSubgroups: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
{
Organization: "",
Repository: "shared-argocd",
Branch: "master",
URL: "git@gitlab.com:test-shared-argocd-proton/shared-argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084534),
Labels: []string{"test-topic"},
},
},
},
{
name: "specific topic",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSubgroups: false,
topic: "specific-topic",
name: "specific topic",
allBranches: true,
branches: []string{"master"},
includeSubgroups: false,
topic: "specific-topic",
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic", "specific-topic"},
},
},
},
{
name: "all branches with archived repos",
allBranches: true,
branches: []string{"master"},
includeSubgroups: false,
includeArchivedRepos: true,
filters: []v1alpha1.SCMProviderGeneratorFilter{},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{},
},
{
Organization: "",
Repository: "another-repo",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/another-repo.git",
SHA: "8898d8889fc99dd0fd578650b58b244fc63f6b58",
RepositoryId: int64(56522142),
Labels: []string{"test-topic"},
},
},
},
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@ -1150,28 +1630,24 @@ func TestGitlabListRepos(t *testing.T) {
}))
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
provider, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, c.allBranches, c.includeSubgroups, c.includeSharedProjects, c.insecure, "", c.topic, nil)
provider, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, c.allBranches, c.includeSubgroups, c.includeSharedProjects, c.includeArchivedRepos, c.insecure, "", c.topic, nil)
rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)
if c.hasError {
require.Error(t, err)
} else {
require.NoError(t, err)
// Just check that this one project shows up. Not a great test but better than nothing?
repos := []*Repository{}
uniqueRepos := map[string]int{}
branches := []string{}
for _, r := range rawRepos {
if r.Repository == "argocd" {
if _, ok := uniqueRepos[r.Repository]; !ok {
repos = append(repos, r)
branches = append(branches, r.Branch)
}
uniqueRepos[r.Repository]++
}
assert.NotEmpty(t, repos)
assert.Equal(t, c.url, repos[0].URL)
for _, b := range c.branches {
assert.Contains(t, branches, b)
}
// In case of listing subgroups, validate the number of returned projects
if c.includeSubgroups || c.includeSharedProjects {
assert.Len(t, uniqueRepos, 2)
@ -1180,6 +1656,8 @@ func TestGitlabListRepos(t *testing.T) {
if c.topic != "" {
assert.Len(t, uniqueRepos, 1)
}
assert.Len(t, repos, len(c.expectedRepos))
assert.ElementsMatch(t, c.expectedRepos, repos)
}
})
}
@ -1189,7 +1667,7 @@ func TestGitlabHasPath(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
gitlabMockHandler(t)(w, r)
}))
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, "", "", nil)
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, false, "", "", nil)
repo := &Repository{
Organization: "test-argocd-proton",
Repository: "argocd",
@ -1245,10 +1723,10 @@ func TestGitlabGetBranches(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
gitlabMockHandler(t)(w, r)
}))
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, "", "", nil)
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, false, "", "", nil)
repo := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "master",
}
t.Run("branch exists", func(t *testing.T) {
@ -1258,7 +1736,7 @@ func TestGitlabGetBranches(t *testing.T) {
})
repo2 := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "foo",
}
t.Run("unknown branch", func(t *testing.T) {
@ -1321,10 +1799,10 @@ func TestGetBranchesTLS(t *testing.T) {
}
}
host, err := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, test.tlsInsecure, "", "", certs)
host, err := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, test.tlsInsecure, "", "", certs)
require.NoError(t, err)
repo := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "master",
}
_, err = host.GetBranches(t.Context(), repo)


@ -24,6 +24,43 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
)
var appEquality = conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)
// BuildIgnoreDiffConfig constructs a DiffConfig from the ApplicationSet's ignoreDifferences rules.
// Returns nil when ignoreDifferences is empty.
func BuildIgnoreDiffConfig(ignoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) (argodiff.DiffConfig, error) {
if len(ignoreDifferences) == 0 {
return nil, nil
}
return argodiff.NewDiffConfigBuilder().
WithDiffSettings(ignoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
}
// CreateOrUpdate overrides "sigs.k8s.io/controller-runtime" function
// in sigs.k8s.io/controller-runtime/pkg/controller/controllerutil/controllerutil.go
// to add equality for argov1alpha1.ApplicationDestination
@ -34,10 +71,15 @@ import (
// cluster. The object's desired state must be reconciled with the existing
// state inside the passed in callback MutateFn.
//
// diffConfig must be built once per reconcile cycle via BuildIgnoreDiffConfig and may be nil
// when there are no ignoreDifferences rules. obj.Spec must already be normalized by the caller
// via NormalizeApplicationSpec before this function is called; the live object fetched from the
// cluster is normalized internally.
//
// The MutateFn is called regardless of creating or updating an object.
//
// It returns the executed operation and an error.
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ignoreAppDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, diffConfig argodiff.DiffConfig, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
key := client.ObjectKeyFromObject(obj)
if err := c.Get(ctx, key, obj); err != nil {
if !errors.IsNotFound(err) {
@ -59,43 +101,18 @@ func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ign
return controllerutil.OperationResultNone, err
}
// Normalize the live spec to avoid spurious diffs from unimportant differences (e.g. nil vs
// empty SyncPolicy). obj.Spec is already normalized by the caller; only the live side needs it.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
// Apply ignoreApplicationDifferences rules to remove ignored fields from both the live and the desired state. This
// prevents those differences from appearing in the diff and therefore in the patch.
err := applyIgnoreDifferences(ignoreAppDifferences, normalizedLive, obj, ignoreNormalizerOpts)
err := applyIgnoreDifferences(diffConfig, normalizedLive, obj)
if err != nil {
return controllerutil.OperationResultNone, fmt.Errorf("failed to apply ignore differences: %w", err)
}
// Normalize to avoid diffing on unimportant differences.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
obj.Spec = *argo.NormalizeApplicationSpec(&obj.Spec)
equality := conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)
if equality.DeepEqual(normalizedLive, obj) {
if appEquality.DeepEqual(normalizedLive, obj) {
return controllerutil.OperationResultNone, nil
}
@ -135,19 +152,13 @@ func mutate(f controllerutil.MutateFn, key client.ObjectKey, obj client.Object)
}
// applyIgnoreDifferences applies the ignore differences rules to the found application. It modifies the applications in place.
func applyIgnoreDifferences(applicationSetIgnoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) error {
if len(applicationSetIgnoreDifferences) == 0 {
// diffConfig may be nil, in which case this is a no-op.
func applyIgnoreDifferences(diffConfig argodiff.DiffConfig, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application) error {
if diffConfig == nil {
return nil
}
generatedAppCopy := generatedApp.DeepCopy()
diffConfig, err := argodiff.NewDiffConfigBuilder().
WithDiffSettings(applicationSetIgnoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
if err != nil {
return fmt.Errorf("failed to build diff config: %w", err)
}
unstructuredFound, err := appToUnstructured(found)
if err != nil {
return fmt.Errorf("failed to convert found application to unstructured: %w", err)


@ -5,7 +5,7 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@ -224,7 +224,9 @@ spec:
generatedApp := v1alpha1.Application{TypeMeta: appMeta}
err = yaml.Unmarshal([]byte(tc.generatedApp), &generatedApp)
require.NoError(t, err, tc.generatedApp)
err = applyIgnoreDifferences(tc.ignoreDifferences, &foundApp, &generatedApp, normalizers.IgnoreNormalizerOpts{})
diffConfig, err := BuildIgnoreDiffConfig(tc.ignoreDifferences, normalizers.IgnoreNormalizerOpts{})
require.NoError(t, err)
err = applyIgnoreDifferences(diffConfig, &foundApp, &generatedApp)
require.NoError(t, err)
yamlFound, err := yaml.Marshal(tc.foundApp)
require.NoError(t, err)

assets/swagger.json (generated)

@ -4039,6 +4039,30 @@
"description": "Whether https should be disabled for an OCI repo.",
"name": "insecureOciForceHttp",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client ID.",
"name": "azureServicePrincipalClientId",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client Secret.",
"name": "azureServicePrincipalClientSecret",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Tenant ID.",
"name": "azureServicePrincipalTenantId",
"in": "query"
},
{
"type": "string",
"description": "Azure Active Directory Endpoint.",
"name": "azureActiveDirectoryEndpoint",
"in": "query"
}
],
"responses": {
@ -4946,6 +4970,30 @@
"description": "Whether https should be disabled for an OCI repo.",
"name": "insecureOciForceHttp",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client ID.",
"name": "azureServicePrincipalClientId",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client Secret.",
"name": "azureServicePrincipalClientSecret",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Tenant ID.",
"name": "azureServicePrincipalTenantId",
"in": "query"
},
{
"type": "string",
"description": "Azure Active Directory Endpoint.",
"name": "azureActiveDirectoryEndpoint",
"in": "query"
}
],
"responses": {
@ -9519,6 +9567,22 @@
"type": "object",
"title": "RepoCreds holds the definition for repository credentials",
"properties": {
"azureActiveDirectoryEndpoint": {
"type": "string",
"title": "AzureActiveDirectoryEndpoint specifies the Azure Active Directory endpoint used for Service Principal authentication. If empty will default to https://login.microsoftonline.com"
},
"azureServicePrincipalClientId": {
"type": "string",
"title": "AzureServicePrincipalClientId specifies the client ID of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalClientSecret": {
"type": "string",
"title": "AzureServicePrincipalClientSecret specifies the client secret of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalTenantId": {
"type": "string",
"title": "AzureServicePrincipalTenantId specifies the tenant ID of the Azure Service Principal used to access the repo"
},
"bearerToken": {
"type": "string",
"title": "BearerToken contains the bearer token used for Git BitBucket Data Center auth at the repo server"
@ -9618,6 +9682,22 @@
"type": "object",
"title": "Repository is a repository holding application configurations",
"properties": {
"azureActiveDirectoryEndpoint": {
"type": "string",
"title": "AzureActiveDirectoryEndpoint specifies the Azure Active Directory endpoint used for Service Principal authentication. If empty will default to https://login.microsoftonline.com"
},
"azureServicePrincipalClientId": {
"type": "string",
"title": "AzureServicePrincipalClientId specifies the client ID of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalClientSecret": {
"type": "string",
"title": "AzureServicePrincipalClientSecret specifies the client secret of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalTenantId": {
"type": "string",
"title": "AzureServicePrincipalTenantId specifies the tenant ID of the Azure Service Principal used to access the repo"
},
"bearerToken": {
"type": "string",
"title": "BearerToken contains the bearer token used for Git BitBucket Data Center auth at the repo server"
@ -9727,6 +9807,10 @@
"username": {
"type": "string",
"title": "Username contains the user name used for authenticating at the remote repository"
},
"webhookManifestCacheWarmDisabled": {
"description": "WebhookManifestCacheWarmDisabled disables manifest cache warming during webhook processing for this repository.\nWhen set, webhook handlers will only trigger reconciliation for affected applications and skip Redis cache\noperations for unaffected ones. Recommended for large monorepos with plain YAML manifests.",
"type": "boolean"
}
}
},
@ -10414,6 +10498,10 @@
"description": "The Gitea URL to talk to. For example https://gitea.mydomain.com/.",
"type": "string"
},
"excludeArchivedRepos": {
"description": "Exclude repositories that are archived.",
"type": "boolean"
},
"insecure": {
"type": "boolean",
"title": "Allow self-signed TLS / Certificates; default: false"
@ -10443,6 +10531,10 @@
"description": "AppSecretName is a reference to a GitHub App repo-creds secret.",
"type": "string"
},
"excludeArchivedRepos": {
"description": "Exclude repositories that are archived.",
"type": "boolean"
},
"organization": {
"description": "GitHub org to scan. Required.",
"type": "string"
@ -10471,6 +10563,10 @@
"description": "Gitlab group to scan. Required. You can use either the project id (recommended) or the full namespaced path.",
"type": "string"
},
"includeArchivedRepos": {
"description": "Include repositories that are archived.",
"type": "boolean"
},
"includeSharedProjects": {
"type": "boolean",
"title": "When recursing through subgroups, also include shared Projects (true) or scan only the subgroups under same path (false). Defaults to \"true\""
@ -10839,6 +10935,10 @@
"type": "string",
"title": "Schedule is the time the window will begin, specified in cron format"
},
"syncOverrun": {
"type": "boolean",
"title": "SyncOverrun allows ongoing syncs to continue in two scenarios:\nFor deny windows: allows syncs that started before the deny window became active to continue running\nFor allow windows: allows syncs that started during the allow window to continue after the window ends"
},
"timeZone": {
"type": "string",
"title": "TimeZone of the sync that will be applied to the schedule"


@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
tokenRefStrictMode bool
maxResourcesStatusCount int
cacheSyncPeriod time.Duration
concurrentApplicationUpdates int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)
@ -239,24 +240,25 @@ func NewCommand() *cobra.Command {
})
if err = (&controllers.ApplicationSetReconciler{
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
ConcurrentApplicationUpdates: concurrentApplicationUpdates,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)
@ -303,6 +305,7 @@ func NewCommand() *cobra.Command {
command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 5000, 0, math.MaxInt), "Max number of resources stored in appset status.")
command.Flags().DurationVar(&cacheSyncPeriod, "cache-sync-period", env.ParseDurationFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CACHE_SYNC_PERIOD", time.Hour*10, 0, time.Hour*24), "Period at which the manager client cache is forcefully resynced with the Kubernetes API server. 0 disables periodic resync.")
command.Flags().IntVar(&concurrentApplicationUpdates, "concurrent-application-updates", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_APPLICATION_UPDATES", 1, 1, 200), "Number of concurrent Application create/update/delete operations per ApplicationSet reconcile.")
return &command
}


@ -0,0 +1,28 @@
package command
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestNewCommand_ConcurrentApplicationUpdatesFlag(t *testing.T) {
cmd := NewCommand()
flag := cmd.Flags().Lookup("concurrent-application-updates")
require.NotNil(t, flag, "expected --concurrent-application-updates flag to be registered")
assert.Equal(t, "int", flag.Value.Type())
assert.Equal(t, "1", flag.DefValue, "default should be 1")
}
func TestNewCommand_ConcurrentApplicationUpdatesFlagValue(t *testing.T) {
cmd := NewCommand()
err := cmd.Flags().Set("concurrent-application-updates", "5")
require.NoError(t, err)
val, err := cmd.Flags().GetInt("concurrent-application-updates")
require.NoError(t, err)
assert.Equal(t, 5, val)
}


@ -34,6 +34,7 @@ import (
"github.com/argoproj/argo-cd/v3/util/dex"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/errors"
utilglob "github.com/argoproj/argo-cd/v3/util/glob"
"github.com/argoproj/argo-cd/v3/util/kube"
"github.com/argoproj/argo-cd/v3/util/templates"
"github.com/argoproj/argo-cd/v3/util/tls"
@ -87,6 +88,7 @@ func NewCommand() *cobra.Command {
applicationNamespaces []string
enableProxyExtension bool
webhookParallelism int
globCacheSize int
hydratorEnabled bool
syncWithReplaceAllowed bool
@ -122,6 +124,7 @@ func NewCommand() *cobra.Command {
cli.SetLogFormat(cmdutil.LogFormat)
cli.SetLogLevel(cmdutil.LogLevel)
cli.SetGLogLevel(glogLevel)
utilglob.SetCacheSize(globCacheSize)
// Recover from panic and log the error using the configured logger instead of the default.
defer func() {
@ -326,6 +329,7 @@ func NewCommand() *cobra.Command {
command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
command.Flags().IntVar(&globCacheSize, "glob-cache-size", env.ParseNumFromEnv("ARGOCD_SERVER_GLOB_CACHE_SIZE", utilglob.DefaultGlobCacheSize, 1, math.MaxInt32), "Maximum number of compiled glob patterns to cache for RBAC evaluation")
command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
command.Flags().BoolVar(&syncWithReplaceAllowed, "sync-with-replace-allowed", env.ParseBoolFromEnv("ARGOCD_SYNC_WITH_REPLACE_ALLOWED", true), "Whether to allow users to select replace for syncs from UI/CLI")


@ -127,7 +127,7 @@ has appropriate RBAC permissions to change other accounts.
_, err := usrIf.UpdatePassword(ctx, &updatePasswordRequest)
errors.CheckError(err)
fmt.Printf("Password updated\n")
fmt.Print("Password updated\n")
if account == "" || account == userInfo.Username {
// Get a new JWT token after updating the password
@ -254,7 +254,7 @@ func printAccountNames(accounts []*accountpkg.Account) {
func printAccountsTable(items []*accountpkg.Account) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tENABLED\tCAPABILITIES\n")
fmt.Fprint(w, "NAME\tENABLED\tCAPABILITIES\n")
for _, a := range items {
fmt.Fprintf(w, "%s\t%v\t%s\n", a.Name, a.Enabled, strings.Join(a.Capabilities, ", "))
}
@ -356,7 +356,7 @@ func printAccountDetails(acc *accountpkg.Account) {
fmt.Println("NONE")
} else {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED AT\tEXPIRING AT\n")
fmt.Fprint(w, "ID\tISSUED AT\tEXPIRING AT\n")
for _, t := range acc.Tokens {
expiresAtFormatted := "never"
if t.ExpiresAt > 0 {


@ -240,7 +240,7 @@ func printStatsSummary(clusters []ClusterWithInfo) {
avgResourcesByShard := totalResourcesCount / int64(len(resourcesCountByShard))
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "SHARD\tRESOURCES COUNT\n")
_, _ = fmt.Fprint(w, "SHARD\tRESOURCES COUNT\n")
for shard := 0; shard < len(resourcesCountByShard); shard++ {
cnt := resourcesCountByShard[shard]
percent := (float64(cnt) / float64(avgResourcesByShard)) * 100.0
@ -318,7 +318,7 @@ func NewClusterNamespacesCommand() *cobra.Command {
err := runClusterNamespacesCommand(ctx, clientConfig, func(_ *versioned.Clientset, _ db.ArgoDB, clusters map[string][]string) error {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "CLUSTER\tNAMESPACES\n")
_, _ = fmt.Fprint(w, "CLUSTER\tNAMESPACES\n")
for cluster, namespaces := range clusters {
// print shortest namespace names first
@ -495,7 +495,7 @@ argocd admin cluster stats target-cluster`,
errors.CheckError(err)
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "SERVER\tSHARD\tCONNECTION\tNAMESPACES COUNT\tAPPS COUNT\tRESOURCES COUNT\n")
_, _ = fmt.Fprint(w, "SERVER\tSHARD\tCONNECTION\tNAMESPACES COUNT\tAPPS COUNT\tRESOURCES COUNT\n")
for _, cluster := range clusters {
_, _ = fmt.Fprintf(w, "%s\t%d\t%s\t%d\t%d\t%d\n", cluster.Server, cluster.Shard, cluster.Info.ConnectionState.Status, len(cluster.Namespaces), cluster.Info.ApplicationsCount, cluster.Info.CacheInfo.ResourcesCount)
}


@ -149,6 +149,7 @@ func NewGenRepoSpecCommand() *cobra.Command {
repoOpts.Repo.EnableOCI = repoOpts.EnableOci
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.InsecureOCIForceHttp = repoOpts.InsecureOCIForceHTTP
repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.CheckError(stderrors.New("must specify --name for repos of type 'helm'"))


@ -313,7 +313,7 @@ argocd admin settings validate --group accounts --group plugins --load-cluster-s
_, _ = fmt.Fprintf(os.Stdout, "%s\n", logs)
}
if i != len(groups)-1 {
_, _ = fmt.Fprintf(os.Stdout, "\n")
_, _ = fmt.Fprint(os.Stdout, "\n")
}
}
},
@ -429,7 +429,7 @@ argocd admin settings resource-overrides ignore-differences ./deploy.yaml --argo
return
}
_, _ = fmt.Printf("Following fields are ignored:\n\n")
_, _ = fmt.Print("Following fields are ignored:\n\n")
_ = cli.PrintDiff(res.GetName(), &res, normalizedRes)
})
},
@ -476,7 +476,7 @@ argocd admin settings resource-overrides ignore-resource-updates ./deploy.yaml -
return
}
_, _ = fmt.Printf("Following fields are ignored:\n\n")
_, _ = fmt.Print("Following fields are ignored:\n\n")
_ = cli.PrintDiff(res.GetName(), &res, normalizedRes)
})
},
@ -551,7 +551,7 @@ argocd admin settings resource-overrides action list /tmp/deploy.yaml --argocd-c
})
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "NAME\tDISABLED\n")
_, _ = fmt.Fprint(w, "NAME\tDISABLED\n")
for _, action := range availableActions {
_, _ = fmt.Fprintf(w, "%s\t%s\n", action.Name, strconv.FormatBool(action.Disabled))
}
@ -622,7 +622,7 @@ argocd admin settings resource-overrides action /tmp/deploy.yaml restart --argoc
return
}
_, _ = fmt.Printf("Following fields have been changed:\n\n")
_, _ = fmt.Print("Following fields have been changed:\n\n")
_ = cli.PrintDiff(res.GetName(), &res, result)
case lua.CreateOperation:
yamlBytes, err := yaml.Marshal(impactedResource.UnstructuredObj)


@ -182,7 +182,7 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
// Exactly one of --namespace or --policy-file must be given.
if (!nsOverride && policyFile == "") || (nsOverride && policyFile != "") {
c.HelpFunc()(c, args)
log.Fatalf("please provide exactly one of --policy-file or --namespace")
log.Fatal("please provide exactly one of --policy-file or --namespace")
}
restConfig, err := clientConfig.ClientConfig()
@ -264,12 +264,12 @@ argocd admin settings rbac validate --namespace argocd
if len(args) > 0 {
c.HelpFunc()(c, args)
log.Fatalf("too many arguments")
log.Fatal("too many arguments")
}
if (namespace == "" && policyFile == "") || (namespace != "" && policyFile != "") {
c.HelpFunc()(c, args)
log.Fatalf("please provide exactly one of --policy-file or --namespace")
log.Fatal("please provide exactly one of --policy-file or --namespace")
}
restConfig, err := clientConfig.ClientConfig()
@ -284,13 +284,13 @@ argocd admin settings rbac validate --namespace argocd
userPolicy, _, _ := getPolicy(ctx, policyFile, realClientset, namespace)
if userPolicy != "" {
if err := rbac.ValidatePolicy(userPolicy); err == nil {
fmt.Printf("Policy is valid.\n")
fmt.Print("Policy is valid.\n")
os.Exit(0)
}
fmt.Printf("Policy is invalid: %v\n", err)
os.Exit(1)
}
log.Fatalf("Policy is empty or could not be loaded.")
log.Fatal("Policy is empty or could not be loaded.")
},
}
clientConfig = cli.AddKubectlFlagsToCmd(command)


@ -693,7 +693,7 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
}
if deny || !deny && !allow && inactiveAllows {
s, err := windows.CanSync(true)
s, err := windows.CanSync(true, nil)
if err == nil && s {
status = "Manual Allowed"
} else {
@ -757,7 +757,7 @@ func printAppSourceDetails(appSrc *argoappv1.ApplicationSource) {
}
func printAppConditions(w io.Writer, app *argoappv1.Application) {
_, _ = fmt.Fprintf(w, "CONDITION\tMESSAGE\tLAST TRANSITION\n")
_, _ = fmt.Fprint(w, "CONDITION\tMESSAGE\tLAST TRANSITION\n")
for _, item := range app.Status.Conditions {
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\n", item.Type, item.Message, item.LastTransitionTime)
}
@ -829,7 +829,7 @@ func printHelmParams(helm *argoappv1.ApplicationSourceHelm) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
if helm != nil {
fmt.Println()
_, _ = fmt.Fprintf(w, "NAME\tVALUE\n")
_, _ = fmt.Fprint(w, "NAME\tVALUE\n")
for _, p := range helm.Parameters {
_, _ = fmt.Fprintf(w, "%s\t%s\n", p.Name, truncateString(p.Value, paramLenLimit))
}
@ -1365,7 +1365,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
serverSideDiff = hasServerSideDiffAnnotation
} else if serverSideDiff && !hasServerSideDiffAnnotation {
// Flag explicitly set to true, but app annotation is not set
fmt.Fprintf(os.Stderr, "Warning: Application does not have ServerSideDiff=true annotation.\n")
fmt.Fprint(os.Stderr, "Warning: Application does not have ServerSideDiff=true annotation.\n")
}
// Server side diff with local requires server side generate to be set as there will be a mismatch with client-generated manifests.
@ -1418,7 +1418,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
diffOption.serversideRes = res
} else {
fmt.Fprintf(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.")
fmt.Fprint(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.")
conn, clusterIf := clientset.NewClusterClientOrDie()
defer utilio.Close(conn)
cluster, err := clusterIf.Get(ctx, &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
@@ -2104,7 +2104,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
// printAppResources prints the resources of an application in a tabwriter table
func printAppResources(w io.Writer, app *argoappv1.Application) {
_, _ = fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
_, _ = fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
for _, res := range getResourceStates(app, nil) {
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n", res.Group, res.Kind, res.Namespace, res.Name, res.Status, res.Health, res.Hook, res.Message)
}
@@ -2112,7 +2112,7 @@ func printAppResources(w io.Writer, app *argoappv1.Application) {
func printTreeView(nodeMapping map[string]argoappv1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, mapNodeNameToResourceState map[string]*resourceState) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "KIND/NAME\tSTATUS\tHEALTH\tMESSAGE\n")
_, _ = fmt.Fprint(w, "KIND/NAME\tSTATUS\tHEALTH\tMESSAGE\n")
for uid := range parentNodes {
treeViewAppGet("", nodeMapping, parentChildMapping, nodeMapping[uid], mapNodeNameToResourceState, w)
}
@@ -2121,7 +2121,7 @@ func printTreeView(nodeMapping map[string]argoappv1.ResourceNode, parentChildMap
func printTreeViewDetailed(nodeMapping map[string]argoappv1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, mapNodeNameToResourceState map[string]*resourceState) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "KIND/NAME\tSTATUS\tHEALTH\tAGE\tMESSAGE\tREASON\n")
fmt.Fprint(w, "KIND/NAME\tSTATUS\tHEALTH\tAGE\tMESSAGE\tREASON\n")
for uid := range parentNodes {
detailedTreeViewAppGet("", nodeMapping, parentChildMapping, nodeMapping[uid], mapNodeNameToResourceState, w)
}
@@ -2334,7 +2334,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
if app.Spec.HasMultipleSources() {
if revision != "" {
log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-position instead.")
log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-positions instead.")
return
}
@@ -2453,7 +2453,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
foundDiffs = findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, appName, appNs, serverSideDiffConcurrency, serverSideDiffMaxBatchKB)
if !foundDiffs {
fmt.Printf("====== No Differences found ======\n")
fmt.Print("====== No Differences found ======\n")
// if no differences found, then no need to sync
return
}
@@ -2973,7 +2973,7 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string, sour
source.Helm.AddParameter(*newParam)
}
default:
log.Fatalf("Parameters can only be set against Helm applications")
log.Fatal("Parameters can only be set against Helm applications")
}
}
@@ -3028,13 +3028,13 @@ func printApplicationHistoryTable(revHistory []argoappv1.RevisionHistory) {
}
for i, key := range varHistoryKeys {
_, _ = fmt.Fprintf(w, "SOURCE\t%s\n", key)
_, _ = fmt.Fprintf(w, "ID\tDATE\tREVISION\n")
_, _ = fmt.Fprint(w, "ID\tDATE\tREVISION\n")
for _, history := range varHistory[key] {
_, _ = fmt.Fprintf(w, "%d\t%s\t%s\n", history.id, history.date, history.revision)
}
// Add a newline if it's not the last iteration
if i < len(varHistoryKeys)-1 {
_, _ = fmt.Fprintf(w, "\n")
_, _ = fmt.Fprint(w, "\n")
}
}
_ = w.Flush()


@@ -124,7 +124,7 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
fmt.Println(string(jsonBytes))
case "":
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "GROUP\tKIND\tNAME\tACTION\tDISABLED\n")
fmt.Fprint(w, "GROUP\tKIND\tNAME\tACTION\tDISABLED\n")
for _, action := range availableActions {
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", action.Group, action.Kind, action.Name, action.Action, strconv.FormatBool(action.Disabled))
}


@@ -8,7 +8,7 @@ import (
"strings"
"text/tabwriter"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
"github.com/argoproj/argo-cd/v3/util/templates"
@@ -217,9 +217,9 @@ func reconstructObject(extracted []any, fields []string, depth int) map[string]a
func printManifests(objs *[]unstructured.Unstructured, filteredFields bool, showName bool, output string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
if showName {
fmt.Fprintf(w, "FIELD\tRESOURCE NAME\tVALUE\n")
fmt.Fprint(w, "FIELD\tRESOURCE NAME\tVALUE\n")
} else {
fmt.Fprintf(w, "FIELD\tVALUE\n")
fmt.Fprint(w, "FIELD\tVALUE\n")
}
for i, o := range *objs {
@@ -479,7 +479,7 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
switch output {
case "tree=detailed":
fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\tAGE\tHEALTH\tREASON\n")
fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\tAGE\tHEALTH\tREASON\n")
if !orphaned || listAll {
mapUIDToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)
@@ -491,7 +491,7 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
printDetailedTreeViewAppResourcesOrphaned(mapUIDToNode, mapParentToChild, parentNode, w)
}
case "tree":
fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\n")
fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\n")
if !orphaned || listAll {
mapUIDToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)


@@ -40,6 +40,10 @@ var appSetExample = templates.Examples(`
# Delete an ApplicationSet
argocd appset delete APPSETNAME (APPSETNAME...)
# Namespace precedence for --appset-namespace (-N):
# - get/delete: if the argument is namespace/name, that namespace wins; -N is ignored.
# - create/generate: metadata.namespace in the YAML wins when set; -N applies only when the manifest omits namespace.
`)
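The precedence rules documented above can be sketched with a hypothetical helper that mirrors (but is not) `argo.ParseFromQualifiedName`: a namespace embedded in a `namespace/name` argument wins, and the `--appset-namespace` value is only a fallback.

```go
package main

import (
	"fmt"
	"strings"
)

// parseQualified is an illustrative stand-in for the qualified-name
// parsing described above: if the argument is namespace/name, that
// namespace wins; otherwise the -N default applies.
func parseQualified(arg, defaultNS string) (name, ns string) {
	if i := strings.IndexByte(arg, '/'); i >= 0 {
		return arg[i+1:], arg[:i] // embedded namespace takes precedence
	}
	return arg, defaultNS // fall back to --appset-namespace
}

func main() {
	n, ns := parseQualified("team-a/my-appset", "team-b")
	fmt.Println(n, ns) // the embedded namespace team-a wins over -N team-b
	n, ns = parseQualified("my-appset", "team-b")
	fmt.Println(n, ns) // no embedded namespace, so -N team-b applies
}
```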
// NewAppSetCommand returns a new instance of an `argocd appset` command
@@ -64,8 +68,9 @@ func NewAppSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// NewApplicationSetGetCommand returns a new instance of an `argocd appset get` command
func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
output string
showParams bool
output string
showParams bool
appSetNamespace string
)
command := &cobra.Command{
Use: "get APPSETNAME",
@@ -73,6 +78,13 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
Example: templates.Examples(`
# Get ApplicationSets
argocd appset get APPSETNAME
# Get ApplicationSet in a specific namespace using qualified name (namespace/name)
argocd appset get APPSET_NAMESPACE/APPSETNAME
# Get ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset get --appset-namespace=APPSET_NAMESPACE APPSETNAME
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -85,7 +97,7 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
conn, appIf := acdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], "")
appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], appSetNamespace)
appSet, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appSetName, AppsetNamespace: appSetNs})
errors.CheckError(err)
@@ -113,6 +125,7 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().BoolVar(&showParams, "show-params", false, "Show ApplicationSet parameters and overrides")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Only get ApplicationSet from a namespace (ignored when qualified name is provided)")
return command
}
@@ -121,6 +134,7 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
var (
output string
upsert, dryRun, wait bool
appSetNamespace string
)
command := &cobra.Command{
Use: "create",
@@ -129,6 +143,9 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
# Create ApplicationSets
argocd appset create <filename or URL> (<filename or URL>...)
# Create ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset create --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)
# Dry-run AppSet creation to see what applications would be managed
argocd appset create --dry-run <filename or URL> -o json | jq -r '.status.resources[].name'
`),
@@ -145,7 +162,7 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
errors.CheckError(err)
if len(appsets) == 0 {
fmt.Printf("No ApplicationSets found while parsing the input file")
fmt.Print("No ApplicationSets found while parsing the input file")
os.Exit(1)
}
@@ -157,6 +174,11 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
conn, appIf := argocdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
if appset.Namespace == "" && appSetNamespace != "" {
fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
appset.Namespace = appSetNamespace
}
// Get app before creating to see if it is being updated or no change
existing, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appset.Name, AppsetNamespace: appset.Namespace})
if grpc.UnwrapGRPCStatus(err).Code() != codes.NotFound {
@@ -218,18 +240,23 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
command.Flags().BoolVar(&dryRun, "dry-run", false, "Allows to evaluate the ApplicationSet template on the server to get a preview of the applications that would be created")
command.Flags().BoolVar(&wait, "wait", false, "Wait until the ApplicationSet's resources are up to date. Will block indefinitely if the ApplicationSet has errors")
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be created in (ignored when provided YAML file has namespace set in metadata)")
return command
}
// NewApplicationSetGenerateCommand returns a new instance of an `argocd appset generate` command
func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var output string
var appSetNamespace string
command := &cobra.Command{
Use: "generate",
Short: "Generate apps of ApplicationSet rendered templates",
Example: templates.Examples(`
# Generate apps of ApplicationSet rendered templates
argocd appset generate <filename or URL> (<filename or URL>...)
# Generate apps of ApplicationSet rendered templates in a specific namespace
argocd appset generate --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -244,7 +271,7 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
errors.CheckError(err)
if len(appsets) != 1 {
fmt.Printf("Input file must contain one ApplicationSet")
fmt.Print("Input file must contain one ApplicationSet")
os.Exit(1)
}
appset := appsets[0]
@@ -252,6 +279,11 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
errors.Fatal(errors.ErrorGeneric, fmt.Sprintf("Error generating apps for ApplicationSet %s. ApplicationSet does not have Name field set", appset))
}
if appset.Namespace == "" && appSetNamespace != "" {
fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
appset.Namespace = appSetNamespace
}
conn, appIf := argocdClient.NewApplicationSetClientOrDie()
defer utilio.Close(conn)
@@ -286,6 +318,7 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
},
}
command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace used for generating Applications (ignored when provided YAML file has namespace set in metadata)")
return command
}
@@ -338,8 +371,9 @@ func NewApplicationSetListCommand(clientOpts *argocdclient.ClientOptions) *cobra
// NewApplicationSetDeleteCommand returns a new instance of an `argocd appset delete` command
func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
var (
noPrompt bool
wait bool
noPrompt bool
wait bool
appSetNamespace string
)
command := &cobra.Command{
Use: "delete",
@@ -347,6 +381,12 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
Example: templates.Examples(`
# Delete an applicationset
argocd appset delete APPSETNAME (APPSETNAME...)
# Delete ApplicationSet in a specific namespace using qualified name (namespace/name)
argocd appset delete APPSET_NAMESPACE/APPSETNAME
# Delete ApplicationSet in a specific namespace using --appset-namespace flag
argocd appset delete --appset-namespace=APPSET_NAMESPACE APPSETNAME
`),
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -375,7 +415,7 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
promptUtil := utils.NewPrompt(isTerminal && !noPrompt)
for _, appSetQualifiedName := range args {
appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, "")
appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, appSetNamespace)
appsetDeleteReq := applicationset.ApplicationSetDeleteRequest{
Name: appSetName,
@@ -412,6 +452,7 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
}
command.Flags().BoolVarP(&noPrompt, "yes", "y", false, "Turn off prompting to confirm cascaded deletion of Application resources")
command.Flags().BoolVar(&wait, "wait", false, "Wait until deletion of the applicationset(s) completes")
command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be deleted from (ignored when qualified name is provided)")
return command
}
@@ -503,7 +544,7 @@ func printAppSetSummaryTable(appSet *arogappsetv1.ApplicationSet) {
}
func printAppSetConditions(w io.Writer, appSet *arogappsetv1.ApplicationSet) {
_, _ = fmt.Fprintf(w, "CONDITION\tSTATUS\tMESSAGE\tLAST TRANSITION\n")
_, _ = fmt.Fprint(w, "CONDITION\tSTATUS\tMESSAGE\tLAST TRANSITION\n")
for _, item := range appSet.Status.Conditions {
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", item.Type, item.Status, item.Message, item.LastTransitionTime)
}


@@ -352,7 +352,7 @@ func NewCertListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// Print table of certificate info
func printCertTable(certs []appsv1.RepositoryCertificate, sortOrder string) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
fmt.Fprint(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
switch sortOrder {
case "hostname", "":


@@ -377,15 +377,15 @@ func formatNamespaces(cluster argoappv1.Cluster) string {
func printClusterDetails(clusters []argoappv1.Cluster) {
for _, cluster := range clusters {
fmt.Printf("Cluster information\n\n")
fmt.Print("Cluster information\n\n")
fmt.Printf(" Server URL: %s\n", cluster.Server)
fmt.Printf(" Server Name: %s\n", strWithDefault(cluster.Name, "-"))
fmt.Printf(" Server Version: %s\n", cluster.Info.ServerVersion)
fmt.Printf(" Namespaces: %s\n", formatNamespaces(cluster))
fmt.Printf("\nTLS configuration\n\n")
fmt.Print("\nTLS configuration\n\n")
fmt.Printf(" Client cert: %v\n", len(cluster.Config.CertData) != 0)
fmt.Printf(" Cert validation: %v\n", !cluster.Config.Insecure)
fmt.Printf("\nAuthentication\n\n")
fmt.Print("\nAuthentication\n\n")
fmt.Printf(" Basic authentication: %v\n", cluster.Config.Username != "")
fmt.Printf(" oAuth authentication: %v\n", cluster.Config.BearerToken != "")
fmt.Printf(" AWS authentication: %v\n", cluster.Config.AWSAuthConfig != nil)
@@ -468,7 +468,7 @@ argocd cluster rm cluster-name`,
// Print table of cluster information
func printClusterTable(clusters []argoappv1.Cluster) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
_, _ = fmt.Fprint(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
for _, c := range clusters {
server := c.Server
if len(c.Namespaces) > 0 {


@@ -151,7 +151,7 @@ func NewGPGAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
if len(resp.Skipped) > 0 {
fmt.Printf(", and %d key(s) were skipped because they exist already", len(resp.Skipped))
}
fmt.Printf(".\n")
fmt.Print(".\n")
},
}
command.Flags().StringVarP(&fromFile, "from", "f", "", "Path to the file that contains the GPG public key to import")
@@ -192,7 +192,7 @@ func NewGPGDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// Print table of certificate info
func printKeyTable(keys []appsv1.GnuPGPublicKey) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "KEYID\tTYPE\tIDENTITY\n")
fmt.Fprint(w, "KEYID\tTYPE\tIDENTITY\n")
for _, k := range keys {
fmt.Fprintf(w, "%s\t%s\t%s\n", k.KeyID, strings.ToUpper(k.SubType), k.Owner)


@@ -274,7 +274,7 @@ func oauth2Login(
// flow where the id_token is contained in a URL fragment, making it inaccessible to be
// read from the request. This javascript will redirect the browser to send the
// fragments as query parameters so our callback handler can read and return token.
fmt.Fprintf(w, `<script>window.location.search = window.location.hash.substring(1)</script>`)
fmt.Fprint(w, `<script>window.location.search = window.location.hash.substring(1)</script>`)
return
}
@@ -351,7 +351,7 @@ func oauth2Login(
if errMsg != "" {
log.Fatal(errMsg)
}
fmt.Printf("Authentication successful\n")
fmt.Print("Authentication successful\n")
ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
defer cancel()
_ = srv.Shutdown(ctx)
@@ -375,7 +375,7 @@ func passwordLogin(ctx context.Context, acdClient argocdclient.Client, username,
func ssoAuthFlow(url string, ssoLaunchBrowser bool) {
if ssoLaunchBrowser {
fmt.Printf("Opening system default browser for authentication\n")
fmt.Print("Opening system default browser for authentication\n")
err := open.Start(url)
errors.CheckError(err)
} else {


@@ -44,7 +44,7 @@ argocd logout cd.argoproj.io
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errutil.CheckError(err)
if localCfg == nil {
log.Fatalf("Nothing to logout from")
log.Fatal("Nothing to logout from")
}
promptUtil := utils.NewPrompt(clientOpts.PromptsEnabled)


@@ -493,7 +493,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
for _, item := range proj.Spec.SourceRepos {
if item == "*" {
fmt.Printf("Source repository '*' already allowed in project\n")
fmt.Print("Source repository '*' already allowed in project\n")
return
}
if git.SameURL(item, url) {
@@ -535,7 +535,7 @@ func NewProjectAddSourceNamespace(clientOpts *argocdclient.ClientOptions) *cobra
for _, item := range proj.Spec.SourceNamespaces {
if item == "*" || item == srcNamespace {
fmt.Printf("Source namespace '*' already allowed in project\n")
fmt.Print("Source namespace '*' already allowed in project\n")
return
}
}
@@ -868,7 +868,7 @@ func printProjectNames(projects []v1alpha1.AppProject) {
// Print table of project info
func printProjectTable(projects []v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
fmt.Fprint(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
for _, p := range projects {
printProjectLine(w, &p)
}


@@ -421,7 +421,7 @@ fa9d3517-c52d-434c-9bff-215b38508842 2023-10-08T11:08:18+01:00 Never
}
writer := tabwriter.NewWriter(os.Stdout, 0, 0, 4, ' ', 0)
_, err = fmt.Fprintf(writer, "ID\tISSUED AT\tEXPIRES AT\n")
_, err = fmt.Fprint(writer, "ID\tISSUED AT\tEXPIRES AT\n")
errors.CheckError(err)
tokenRowFormat := "%s\t%v\t%v\n"
@@ -515,7 +515,7 @@ func printProjectRoleListName(roles []v1alpha1.ProjectRole) {
// Print table of project roles
func printProjectRoleListTable(roles []v1alpha1.ProjectRole) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
fmt.Fprint(w, "ROLE-NAME\tDESCRIPTION\n")
for _, role := range roles {
fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
}
@@ -603,9 +603,9 @@ ID ISSUED-AT EXPIRES-AT
printRoleFmtStr := "%-15s%s\n"
fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
fmt.Printf(printRoleFmtStr, "Description:", role.Description)
fmt.Printf("Policies:\n")
fmt.Print("Policies:\n")
fmt.Printf("%s\n", proj.ProjectPoliciesString())
fmt.Printf("Groups:\n")
fmt.Print("Groups:\n")
// if the group exists in the role
// range over each group and print it
if v1alpha1.RoleGroupExists(role) {
@@ -615,9 +615,9 @@ ID ISSUED-AT EXPIRES-AT
} else {
fmt.Println("<none>")
}
fmt.Printf("JWT Tokens:\n")
fmt.Print("JWT Tokens:\n")
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
fmt.Fprint(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
for _, token := range proj.Status.JWTTokensByRole[roleName].Items {
expiresAt := "<none>"
if token.ExpiresAt > 0 {


@@ -42,6 +42,8 @@ argocd proj windows list <project-name>`,
}
roleCommand.AddCommand(NewProjectWindowsDisableManualSyncCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsEnableManualSyncCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsDisableSyncOverrunCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsEnableSyncOverrunCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsAddWindowCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsDeleteCommand(clientOpts))
roleCommand.AddCommand(NewProjectWindowsListCommand(clientOpts))
@@ -49,18 +51,13 @@ argocd proj windows list <project-name>`,
return roleCommand
}
// NewProjectWindowsDisableManualSyncCommand returns a new instance of an `argocd proj windows disable-manual-sync` command
func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command := &cobra.Command{
Use: "disable-manual-sync PROJECT ID",
Short: "Disable manual sync for a sync window",
Long: "Disable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Example: `
#Disable manual sync for a sync window for the Project
argocd proj windows disable-manual-sync PROJECT ID
#Disabling manual sync for a windows set on the default project with Id 0
argocd proj windows disable-manual-sync default 0`,
// newProjectWindowsToggleCommand creates a command for toggling a boolean field on a sync window
func newProjectWindowsToggleCommand(clientOpts *argocdclient.ClientOptions, use, short, long, example string, updateFn func(*v1alpha1.SyncWindow)) *cobra.Command {
return &cobra.Command{
Use: use,
Short: short,
Long: long,
Example: example,
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -79,26 +76,51 @@ argocd proj windows disable-manual-sync default 0`,
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
found := false
for i, window := range proj.Spec.SyncWindows {
if id == i {
window.ManualSync = false
updateFn(window)
found = true
break
}
}
if !found {
errors.CheckError(fmt.Errorf("window with id '%d' not found", id))
}
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
},
}
return command
}
// NewProjectWindowsDisableManualSyncCommand returns a new instance of an `argocd proj windows disable-manual-sync` command
func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
return newProjectWindowsToggleCommand(
clientOpts,
"disable-manual-sync PROJECT ID",
"Disable manual sync for a sync window",
"Disable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
`
#Disable manual sync for a sync window for the Project
argocd proj windows disable-manual-sync PROJECT ID
#Disabling manual sync for a windows set on the default project with Id 0
argocd proj windows disable-manual-sync default 0`,
func(window *v1alpha1.SyncWindow) {
window.ManualSync = false
},
)
}
// NewProjectWindowsEnableManualSyncCommand returns a new instance of an `argocd proj windows enable-manual-sync` command
func NewProjectWindowsEnableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
command := &cobra.Command{
Use: "enable-manual-sync PROJECT ID",
Short: "Enable manual sync for a sync window",
Long: "Enable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
Example: `
return newProjectWindowsToggleCommand(
clientOpts,
"enable-manual-sync PROJECT ID",
"Enable manual sync for a sync window",
"Enable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
`
#Enabling manual sync for a general case
argocd proj windows enable-manual-sync PROJECT ID
@@ -107,35 +129,48 @@ argocd proj windows enable-manual-sync default 2
#Enabling manual sync with a custom message
argocd proj windows enable-manual-sync my-app-project --message "Manual sync initiated by admin"`,
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
if len(args) != 2 {
c.HelpFunc()(c, args)
os.Exit(1)
}
projName := args[0]
id, err := strconv.Atoi(args[1])
errors.CheckError(err)
conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
defer utilio.Close(conn)
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
for i, window := range proj.Spec.SyncWindows {
if id == i {
window.ManualSync = true
}
}
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
func(window *v1alpha1.SyncWindow) {
window.ManualSync = true
},
}
return command
)
}
// NewProjectWindowsDisableSyncOverrunCommand returns a new instance of an `argocd proj windows disable-sync-overrun` command
func NewProjectWindowsDisableSyncOverrunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
return newProjectWindowsToggleCommand(
clientOpts,
"disable-sync-overrun PROJECT ID",
"Disable sync overrun for a sync window",
"Disable sync overrun for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
`
#Disable sync overrun for a sync window for the Project
argocd proj windows disable-sync-overrun PROJECT ID
#Disabling sync overrun for a window set on the default project with Id 0
argocd proj windows disable-sync-overrun default 0`,
func(window *v1alpha1.SyncWindow) {
window.SyncOverrun = false
},
)
}
// NewProjectWindowsEnableSyncOverrunCommand returns a new instance of an `argocd proj windows enable-sync-overrun` command
func NewProjectWindowsEnableSyncOverrunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
return newProjectWindowsToggleCommand(
clientOpts,
"enable-sync-overrun PROJECT ID",
"Enable sync overrun for a sync window",
"Enable sync overrun for a sync window. When enabled on a deny window, syncs that started before the deny window will be allowed to continue. When enabled on an allow window, syncs that started during the allow window can continue after the window ends. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
`
#Enable sync overrun for a sync window
argocd proj windows enable-sync-overrun PROJECT ID
#Enabling sync overrun for a window set on the default project with Id 2
argocd proj windows enable-sync-overrun default 2`,
func(window *v1alpha1.SyncWindow) {
window.SyncOverrun = true
},
)
}
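The refactor above collapses four near-identical command bodies (enable/disable manual sync, enable/disable sync overrun) into one factory parameterized by an update closure. A standalone sketch of that pattern with simplified types (not the Argo CD API):

```go
package main

import "fmt"

// SyncWindow is a simplified stand-in for the project sync window type.
type SyncWindow struct {
	ManualSync  bool
	SyncOverrun bool
}

// toggleCommand returns a function that applies updateFn to the window
// with the given id, mirroring the closure-based factory above: the
// shared lookup/not-found logic lives here once, and each command only
// supplies the field mutation.
func toggleCommand(updateFn func(*SyncWindow)) func(windows []*SyncWindow, id int) error {
	return func(windows []*SyncWindow, id int) error {
		for i, w := range windows {
			if i == id {
				updateFn(w)
				return nil
			}
		}
		return fmt.Errorf("window with id '%d' not found", id)
	}
}

func main() {
	enableOverrun := toggleCommand(func(w *SyncWindow) { w.SyncOverrun = true })
	windows := []*SyncWindow{{}, {}}
	if err := enableOverrun(windows, 1); err != nil {
		panic(err)
	}
	fmt.Println(windows[1].SyncOverrun)
}
```

Each of the four public constructors then differs only in its usage strings and its one-line closure, which is why the diff deletes far more code than it adds.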
// NewProjectWindowsAddWindowCommand returns a new instance of an `argocd proj windows add` command
@@ -148,6 +183,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
namespaces []string
clusters []string
manualSync bool
syncOverrun bool
timeZone string
andOperator bool
description string
@@ -164,7 +200,7 @@ argocd proj windows add PROJECT \
--applications "*" \
--description "Ticket 123"
#Add a deny sync window with the ability to manually sync.
#Add a deny sync window with the ability to manually sync and sync overrun.
argocd proj windows add PROJECT \
--kind deny \
--schedule "30 10 * * *" \
@@ -173,8 +209,8 @@ argocd proj windows add PROJECT \
--namespaces "default,\\*-prod" \
--clusters "prod,staging" \
--manual-sync \
--description "Ticket 123"
`,
--sync-overrun \
--description "Ticket 123"`,
Run: func(c *cobra.Command, args []string) {
ctx := c.Context()
@@ -189,7 +225,7 @@ argocd proj windows add PROJECT \
proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
errors.CheckError(err)
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone, andOperator, description)
err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone, andOperator, description, syncOverrun)
errors.CheckError(err)
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
@@ -203,6 +239,7 @@ argocd proj windows add PROJECT \
command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
command.Flags().BoolVar(&manualSync, "manual-sync", false, "Allow manual syncs for both deny and allow windows")
command.Flags().BoolVar(&syncOverrun, "sync-overrun", false, "Allow syncs to continue: for deny windows, syncs that started before the window; for allow windows, syncs that started during the window")
command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window")
command.Flags().BoolVar(&andOperator, "use-and-operator", false, "Use AND operator for matching applications, namespaces and clusters instead of the default OR operator")
command.Flags().StringVar(&description, "description", "", `Sync window description`)
@@ -248,7 +285,7 @@ argocd proj windows delete new-project 1`,
_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
errors.CheckError(err)
} else {
fmt.Printf("The command to delete the sync window was cancelled\n")
fmt.Print("The command to delete the sync window was cancelled\n")
}
},
}
@@ -362,7 +399,7 @@ argocd proj windows list test-project`,
func printSyncWindows(proj *v1alpha1.AppProject) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
var fmtStr string
headers := []any{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "TIMEZONE", "USEANDOPERATOR"}
headers := []any{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"}
fmtStr = strings.Repeat("%s\t", len(headers)) + "\n"
fmt.Fprintf(w, fmtStr, headers...)
if proj.Spec.SyncWindows.HasWindows() {
@@ -378,6 +415,7 @@ func printSyncWindows(proj *v1alpha1.AppProject) {
formatListOutput(window.Namespaces),
formatListOutput(window.Clusters),
formatBoolEnabledOutput(window.ManualSync),
formatBoolEnabledOutput(window.SyncOverrun),
window.TimeZone,
formatBoolEnabledOutput(window.UseAndOperator),
}


@@ -1,6 +1,11 @@
package commands
import (
"bytes"
"io"
"os"
"regexp"
"strings"
"testing"
"github.com/stretchr/testify/assert"
@@ -11,30 +16,229 @@ import (
)
func TestPrintSyncWindows(t *testing.T) {
proj := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{Name: "test-project"},
Spec: v1alpha1.AppProjectSpec{
SyncWindows: v1alpha1.SyncWindows{
{
Kind: "allow",
Schedule: "* * * * *",
Duration: "1h",
Applications: []string{"app1"},
Namespaces: []string{"ns1"},
Clusters: []string{"cluster1"},
ManualSync: true,
UseAndOperator: true,
tests := []struct {
name string
project *v1alpha1.AppProject
expectedHeader []string
expectedRows [][]string
}{
{
name: "Project with multiple sync windows including syncOverrun",
project: &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
},
Spec: v1alpha1.AppProjectSpec{
SyncWindows: v1alpha1.SyncWindows{
{
Kind: "allow",
Schedule: "0 0 * * *",
Duration: "1h",
Applications: []string{"app1", "app2"},
Namespaces: []string{"default"},
Clusters: []string{"cluster1"},
ManualSync: false,
SyncOverrun: false,
TimeZone: "UTC",
UseAndOperator: false,
},
{
Kind: "deny",
Schedule: "0 12 * * *",
Duration: "2h",
Applications: []string{"*"},
Namespaces: []string{"production"},
Clusters: []string{"*"},
ManualSync: true,
SyncOverrun: true,
TimeZone: "America/New_York",
UseAndOperator: true,
},
},
},
},
expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
expectedRows: [][]string{
{"0", "Inactive", "allow", "0 0 * * *", "1h", "app1,app2", "default", "cluster1", "Disabled", "Disabled", "UTC", "Disabled"},
{"1", "Inactive", "deny", "0 12 * * *", "2h", "*", "production", "*", "Enabled", "Enabled", "America/New_York", "Enabled"},
},
},
{
name: "Project with empty sync window lists",
project: &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
},
Spec: v1alpha1.AppProjectSpec{
SyncWindows: v1alpha1.SyncWindows{
{
Kind: "allow",
Schedule: "0 1 * * *",
Duration: "30m",
Applications: []string{},
Namespaces: []string{},
Clusters: []string{},
ManualSync: false,
SyncOverrun: false,
TimeZone: "UTC",
UseAndOperator: false,
},
},
},
},
expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
expectedRows: [][]string{
{"0", "Inactive", "allow", "0 1 * * *", "30m", "-", "-", "-", "Disabled", "Disabled", "UTC", "Disabled"},
},
},
{
name: "Project with no sync windows",
project: &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
},
Spec: v1alpha1.AppProjectSpec{
SyncWindows: v1alpha1.SyncWindows{},
},
},
expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
expectedRows: [][]string{},
},
}
output, err := captureOutput(func() error {
printSyncWindows(proj)
return nil
})
require.NoError(t, err)
t.Log(output)
assert.Contains(t, output, "ID STATUS KIND SCHEDULE DURATION APPLICATIONS NAMESPACES CLUSTERS MANUALSYNC TIMEZONE USEANDOPERATOR")
assert.Contains(t, output, "0 Active allow * * * * * 1h app1 ns1 cluster1 Enabled Enabled")
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Capture stdout
oldStdout := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
// Call the function
printSyncWindows(tt.project)
// Restore stdout
w.Close()
os.Stdout = oldStdout
// Read captured output
var buf bytes.Buffer
_, err := io.Copy(&buf, r)
require.NoError(t, err)
output := buf.String()
// Parse the table output
lines := strings.Split(strings.TrimSpace(output), "\n")
assert.GreaterOrEqual(t, len(lines), 1, "Should have at least a header line")
// Parse header line (split by whitespace for headers since they don't contain spaces)
headerLine := lines[0]
headerFields := strings.Fields(headerLine)
assert.Len(t, headerFields, len(tt.expectedHeader), "Header should have correct number of columns")
assert.Equal(t, tt.expectedHeader, headerFields, "Header columns should match expected")
// Parse data rows
dataLines := lines[1:]
assert.Len(t, dataLines, len(tt.expectedRows), "Should have expected number of data rows")
for i, dataLine := range dataLines {
// Split by 2 or more spaces (tabwriter output uses multiple spaces as separators)
re := regexp.MustCompile(`\s{2,}`)
fields := re.Split(strings.TrimSpace(dataLine), -1)
assert.Len(t, fields, len(tt.expectedRows[i]), "Row %d should have correct number of columns", i)
for j, expectedValue := range tt.expectedRows[i] {
assert.Equal(t, expectedValue, fields[j], "Row %d, column %d should match expected value", i, j)
}
}
})
}
}
func TestFormatListOutput(t *testing.T) {
tests := []struct {
name string
input []string
expected string
}{
{
name: "Empty list",
input: []string{},
expected: "-",
},
{
name: "Single item",
input: []string{"app1"},
expected: "app1",
},
{
name: "Multiple items",
input: []string{"app1", "app2", "app3"},
expected: "app1,app2,app3",
},
{
name: "Wildcard",
input: []string{"*"},
expected: "*",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := formatListOutput(tt.input)
assert.Equal(t, tt.expected, result)
})
}
}
func TestFormatBoolOutput(t *testing.T) {
tests := []struct {
name string
input bool
expected string
}{
{
name: "Active",
input: true,
expected: "Active",
},
{
name: "Inactive",
input: false,
expected: "Inactive",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := formatBoolOutput(tt.input)
assert.Equal(t, tt.expected, result)
})
}
}
func TestFormatBoolEnabledOutput(t *testing.T) {
tests := []struct {
name string
input bool
expected string
}{
{
name: "Enabled",
input: true,
expected: "Enabled",
},
{
name: "Disabled",
input: false,
expected: "Disabled",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := formatBoolEnabledOutput(tt.input)
assert.Equal(t, tt.expected, result)
})
}
}
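The three helper tests above fully pin down the formatting behavior: empty lists render as `-`, non-empty lists are comma-joined, and the two boolean formatters map to Active/Inactive and Enabled/Disabled. A sketch of implementations consistent with those tests (inferred from the expected values, not copied from the source file):

```go
package main

import (
	"fmt"
	"strings"
)

// formatListOutput renders a slice for table output: "-" when empty,
// otherwise comma-joined, matching TestFormatListOutput.
func formatListOutput(list []string) string {
	if len(list) == 0 {
		return "-"
	}
	return strings.Join(list, ",")
}

// formatBoolOutput reports a window's status, matching TestFormatBoolOutput.
func formatBoolOutput(b bool) string {
	if b {
		return "Active"
	}
	return "Inactive"
}

// formatBoolEnabledOutput reports a feature toggle, matching
// TestFormatBoolEnabledOutput.
func formatBoolEnabledOutput(b bool) string {
	if b {
		return "Enabled"
	}
	return "Disabled"
}

func main() {
	fmt.Println(formatListOutput(nil), formatBoolOutput(true), formatBoolEnabledOutput(false))
}
```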


@@ -40,7 +40,7 @@ func NewReloginCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
errors.CheckError(err)
if localCfg == nil {
log.Fatalf("No context found. Login using `argocd login`")
log.Fatal("No context found. Login using `argocd login`")
}
configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
errors.CheckError(err)


@@ -102,6 +102,12 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
# Add a private Git repository on Google Cloud Sources via GCP service account credentials
argocd repo add https://source.developers.google.com/p/my-google-cloud-project/r/my-repo --gcp-service-account-key-path service-account-key.json
# Add a private Git repository on Azure Devops via Azure Service Principal credentials
argocd repo add https://dev.azure.com/my-devops-organization/my-devops-project/_git/my-devops-repo --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012
# Add a private Git repository on Azure Devops via Azure Service Principal credentials when not using default Azure public cloud
argocd repo add https://dev.azure.com/my-devops-organization/my-devops-project/_git/my-devops-repo --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012 --azure-active-directory-endpoint https://login.microsoftonline.de
`
command := &cobra.Command{
@@ -191,7 +197,12 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
repoOpts.Repo.NoProxy = repoOpts.NoProxy
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
repoOpts.Repo.AzureServicePrincipalTenantId = repoOpts.AzureServicePrincipalTenantId
repoOpts.Repo.AzureServicePrincipalClientId = repoOpts.AzureServicePrincipalClientId
repoOpts.Repo.AzureServicePrincipalClientSecret = repoOpts.AzureServicePrincipalClientSecret
repoOpts.Repo.AzureActiveDirectoryEndpoint = repoOpts.AzureActiveDirectoryEndpoint
repoOpts.Repo.Depth = repoOpts.Depth
repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")
@@ -225,27 +236,31 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// are high that we do not have the given URL pointing to a valid Git
// repo anyway.
repoAccessReq := repositorypkg.RepoAccessQuery{
Repo: repoOpts.Repo.Repo,
Type: repoOpts.Repo.Type,
Name: repoOpts.Repo.Name,
Username: repoOpts.Repo.Username,
Password: repoOpts.Repo.Password,
BearerToken: repoOpts.Repo.BearerToken,
SshPrivateKey: repoOpts.Repo.SSHPrivateKey,
TlsClientCertData: repoOpts.Repo.TLSClientCertData,
TlsClientCertKey: repoOpts.Repo.TLSClientCertKey,
Insecure: repoOpts.Repo.IsInsecure(),
EnableOci: repoOpts.Repo.EnableOCI,
GithubAppPrivateKey: repoOpts.Repo.GithubAppPrivateKey,
GithubAppID: repoOpts.Repo.GithubAppId,
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
Proxy: repoOpts.Proxy,
Project: repoOpts.Repo.Project,
GcpServiceAccountKey: repoOpts.Repo.GCPServiceAccountKey,
ForceHttpBasicAuth: repoOpts.Repo.ForceHttpBasicAuth,
UseAzureWorkloadIdentity: repoOpts.Repo.UseAzureWorkloadIdentity,
InsecureOciForceHttp: repoOpts.Repo.InsecureOCIForceHttp,
Repo: repoOpts.Repo.Repo,
Type: repoOpts.Repo.Type,
Name: repoOpts.Repo.Name,
Username: repoOpts.Repo.Username,
Password: repoOpts.Repo.Password,
BearerToken: repoOpts.Repo.BearerToken,
SshPrivateKey: repoOpts.Repo.SSHPrivateKey,
TlsClientCertData: repoOpts.Repo.TLSClientCertData,
TlsClientCertKey: repoOpts.Repo.TLSClientCertKey,
Insecure: repoOpts.Repo.IsInsecure(),
EnableOci: repoOpts.Repo.EnableOCI,
GithubAppPrivateKey: repoOpts.Repo.GithubAppPrivateKey,
GithubAppID: repoOpts.Repo.GithubAppId,
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
Proxy: repoOpts.Proxy,
Project: repoOpts.Repo.Project,
GcpServiceAccountKey: repoOpts.Repo.GCPServiceAccountKey,
ForceHttpBasicAuth: repoOpts.Repo.ForceHttpBasicAuth,
UseAzureWorkloadIdentity: repoOpts.Repo.UseAzureWorkloadIdentity,
InsecureOciForceHttp: repoOpts.Repo.InsecureOCIForceHttp,
AzureServicePrincipalTenantId: repoOpts.Repo.AzureServicePrincipalTenantId,
AzureServicePrincipalClientId: repoOpts.Repo.AzureServicePrincipalClientId,
AzureServicePrincipalClientSecret: repoOpts.Repo.AzureServicePrincipalClientSecret,
AzureActiveDirectoryEndpoint: repoOpts.Repo.AzureActiveDirectoryEndpoint,
}
_, err = repoIf.ValidateAccess(ctx, &repoAccessReq)
errors.CheckError(err)
@@ -314,7 +329,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// Print table of repo info
func printRepoTable(repos appsv1.Repositories) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
_, _ = fmt.Fprint(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
for _, r := range repos {
var hasCreds string
if r.InheritedCreds {


@@ -83,6 +83,12 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
# Add credentials with GCP credentials for all repositories under https://source.developers.google.com/p/my-google-cloud-project/r/
argocd repocreds add https://source.developers.google.com/p/my-google-cloud-project/r/ --gcp-service-account-key-path service-account-key.json
# Add credentials with Azure Service Principal to use for all repositories under https://dev.azure.com/my-devops-organization
argocd repocreds add https://dev.azure.com/my-devops-organization --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012
# Add credentials with Azure Service Principal to use for all repositories under https://dev.azure.com/my-devops-organization when not using default Azure public cloud
argocd repocreds add https://dev.azure.com/my-devops-organization --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012 --azure-active-directory-endpoint https://login.microsoftonline.de
`
command := &cobra.Command{
@@ -201,6 +207,10 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
command.Flags().BoolVar(&repo.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force basic auth when connecting via HTTP")
command.Flags().BoolVar(&repo.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().StringVar(&repo.Proxy, "proxy-url", "", "If provided, this URL will be used to connect via proxy")
command.Flags().StringVar(&repo.AzureServicePrincipalClientId, "azure-service-principal-client-id", "", "client id of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureServicePrincipalClientSecret, "azure-service-principal-client-secret", "", "client secret of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureServicePrincipalTenantId, "azure-service-principal-tenant-id", "", "tenant id of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureActiveDirectoryEndpoint, "azure-active-directory-endpoint", "", "Active Directory endpoint when not using default Azure public cloud (e.g. https://login.microsoftonline.de)")
return command
}
@@ -243,7 +253,7 @@ func NewRepoCredsRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
// Print the repository credentials as table
func printRepoCredsTable(repos []appsv1.RepoCreds) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "URL PATTERN\tUSERNAME\tSSH_CREDS\tTLS_CREDS\n")
fmt.Fprint(w, "URL PATTERN\tUSERNAME\tSSH_CREDS\tTLS_CREDS\n")
for _, r := range repos {
if r.Username == "" {
r.Username = "-"


@@ -541,7 +541,7 @@ func SetParameterOverrides(app *argoappv1.Application, parameters []string, inde
source.Helm.AddParameter(*newParam)
}
default:
log.Fatalf("Parameters can only be set against Helm applications")
log.Fatal("Parameters can only be set against Helm applications")
}
}


@@ -35,7 +35,7 @@ func TestReadAppSet(t *testing.T) {
var appSets []*argoprojiov1alpha1.ApplicationSet
err := readAppset([]byte(appSet), &appSets)
if err != nil {
t.Logf("Failed reading appset file")
t.Log("Failed reading appset file")
}
assert.Len(t, appSets, 1)
}


@@ -8,26 +8,31 @@ import (
)
type RepoOptions struct {
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
WebhookManifestCacheWarmDisabled bool
AzureServicePrincipalTenantId string
AzureServicePrincipalClientId string
AzureServicePrincipalClientSecret string
AzureActiveDirectoryEndpoint string
}
func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -55,4 +60,9 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed using the depth of 0")
command.Flags().BoolVar(&opts.WebhookManifestCacheWarmDisabled, "webhook-manifest-cache-warm-disabled", false, "disable manifest cache warming during webhook processing for this repository (recommended for large monorepos with plain YAML manifests)")
command.Flags().StringVar(&opts.AzureServicePrincipalTenantId, "azure-service-principal-tenant-id", "", "tenant id of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureServicePrincipalClientId, "azure-service-principal-client-id", "", "client id of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureServicePrincipalClientSecret, "azure-service-principal-client-secret", "", "client secret of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureActiveDirectoryEndpoint, "azure-active-directory-endpoint", "", "Active Directory endpoint when not using default Azure public cloud (e.g. https://login.microsoftonline.de)")
}


@@ -10,7 +10,7 @@ import (
"github.com/Masterminds/sprig/v3"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
@@ -102,9 +102,6 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
}
}
// if no manifest changes then skip commit
if !atleastOneManifestChanged {
return false, nil
}
return atleastOneManifestChanged, nil
}
@@ -140,11 +137,13 @@ func writeReadme(root *os.Root, dirPath string, metadata hydrator.HydratorCommit
if err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create README file: %w", err)
}
defer func() {
err := readmeFile.Close()
if err != nil {
log.WithError(err).Error("failed to close README file")
}
}()
err = readmeTemplate.Execute(readmeFile, metadata)
closeErr := readmeFile.Close()
if closeErr != nil {
log.WithError(closeErr).Error("failed to close README file")
}
if err != nil {
return fmt.Errorf("failed to execute readme template: %w", err)
}


@@ -137,6 +137,9 @@ const (
ChangePasswordSSOTokenMaxAge = time.Minute * 5
// GithubAppCredsExpirationDuration is the default time used to cache the GitHub app credentials
GithubAppCredsExpirationDuration = time.Minute * 60
// AzureServicePrincipalCredsExpirationDuration is the default time used to cache the Azure service principal credentials
// SP tokens are valid for 60 minutes, so cache for 59 minutes to avoid issues with token expiration when taking the cleanup interval of 1 minute into account
AzureServicePrincipalCredsExpirationDuration = time.Minute * 59
// PasswordPatten is the default password patten
PasswordPatten = `^.{8,32}$`
@@ -297,6 +300,8 @@ const (
EnvEnableGRPCTimeHistogramEnv = "ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM"
// EnvGithubAppCredsExpirationDuration controls the caching of Github app credentials. This value is in minutes (default: 60)
EnvGithubAppCredsExpirationDuration = "ARGOCD_GITHUB_APP_CREDS_EXPIRATION_DURATION"
// EnvAzureServicePrincipalCredsExpirationDuration controls the caching of Azure service principal credentials. This value is in minutes (default: 59). Any value greater than 59 will be set to 59 minutes
EnvAzureServicePrincipalCredsExpirationDuration = "ARGOCD_AZURE_SERVICE_PRINCIPAL_CREDS_EXPIRATION_DURATION"
// EnvHelmIndexCacheDuration controls how the helm repository index file is cached for (default: 0)
EnvHelmIndexCacheDuration = "ARGOCD_HELM_INDEX_CACHE_DURATION"
// EnvAppConfigPath allows to override the configuration path for repo server


@@ -7,7 +7,6 @@ import (
"fmt"
"maps"
"math"
"math/rand"
"net/http"
"reflect"
"runtime/debug"
@@ -27,6 +26,7 @@ import (
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -125,7 +125,6 @@ type ApplicationController struct {
stateCache statecache.LiveStateCache
statusRefreshTimeout time.Duration
statusHardRefreshTimeout time.Duration
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackoff *wait.Backoff
syncTimeout time.Duration
@@ -202,7 +201,6 @@ func NewApplicationController(
db: db,
statusRefreshTimeout: appResyncPeriod,
statusHardRefreshTimeout: appHardResyncPeriod,
statusRefreshJitter: appResyncJitter,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(kubeClientset, namespace, common.CommandApplicationController, enableK8sEvent),
@@ -1016,17 +1014,54 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
}
var app *appv1.Application
var logCtx *log.Entry
if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
parts := strings.Split(appKey, "/")
if len(parts) != 2 {
log.WithField("appkey", appKey).Warn("Unexpected appKey format, expected namespace/name")
return processNext
}
appNamespace, appName := parts[0], parts[1]
freshApp, apiErr := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(appNamespace).Get(context.Background(), appName, metav1.GetOptions{})
if apiErr != nil {
if apierrors.IsNotFound(apiErr) {
return processNext
}
log.WithField("appkey", appKey).WithError(apiErr).Error("Failed to retrieve application from API server")
return processNext
}
if freshApp.Operation == nil {
return processNext
}
app = freshApp
logCtx = log.WithFields(applog.GetAppLogFields(app))
} else {
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
app = origApp.DeepCopy()
logCtx = log.WithFields(applog.GetAppLogFields(app))
if app.Operation != nil {
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
if !apierrors.IsNotFound(err) {
logCtx.WithError(err).Error("Failed to retrieve latest application state")
}
return processNext
}
if freshApp.Operation == nil {
return processNext
}
app = freshApp
}
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))
ts := stats.NewTimingStats()
defer func() {
for k, v := range ts.Timings() {
@@ -1035,18 +1070,6 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
logCtx.Debug("Finished processing app operation queue item")
}()
if app.Operation != nil {
// If we get here, we are about to process an operation, but we cannot rely on informer since it might have stale data.
// So always retrieve the latest version to ensure it is not stale to avoid unnecessary syncing.
// We cannot rely on informer since applications might be updated by both application controller and api server.
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
logCtx.WithError(err).Error("Failed to retrieve latest application state")
return processNext
}
app = freshApp
}
ts.AddCheckpoint("get_fresh_app_ms")
if app.Operation != nil {
@@ -1773,7 +1796,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
return processNext
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
@@ -1787,7 +1810,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = health.HealthStatusUnknown
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.WithError(err).Warn("failed to set app resource tree")
@@ -1851,7 +1874,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
logCtx = logCtx.WithField(k, v.Milliseconds())
}
ctrl.normalizeApplication(origApp, app)
ctrl.normalizeApplication(app)
ts.AddCheckpoint("normalize_application_ms")
tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
@@ -1862,7 +1885,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary(app)
}
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false)
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false, nil)
if canSync {
syncErrCond, opDuration := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionsMayHaveChanges)
setOpDuration = opDuration
@@ -1928,7 +1951,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
ts.AddCheckpoint("process_finalizers_ms")
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
// This is a partly a duplicate of patch_ms, but more descriptive and allows to have measurement for the next step.
ts.AddCheckpoint("persist_app_status_ms")
return processNext
@@ -2090,7 +2113,8 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
}
// normalizeApplication normalizes an application.spec and additionally persists updates if it changed
func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Application) {
func (ctrl *ApplicationController) normalizeApplication(app *appv1.Application) {
orig := app.DeepCopy()
app.Spec = *argo.NormalizeApplicationSpec(&app.Spec)
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -2124,8 +2148,17 @@ func createMergePatch(orig, newV any) ([]byte, bool, error) {
return patch, string(patch) != "{}", nil
}
// persistAppStatus persists updates to application status. If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) (patchDuration time.Duration) {
// persistReconciliationStatus persists updates to application status and consumes the refresh annotation.
func (ctrl *ApplicationController) persistReconciliationStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) time.Duration {
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
return ctrl.persistAppStatus(orig, newStatus, newAnnotations)
}
// persistAppStatus persists updates to application status and optionally updates annotations.
// If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus, newAnnotations map[string]string) (patchDuration time.Duration) {
logCtx := log.WithFields(applog.GetAppLogFields(orig))
if orig.Status.Sync.Status != newStatus.Sync.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", orig.Status.Sync.Status, newStatus.Sync.Status)
@@ -2143,13 +2176,6 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
// make sure the last transition time is the same and populated if the health is the same
newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
newAnnotations = make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
delete(newAnnotations, appv1.AnnotationKeyHydrate)
}
patch, modified, err := createMergePatch(
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
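The new `persistReconciliationStatus` wrapper copies the annotations and drops the refresh key before delegating to `persistAppStatus`, so a completed reconciliation "consumes" the user's refresh request. The copy-and-delete step can be sketched standalone; the annotation key value used below is an assumption standing in for `appv1.AnnotationKeyRefresh`:

```go
package main

import (
	"fmt"
	"maps"
)

// annotationKeyRefresh stands in for appv1.AnnotationKeyRefresh; the literal
// value here is an assumption for this sketch.
const annotationKeyRefresh = "argocd.argoproj.io/refresh"

// consumeRefreshAnnotation returns a copy of the annotations with the
// refresh key removed, as persistReconciliationStatus does. The original
// map is left untouched so the informer's cached object is not mutated.
func consumeRefreshAnnotation(orig map[string]string) map[string]string {
	out := make(map[string]string)
	maps.Copy(out, orig)
	delete(out, annotationKeyRefresh)
	return out
}

func main() {
	in := map[string]string{annotationKeyRefresh: "normal", "team": "platform"}
	fmt.Println(consumeRefreshAnnotation(in))
}
```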
@@ -2319,7 +2345,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
ctrl.writeBackToInformer(updatedApp)
ts.AddCheckpoint("write_back_to_informer_ms")
message := fmt.Sprintf("Initiated automated sync to %s", desiredRevisions)
message := fmt.Sprintf("Initiated automated sync to '%s'", strings.Join(desiredRevisions, ", "))
ctrl.logAppEvent(context.TODO(), app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: corev1.EventTypeNormal}, message)
logCtx.Info(message)
return nil, setOpTime
@@ -2438,6 +2464,29 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return ctrl.clusterSharding.IsManagedCluster(destCluster)
}
func operationChanged(oldApp, newApp *appv1.Application) bool {
return (oldApp.Operation == nil && newApp.Operation != nil) ||
(oldApp.Operation != nil && newApp.Operation != nil && !equality.Semantic.DeepEqual(oldApp.Operation, newApp.Operation))
}
func deletionTimestampChanged(oldApp, newApp *appv1.Application) bool {
return (oldApp.DeletionTimestamp == nil && newApp.DeletionTimestamp != nil) ||
(oldApp.DeletionTimestamp != nil && newApp.DeletionTimestamp != nil && !oldApp.DeletionTimestamp.Equal(newApp.DeletionTimestamp))
}
func isStatusOnlyUpdate(oldApp, newApp *appv1.Application) bool {
if !equality.Semantic.DeepEqual(oldApp.Spec, newApp.Spec) {
return false
}
if operationChanged(oldApp, newApp) {
return false
}
if deletionTimestampChanged(oldApp, newApp) || newApp.DeletionTimestamp != nil {
return false
}
return true
}
func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.SharedIndexInformer, applisters.ApplicationLister) {
watchNamespace := ctrl.namespace
// If we have at least one additional namespace configured, we need to
@@ -2530,34 +2579,59 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
}
},
UpdateFunc: func(old, new any) {
key, err := cache.MetaNamespaceKeyFunc(new)
if err != nil {
return
}
oldApp, oldOK := old.(*appv1.Application)
newApp, newOK := new.(*appv1.Application)
if !ctrl.canProcessApp(new) {
return
}
if newOK && newApp.Operation != nil {
ctrl.appOperationQueue.AddRateLimited(key)
}
var compareWith *CompareWith
var delay *time.Duration
if oldOK && newOK {
if oldApp.ResourceVersion == newApp.ResourceVersion {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
if isStatusOnlyUpdate(oldApp, newApp) {
oldAnnotations := oldApp.GetAnnotations()
newAnnotations := newApp.GetAnnotations()
refreshAdded := (oldAnnotations == nil || oldAnnotations[appv1.AnnotationKeyRefresh] == "") &&
(newAnnotations != nil && newAnnotations[appv1.AnnotationKeyRefresh] != "")
hydrateAdded := (oldAnnotations == nil || oldAnnotations[appv1.AnnotationKeyHydrate] == "") &&
(newAnnotations != nil && newAnnotations[appv1.AnnotationKeyHydrate] != "")
if !refreshAdded && !hydrateAdded {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
}
if automatedSyncEnabled(oldApp, newApp) {
log.WithFields(applog.GetAppLogFields(newApp)).Info("Enabled automated sync")
compareWith = CompareWithLatest.Pointer()
}
if ctrl.statusRefreshJitter != 0 && oldApp.ResourceVersion == newApp.ResourceVersion {
// Handler is refreshing the apps, add a random jitter to spread the load and avoid spikes
jitter := time.Duration(float64(ctrl.statusRefreshJitter) * rand.Float64())
delay = &jitter
}
}
ctrl.requestAppRefresh(newApp.QualifiedName(), compareWith, delay)
if !newOK {
ctrl.appOperationQueue.AddRateLimited(key)
}
if ctrl.hydrator != nil {
@ -2570,7 +2644,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
return
}
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
// for deletes, we immediately add to the refresh queue
@ -2689,7 +2763,7 @@ func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config,
if !impersonationEnabled {
return nil
}
user, err := settings_util.DeriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}


@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"strconv"
"testing"
"time"
@ -14,6 +15,7 @@ import (
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube/kubetest"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/util/wait"
@ -662,8 +664,7 @@ func TestAutoSync(t *testing.T) {
func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(true)}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@ -789,8 +790,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when auto-sync is disabled
t.Run("AutoSyncEnableFieldIsSetFalse", func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(false)}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@ -1993,6 +1993,252 @@ func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
assert.Equal(t, CompareWithLatest, compareWith)
}
func TestApplicationInformerUpdateFunc(t *testing.T) {
// Test that UpdateFunc correctly handles:
// 1. Status-only updates (no annotation) - should NOT trigger refresh
// 2. Status-only updates WITH refresh annotation - should trigger refresh
// 3. Spec changes - should trigger refresh
// 4. Informer resync (same ResourceVersion) - should NOT trigger refresh
app := newFakeApp()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)
simulateUpdateFunc := func(oldApp, newApp *v1alpha1.Application) {
if !ctrl.canProcessApp(newApp) {
return
}
key, err := cache.MetaNamespaceKeyFunc(newApp)
if err != nil {
return
}
var compareWith *CompareWith
var delay *time.Duration
oldOK := oldApp != nil
newOK := newApp != nil
if oldOK && newOK {
if oldApp.ResourceVersion == newApp.ResourceVersion {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
// Check if operation was added or changed - always process operations
operationChanged := (oldApp.Operation == nil && newApp.Operation != nil) ||
(oldApp.Operation != nil && newApp.Operation != nil && !equality.Semantic.DeepEqual(oldApp.Operation, newApp.Operation))
deletionTimestampChanged := (oldApp.DeletionTimestamp == nil && newApp.DeletionTimestamp != nil) ||
(oldApp.DeletionTimestamp != nil && newApp.DeletionTimestamp != nil && !oldApp.DeletionTimestamp.Equal(newApp.DeletionTimestamp))
appBeingDeleted := newApp.DeletionTimestamp != nil
if equality.Semantic.DeepEqual(oldApp.Spec, newApp.Spec) && !operationChanged && !deletionTimestampChanged && !appBeingDeleted {
oldAnnotations := oldApp.GetAnnotations()
newAnnotations := newApp.GetAnnotations()
refreshAdded := (oldAnnotations == nil || oldAnnotations[v1alpha1.AnnotationKeyRefresh] == "") &&
(newAnnotations != nil && newAnnotations[v1alpha1.AnnotationKeyRefresh] != "")
hydrateAdded := (oldAnnotations == nil || oldAnnotations[v1alpha1.AnnotationKeyHydrate] == "") &&
(newAnnotations != nil && newAnnotations[v1alpha1.AnnotationKeyHydrate] != "")
if !refreshAdded && !hydrateAdded {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
}
if automatedSyncEnabled(oldApp, newApp) {
compareWith = CompareWithLatest.Pointer()
}
if compareWith == nil {
compareWith = CompareWithRecent.Pointer()
}
}
ctrl.requestAppRefresh(newApp.QualifiedName(), compareWith, delay)
if !newOK {
ctrl.appOperationQueue.AddRateLimited(key)
}
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
}
checkRefreshRequested := func(appName string, shouldBeRequested bool, msg string) {
key := ctrl.toAppKey(appName)
ctrl.refreshRequestedAppsMutex.Lock()
_, isRequested := ctrl.refreshRequestedApps[key]
ctrl.refreshRequestedAppsMutex.Unlock()
assert.Equal(t, shouldBeRequested, isRequested, "%s: Refresh request state mismatch for app %s (key: %s)", msg, appName, key)
}
t.Run("Status-only update without annotation should NOT trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "1"
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "2"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), false, "Status-only update without annotation")
})
t.Run("Status-only update WITH refresh annotation SHOULD trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "3"
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "4"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
if newApp.Annotations == nil {
newApp.Annotations = make(map[string]string)
}
newApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH refresh annotation")
})
t.Run("Status-only update WITH hydrate annotation SHOULD trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "5"
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "6"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
if newApp.Annotations == nil {
newApp.Annotations = make(map[string]string)
}
newApp.Annotations[v1alpha1.AnnotationKeyHydrate] = "true"
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH hydrate annotation")
})
t.Run("Status-only update WITH both refresh and hydrate annotations SHOULD trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "7"
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "8"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
if newApp.Annotations == nil {
newApp.Annotations = make(map[string]string)
}
newApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
newApp.Annotations[v1alpha1.AnnotationKeyHydrate] = "true"
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH both refresh and hydrate annotations")
})
t.Run("Status-only update with annotation REMOVAL should NOT trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "9"
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
if oldApp.Annotations == nil {
oldApp.Annotations = make(map[string]string)
}
oldApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "10"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
delete(newApp.Annotations, v1alpha1.AnnotationKeyRefresh)
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), false, "Status-only update with annotation REMOVAL")
})
t.Run("Spec change SHOULD trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "11"
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "12"
newApp.Spec.Destination.Namespace = "different-namespace"
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), true, "Spec change")
})
t.Run("Informer resync (same ResourceVersion) should NOT trigger refresh", func(_ *testing.T) {
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "13"
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "13"
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), false, "Informer resync")
})
t.Run("DeletionTimestamp added SHOULD trigger refresh", func(_ *testing.T) {
// Reset refresh state
ctrl.refreshRequestedAppsMutex.Lock()
ctrl.refreshRequestedApps = make(map[string]CompareWith)
ctrl.refreshRequestedAppsMutex.Unlock()
oldApp := app.DeepCopy()
oldApp.ResourceVersion = "14"
oldApp.DeletionTimestamp = nil
newApp := oldApp.DeepCopy()
newApp.ResourceVersion = "15"
newApp.DeletionTimestamp = &metav1.Time{Time: time.Now()}
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
simulateUpdateFunc(oldApp, newApp)
checkRefreshRequested(app.QualifiedName(), true, "DeletionTimestamp added")
})
}
func TestRefreshAppConditions(t *testing.T) {
defaultProj := v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
@ -3359,3 +3605,82 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
})
}
}
func TestPersistAppStatus_AnnotationManagement(t *testing.T) {
t.Run("persistReconciliationStatus deletes only refresh annotation", func(t *testing.T) {
app := newFakeApp()
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
"other-annotation": "other-value",
}
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
app.Status.Health.Status = health.HealthStatusHealthy
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
origApp := app.DeepCopy()
newStatus := app.Status.DeepCopy()
ctrl.persistReconciliationStatus(origApp, newStatus)
// Verify the patch was created correctly
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
// Refresh annotation should be deleted
_, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
assert.False(t, hasRefresh, "refresh annotation should be deleted")
// Hydrate annotation should still exist
hydrateValue, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
assert.True(t, hasHydrate, "hydrate annotation should still exist")
assert.Equal(t, string(v1alpha1.HydrateTypeNormal), hydrateValue)
// Other annotations should be preserved
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
assert.True(t, hasOther, "other annotations should be preserved")
assert.Equal(t, "other-value", otherValue)
})
t.Run("persistAppStatus with explicit annotations", func(t *testing.T) {
app := newFakeApp()
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
"other-annotation": "other-value",
}
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
app.Status.Health.Status = health.HealthStatusHealthy
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
origApp := app.DeepCopy()
newStatus := app.Status.DeepCopy()
// Create annotations that delete hydrate but keep refresh
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, origApp.Annotations)
delete(newAnnotations, v1alpha1.AnnotationKeyHydrate)
ctrl.persistAppStatus(origApp, newStatus, newAnnotations)
// Verify the patch was created correctly
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
require.NoError(t, err)
// Hydrate annotation should be deleted
_, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
assert.False(t, hasHydrate, "hydrate annotation should be deleted")
// Refresh annotation should still exist
refreshValue, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
assert.True(t, hasRefresh, "refresh annotation should still exist")
assert.Equal(t, string(v1alpha1.RefreshTypeNormal), refreshValue)
// Other annotations should be preserved
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
assert.True(t, hasOther, "other annotations should be preserved")
assert.Equal(t, "other-value", otherValue)
})
}


@ -132,11 +132,11 @@ func (c *clusterInfoUpdater) getUpdatedClusterInfo(ctx context.Context, apps []*
continue
}
}
destServer, err := argo.GetDestinationServer(ctx, a.Spec.Destination, c.db)
if err != nil {
continue
}
if destServer == cluster.Server {
appCount++
}
}


@ -101,6 +101,121 @@ func TestClusterSecretUpdater(t *testing.T) {
}
}
func TestGetUpdatedClusterInfo_AppCount(t *testing.T) {
const fakeNamespace = "fake-ns"
const clusterServer = "https://prod.example.com"
const clusterName = "prod"
emptyArgoCDConfigMap := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
Namespace: fakeNamespace,
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
},
Data: map[string]string{},
}
argoCDSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDSecretName,
Namespace: fakeNamespace,
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
},
Data: map[string][]byte{"admin.password": nil, "server.secretkey": nil},
}
clusterSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: "prod-cluster",
Namespace: fakeNamespace,
Labels: map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeCluster},
Annotations: map[string]string{
common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD,
},
},
Data: map[string][]byte{
"name": []byte(clusterName),
"server": []byte(clusterServer),
"config": []byte("{}"),
},
}
kubeclientset := fake.NewClientset(emptyArgoCDConfigMap, argoCDSecret, clusterSecret)
settingsManager := settings.NewSettingsManager(t.Context(), kubeclientset, fakeNamespace)
argoDB := db.NewDB(fakeNamespace, settingsManager, kubeclientset)
apps := []*v1alpha1.Application{
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Name: clusterName}}},
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Server: clusterServer}}},
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Server: "https://other.example.com"}}},
}
updater := &clusterInfoUpdater{db: argoDB, namespace: fakeNamespace}
cluster := v1alpha1.Cluster{Server: clusterServer}
info := updater.getUpdatedClusterInfo(t.Context(), apps, cluster, nil, metav1.Now())
assert.Equal(t, int64(2), info.ApplicationsCount)
}
func TestGetUpdatedClusterInfo_AmbiguousName(t *testing.T) {
const fakeNamespace = "fake-ns"
const clusterServer = "https://prod.example.com"
const clusterName = "prod"
emptyArgoCDConfigMap := &corev1.ConfigMap{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDConfigMapName,
Namespace: fakeNamespace,
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
},
Data: map[string]string{},
}
argoCDSecret := &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: common.ArgoCDSecretName,
Namespace: fakeNamespace,
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
},
Data: map[string][]byte{"admin.password": nil, "server.secretkey": nil},
}
makeClusterSecret := func(secretName, server string) *corev1.Secret {
return &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: secretName,
Namespace: fakeNamespace,
Labels: map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeCluster},
Annotations: map[string]string{
common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD,
},
},
Data: map[string][]byte{
"name": []byte(clusterName),
"server": []byte(server),
"config": []byte("{}"),
},
}
}
// Two secrets share the same cluster name
kubeclientset := fake.NewClientset(
emptyArgoCDConfigMap, argoCDSecret,
makeClusterSecret("prod-cluster-1", clusterServer),
makeClusterSecret("prod-cluster-2", "https://prod2.example.com"),
)
settingsManager := settings.NewSettingsManager(t.Context(), kubeclientset, fakeNamespace)
argoDB := db.NewDB(fakeNamespace, settingsManager, kubeclientset)
apps := []*v1alpha1.Application{
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Name: clusterName}}},
}
updater := &clusterInfoUpdater{db: argoDB, namespace: fakeNamespace}
cluster := v1alpha1.Cluster{Server: clusterServer}
info := updater.getUpdatedClusterInfo(t.Context(), apps, cluster, nil, metav1.Now())
assert.Equal(t, int64(0), info.ApplicationsCount, "ambiguous name should not count app")
}
func TestUpdateClusterLabels(t *testing.T) {
shouldNotBeInvoked := func(_ context.Context, _ *v1alpha1.Cluster) (*v1alpha1.Cluster, error) {
shouldNotHappen := errors.New("if an error happens here, something's wrong")


@ -11,6 +11,7 @@ import (
"github.com/argoproj/argo-cd/gitops-engine/pkg/sync/hook"
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
log "github.com/sirupsen/logrus"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/rest"
@ -76,6 +77,21 @@ func isPostDeleteHook(obj *unstructured.Unstructured) bool {
return isHookOfType(obj, PostDeleteHookType)
}
// hasGitOpsEngineSyncPhaseHook is true when gitops-engine would run the resource during a sync
// phase (PreSync, Sync, PostSync, SyncFail). PreDelete/PostDelete are not sync phases;
// without this check, state reconciliation drops such resources
// entirely because isPreDeleteHook/isPostDeleteHook match any comma-separated value.
// HookTypeSkip is omitted as it is not a sync phase.
func hasGitOpsEngineSyncPhaseHook(obj *unstructured.Unstructured) bool {
for _, t := range hook.Types(obj) {
switch t {
case common.HookTypePreSync, common.HookTypeSync, common.HookTypePostSync, common.HookTypeSyncFail:
return true
}
}
return false
}
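The doc comment above makes the key point: hook annotation values are comma-separated, so phase membership must be checked per entry rather than by substring-matching the whole value. A minimal standalone sketch of that check (simplified analog; the real function iterates hook.Types(obj) from gitops-engine):

```go
package main

import (
	"fmt"
	"strings"
)

// syncPhases are the gitops-engine phases that run during a sync.
// PreDelete, PostDelete, and Skip are intentionally absent.
var syncPhases = map[string]bool{
	"PreSync":  true,
	"Sync":     true,
	"PostSync": true,
	"SyncFail": true,
}

// hasSyncPhaseHook reports whether any entry in the comma-separated
// annotation value names a sync phase.
func hasSyncPhaseHook(annotation string) bool {
	for _, t := range strings.Split(annotation, ",") {
		if syncPhases[strings.TrimSpace(t)] {
			return true
		}
	}
	return false
}

func main() {
	// A resource hooked into PostSync stays in the sync set even though
	// it also declares delete-phase hooks.
	fmt.Println(hasSyncPhaseHook("PostSync,PreDelete,PostDelete")) // true
	// A PreDelete-only resource is not part of any sync phase.
	fmt.Println(hasSyncPhaseHook("PreDelete")) // false
}
```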
// executeHooks is a generic function to execute hooks of a specified type
func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Application, proj *appv1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
@ -88,6 +104,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
revisions = append(revisions, src.TargetRevision)
}
// Fetch target objects from Git to know which hooks should exist
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(context.Background(), app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
if err != nil {
return false, err
@ -110,14 +127,14 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
if !isHookOfType(obj, hookType) {
continue
}
if _, alreadyExists := runningHooks[kube.GetResourceKey(obj)]; !alreadyExists {
expectedHook[kube.GetResourceKey(obj)] = obj
}
}
// Create hooks that don't exist yet
createdCnt := 0
for key, obj := range expectedHook {
// Add app instance label so the hook can be tracked and cleaned up
labels := obj.GetLabels()
if labels == nil {
@ -126,8 +143,13 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
labels[appLabelKey] = app.InstanceName(ctrl.namespace)
obj.SetLabels(labels)
logCtx.Infof("Creating %s hook resource: %s", hookType, key)
_, err = ctrl.kubectl.CreateResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), obj, metav1.CreateOptions{})
if err != nil {
if apierrors.IsAlreadyExists(err) {
logCtx.Warnf("Hook resource %s already exists, skipping", key)
continue
}
return false, err
}
createdCnt++
@ -148,7 +170,8 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
progressingHooksCount := 0
var failedHooks []string
var failedHookObjects []*unstructured.Unstructured
for key, obj := range runningHooks {
hookHealth, err := health.GetResourceHealth(obj, healthOverrides)
if err != nil {
return false, err
@ -165,12 +188,17 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
Status: health.HealthStatusHealthy,
}
}
switch hookHealth.Status {
case health.HealthStatusProgressing:
logCtx.Debugf("Hook %s is progressing", key)
progressingHooksCount++
case health.HealthStatusDegraded:
logCtx.Warnf("Hook %s is degraded: %s", key, hookHealth.Message)
failedHooks = append(failedHooks, fmt.Sprintf("%s/%s", obj.GetNamespace(), obj.GetName()))
failedHookObjects = append(failedHookObjects, obj)
case health.HealthStatusHealthy:
logCtx.Debugf("Hook %s is healthy", key)
}
}
@ -179,7 +207,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
logCtx.Infof("Deleting %d failed %s hook(s) to allow retry", len(failedHookObjects), hookType)
for _, obj := range failedHookObjects {
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
logCtx.WithError(err).Warnf("Failed to delete failed hook %s/%s", obj.GetNamespace(), obj.GetName())
}
}
@ -226,6 +254,10 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
hooks = append(hooks, obj)
}
if len(hooks) == 0 {
return true, nil
}
// Process hooks for deletion
for _, obj := range hooks {
deletePolicies := hook.DeletePolicies(obj)
@ -252,7 +284,7 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
}
logCtx.Infof("Deleting %s hook %s/%s", hookType, obj.GetNamespace(), obj.GetName())
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return false, err
}
}


@ -3,8 +3,10 @@ package controller
import (
"testing"
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
"github.com/stretchr/testify/assert"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime/schema"
)
func TestIsHookOfType(t *testing.T) {
@ -192,6 +194,92 @@ func TestIsPostDeleteHook(t *testing.T) {
}
}
// TestPartitionTargetObjsForSync covers partitionTargetObjsForSync in state.go.
func TestPartitionTargetObjsForSync(t *testing.T) {
newObj := func(name string, annot map[string]string) *unstructured.Unstructured {
u := &unstructured.Unstructured{}
u.SetName(name)
u.SetAnnotations(annot)
return u
}
tests := []struct {
name string
in []*unstructured.Unstructured
wantNames []string
wantPreDelete bool
wantPostDelete bool
}{
{
name: "PostSync with PreDelete and PostDelete in same annotation stays in sync set",
in: []*unstructured.Unstructured{
newObj("combined", map[string]string{"argocd.argoproj.io/hook": "PostSync,PreDelete,PostDelete"}),
},
wantNames: []string{"combined"},
wantPreDelete: true,
wantPostDelete: true,
},
{
name: "PreDelete-only manifest excluded from sync",
in: []*unstructured.Unstructured{
newObj("pre-del", map[string]string{"argocd.argoproj.io/hook": "PreDelete"}),
},
wantNames: nil,
wantPreDelete: true,
wantPostDelete: false,
},
{
name: "PostDelete-only manifest excluded from sync",
in: []*unstructured.Unstructured{
newObj("post-del", map[string]string{"argocd.argoproj.io/hook": "PostDelete"}),
},
wantNames: nil,
wantPreDelete: false,
wantPostDelete: true,
},
{
name: "Helm pre-delete only excluded from sync",
in: []*unstructured.Unstructured{
newObj("helm-pre-del", map[string]string{"helm.sh/hook": "pre-delete"}),
},
wantNames: nil,
wantPreDelete: true,
wantPostDelete: false,
},
{
name: "Helm pre-install with pre-delete stays in sync (sync-phase hook wins)",
in: []*unstructured.Unstructured{
newObj("helm-mixed", map[string]string{"helm.sh/hook": "pre-install,pre-delete"}),
},
wantNames: []string{"helm-mixed"},
wantPreDelete: true,
wantPostDelete: false,
},
{
name: "Non-hook resource unchanged",
in: []*unstructured.Unstructured{
newObj("pod", map[string]string{"app": "x"}),
},
wantNames: []string{"pod"},
wantPreDelete: false,
wantPostDelete: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, hasPre, hasPost := partitionTargetObjsForSync(tt.in)
var names []string
for _, o := range got {
names = append(names, o.GetName())
}
assert.Equal(t, tt.wantNames, names)
assert.Equal(t, tt.wantPreDelete, hasPre, "hasPreDeleteHooks")
assert.Equal(t, tt.wantPostDelete, hasPost, "hasPostDeleteHooks")
})
}
}
func TestMultiHookOfType(t *testing.T) {
tests := []struct {
name string
@ -226,3 +314,174 @@ func TestMultiHookOfType(t *testing.T) {
})
}
}
func TestExecuteHooksAlreadyExistsLogic(t *testing.T) {
newObj := func(name string, annot map[string]string) *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "batch", Version: "v1", Kind: "Job"})
obj.SetName(name)
obj.SetNamespace("default")
obj.SetAnnotations(annot)
return obj
}
tests := []struct {
name string
hookType []HookType
targetAnnot map[string]string
liveAnnot map[string]string // nil -> object doesn't exist in cluster
expectCreated bool
}{
// PRE DELETE TESTS
{
name: "PreDelete (argocd): Not in cluster - should be created",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PreDelete (helm): Not in cluster - should be created",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PreDelete (argocd): Already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
expectCreated: false,
},
{
name: "PreDelete (helm): Already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
expectCreated: false,
},
{
name: "PreDelete (helm+argocd): Helm hook already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
expectCreated: false,
},
{
name: "PreDelete (helm+argocd): Argo CD hook already exists - should be skipped",
hookType: []HookType{PreDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
expectCreated: false,
},
// POST DELETE TESTS
{
name: "PostDelete (argocd): Not in cluster - should be created",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PostDelete (helm): Not in cluster - should be created",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
liveAnnot: nil,
expectCreated: true,
},
{
name: "PostDelete (argocd): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
{
name: "PostDelete (helm): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): Already exists - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): Helm hook already exists in cluster - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
expectCreated: false,
},
{
name: "PostDelete (helm+argocd): Argo CD hook already exists in cluster - should be skipped",
hookType: []HookType{PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
expectCreated: false,
},
// MULTI HOOK TESTS - SKIP LOGIC
{
name: "Multi-hook (argocd): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
expectCreated: false,
},
{
name: "Multi-hook (helm): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
targetAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
liveAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
expectCreated: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
targetObj := newObj("my-hook", tt.targetAnnot)
targetKey := kube.GetResourceKey(targetObj)
liveObjs := make(map[kube.ResourceKey]*unstructured.Unstructured)
if tt.liveAnnot != nil {
liveObjs[targetKey] = newObj("my-hook", tt.liveAnnot)
}
runningHooks := map[kube.ResourceKey]*unstructured.Unstructured{}
for key, obj := range liveObjs {
for _, hookType := range tt.hookType {
if isHookOfType(obj, hookType) {
runningHooks[key] = obj
}
}
}
expectedHooksToCreate := map[kube.ResourceKey]*unstructured.Unstructured{}
targets := []*unstructured.Unstructured{targetObj}
for _, obj := range targets {
// Skip targets that do not match any of the hook types under test.
matchesHookType := false
for _, hookType := range tt.hookType {
if isHookOfType(obj, hookType) {
matchesHookType = true
break
}
}
if !matchesHookType {
continue
}
objKey := kube.GetResourceKey(obj)
if _, alreadyExists := runningHooks[objKey]; !alreadyExists {
expectedHooksToCreate[objKey] = obj
}
}
if tt.expectCreated {
assert.NotEmpty(t, expectedHooksToCreate, "Expected hook to be marked for creation")
} else {
assert.Empty(t, expectedHooksToCreate, "Expected hook to be skipped (already exists)")
}
})
}
}


@@ -60,8 +60,8 @@ type Dependencies interface {
// trigger a refresh after the application has been hydrated and a new commit has been pushed.
RequestAppRefresh(appName string, appNamespace string) error
// PersistAppHydratorStatus persists the application status for the source hydrator.
PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
// PersistHydrationStatus persists the application status for the source hydrator.
PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
// AddHydrationQueueItem adds a hydration queue item to the queue. This is used to trigger the hydration process for
// a group of applications which are hydrating to the same repo and target branch.
@@ -123,9 +123,10 @@ func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
Phase: appv1.HydrateOperationPhaseHydrating,
SourceHydrator: *app.Spec.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
}
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
if needsHydration || needsRefresh {
logCtx.WithField("reason", reason).Info("Hydrating app")
@@ -252,7 +253,7 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
HydratedSHA: hydratedSHA,
SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
}
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
// Request a refresh since we pushed a new commit.
err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
@@ -274,7 +275,7 @@ func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
failedAt := metav1.Now()
app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
}
// getAppsForHydrationKey returns the applications matching the hydration key.
@@ -476,17 +477,9 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
//
// If the given target revision is empty, it uses the target revision from the app dry source spec.
func (h *Hydrator) getManifests(ctx context.Context, app *appv1.Application, targetRevision string, project *appv1.AppProject) (revision string, pathDetails *commitclient.PathDetails, err error) {
drySource := appv1.ApplicationSource{
RepoURL: app.Spec.SourceHydrator.DrySource.RepoURL,
Path: app.Spec.SourceHydrator.DrySource.Path,
TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
Helm: app.Spec.SourceHydrator.DrySource.Helm,
Kustomize: app.Spec.SourceHydrator.DrySource.Kustomize,
Directory: app.Spec.SourceHydrator.DrySource.Directory,
Plugin: app.Spec.SourceHydrator.DrySource.Plugin,
}
drySource := app.Spec.SourceHydrator.GetDrySource()
if targetRevision == "" {
targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
targetRevision = drySource.TargetRevision
}
// TODO: enable signature verification


@@ -394,7 +394,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
app.Status.SourceHydrator.CurrentOperation = nil
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
}).Return().Once()
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
@@ -406,7 +406,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)
require.NotNil(t, persistedStatus)
@@ -433,6 +433,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
},
}
d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()
h := &Hydrator{
dependencies: d,
@@ -442,7 +443,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
}
func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
@@ -458,7 +459,7 @@ func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
h.ProcessAppHydrateQueueItem(app)
// Should not call anything
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
}
@@ -476,14 +477,15 @@ func TestProcessAppHydrateQueueItem_HydrationNotNeeded(t *testing.T) {
},
}
d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()
h := &Hydrator{
dependencies: d,
statusRefreshTimeout: time.Minute,
}
h.ProcessAppHydrateQueueItem(app)
// Should not call anything
d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
}
@@ -504,7 +506,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -524,7 +526,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -548,7 +550,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -568,7 +570,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -593,7 +595,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus1 *v1alpha1.SourceHydratorStatus
var persistedStatus2 *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
switch orig.Name {
case app1.Name:
persistedStatus1 = newStatus
@@ -615,7 +617,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
assert.Equal(t, "abc123", persistedStatus1.CurrentOperation.DrySHA)
d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
}
@@ -633,7 +635,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {
// Expect setAppHydratorError to be called
var persistedStatus *v1alpha1.SourceHydratorStatus
d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
persistedStatus = newStatus
}).Return().Once()
d.EXPECT().RequestAppRefresh(app.Name, app.Namespace).Return(nil).Once()
@@ -650,7 +652,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {
h.ProcessHydrationQueueItem(hydrationKey)
d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
d.AssertCalled(t, "RequestAppRefresh", app.Name, app.Namespace)
assert.NotNil(t, persistedStatus)
assert.Equal(t, app.Status.SourceHydrator.CurrentOperation.StartedAt, persistedStatus.CurrentOperation.StartedAt)


@@ -525,25 +525,25 @@ func (_c *Dependencies_GetWriteCredentials_Call) RunAndReturn(run func(ctx conte
return _c
}
// PersistAppHydratorStatus provides a mock function for the type Dependencies
func (_mock *Dependencies) PersistAppHydratorStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
// PersistHydrationStatus provides a mock function for the type Dependencies
func (_mock *Dependencies) PersistHydrationStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
_mock.Called(orig, newStatus)
return
}
// Dependencies_PersistAppHydratorStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistAppHydratorStatus'
type Dependencies_PersistAppHydratorStatus_Call struct {
// Dependencies_PersistHydrationStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistHydrationStatus'
type Dependencies_PersistHydrationStatus_Call struct {
*mock.Call
}
// PersistAppHydratorStatus is a helper method to define mock.On call
// PersistHydrationStatus is a helper method to define mock.On call
// - orig *v1alpha1.Application
// - newStatus *v1alpha1.SourceHydratorStatus
func (_e *Dependencies_Expecter) PersistAppHydratorStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistAppHydratorStatus_Call {
return &Dependencies_PersistAppHydratorStatus_Call{Call: _e.mock.On("PersistAppHydratorStatus", orig, newStatus)}
func (_e *Dependencies_Expecter) PersistHydrationStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistHydrationStatus_Call {
return &Dependencies_PersistHydrationStatus_Call{Call: _e.mock.On("PersistHydrationStatus", orig, newStatus)}
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
func (_c *Dependencies_PersistHydrationStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
_c.Call.Run(func(args mock.Arguments) {
var arg0 *v1alpha1.Application
if args[0] != nil {
@@ -561,12 +561,12 @@ func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha
return _c
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) Return() *Dependencies_PersistAppHydratorStatus_Call {
func (_c *Dependencies_PersistHydrationStatus_Call) Return() *Dependencies_PersistHydrationStatus_Call {
_c.Call.Return()
return _c
}
func (_c *Dependencies_PersistAppHydratorStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
func (_c *Dependencies_PersistHydrationStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
_c.Run(run)
return _c
}


@@ -3,6 +3,7 @@ package controller
import (
"context"
"fmt"
"maps"
"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
@@ -88,10 +89,13 @@ func (ctrl *ApplicationController) RequestAppRefresh(appName string, appNamespac
return nil
}
func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
func (ctrl *ApplicationController) PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyHydrate)
status := orig.Status.DeepCopy()
status.SourceHydrator = *newStatus
ctrl.persistAppStatus(orig, status)
ctrl.persistAppStatus(orig, status, newAnnotations)
}
func (ctrl *ApplicationController) AddHydrationQueueItem(key types.HydrationQueueKey) {


@@ -222,7 +222,10 @@ func createConsistentHashingWithBoundLoads(replicas int, getCluster clusterAcces
}
shardIndexedByCluster[c.ID], err = strconv.Atoi(clusterIndex)
if err != nil {
log.Errorf("Consistent Hashing was supposed to return a shard index but it returned %d", err)
log.Errorf("Failed to get shard index from consistent hashing, error=%v", err)
// No continue here: strconv.Atoi returns 0 on failure, so the cluster falls back to shard 0.
// This is intentional since shard 0 always exists (replicas > 0 is enforced by the caller),
// so the cluster remains reconciled rather than being silently dropped.
}
numApps, ok := appDistribution[c.Server]
if !ok {


@@ -41,18 +41,13 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
"github.com/argoproj/argo-cd/v3/util/db"
"github.com/argoproj/argo-cd/v3/util/env"
"github.com/argoproj/argo-cd/v3/util/gpg"
utilio "github.com/argoproj/argo-cd/v3/util/io"
"github.com/argoproj/argo-cd/v3/util/settings"
"github.com/argoproj/argo-cd/v3/util/stats"
)
var (
ErrCompareStateRepo = errors.New("failed to get repo objects")
processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
)
var ErrCompareStateRepo = errors.New("failed to get repo objects")
type resourceInfoProviderStub struct{}
@@ -75,7 +70,7 @@ type managedResource struct {
// AppStateManager defines methods which allow to compare application spec and actual application state.
type AppStateManager interface {
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
}
@@ -247,63 +242,20 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
return nil, nil, false, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
}
syncedRevision := app.Status.Sync.Revision
if app.Spec.HasMultipleSources() {
if i < len(app.Status.Sync.Revisions) {
syncedRevision = app.Status.Sync.Revisions[i]
} else {
syncedRevision = ""
}
}
revision := revisions[i]
appNamespace := app.Spec.Destination.Namespace
apiVersions := argo.APIResourcesToStrings(apiResources, true)
updateRevisions := processManifestGeneratePathsEnabled &&
// updating revisions result is not required if automated sync is not enabled
app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
// using updating revisions gains performance only if manifest generation is required.
// just reading pre-generated manifests is comparable to updating revisions time-wise
app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory
if updateRevisions && repo.Depth == 0 && syncedRevision != "" && !source.IsRef() && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" && (syncedRevision != revision || app.Spec.HasMultipleSources()) {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetSourceRefreshPaths(app, source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
ApplicationSource: &source,
KubeVersion: serverVersion,
ApiVersions: apiVersions,
TrackingMethod: trackingMethod,
RefSources: refSources,
SyncedRefSources: syncedRefSources,
HasMultipleSources: app.Spec.HasMultipleSources(),
InstallationID: installationID,
})
if err != nil {
return nil, nil, false, fmt.Errorf("failed to compare revisions for source %d of %d: %w", i+1, len(sources), err)
}
if updateRevisionResult.Changes {
revisionsMayHaveChanges = true
}
// Generate manifests should use same revision as updateRevisionForPaths, because HEAD revision may be different between these two calls
if updateRevisionResult.Revision != "" {
revision = updateRevisionResult.Revision
}
} else if !source.IsRef() {
// revisionsMayHaveChanges is set to true if at least one revision could not be updated
// Evaluate if the revision has changes
resolvedRevision, hasChanges, err := m.evaluateRevisionChanges(ctx, repoClient, app, &source, i, repo, revision, refSources, syncedRefSources, noRevisionCache, appLabelKey, serverVersion, apiVersions, trackingMethod, installationID, keyManifestGenerateAnnotationExists, keyManifestGenerateAnnotationVal)
if err != nil {
return nil, nil, false, fmt.Errorf("failed to evaluate revision changes for source %d of %d: %w", i+1, len(sources), err)
}
if hasChanges {
revisionsMayHaveChanges = true
}
revision = resolvedRevision
repos := permittedHelmRepos
helmRepoCreds := permittedHelmCredentials
@@ -344,7 +296,11 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
InstallationID: installationID,
})
if err != nil {
return nil, nil, false, fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
genErr := fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
if app.Spec.SourceHydrator != nil && app.Spec.SourceHydrator.HydrateTo != nil && strings.Contains(err.Error(), path.ErrMessageAppPathDoesNotExist) {
genErr = fmt.Errorf("%w - waiting for an external process to update %s from %s", genErr, app.Spec.SourceHydrator.SyncSource.TargetBranch, app.Spec.SourceHydrator.HydrateTo.TargetBranch)
}
return nil, nil, false, genErr
}
targetObj, err := unmarshalManifests(manifestInfo.Manifests)
@@ -366,37 +322,84 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
return targetObjs, manifestInfos, revisionsMayHaveChanges, nil
}
// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
conn, repoClient, err := m.repoClientset.NewRepoServerClient()
if err != nil {
return "", fmt.Errorf("failed to connect to repo server: %w", err)
}
defer utilio.Close(conn)
repo, err := m.db.GetRepository(context.Background(), repoURL, "")
if err != nil {
return "", fmt.Errorf("failed to get repo %q: %w", repoURL, err)
// evaluateRevisionChanges determines if a source revision has changes compared to the synced revision.
// Returns the resolved revision, whether changes were detected, and any error.
func (m *appStateManager) evaluateRevisionChanges(
ctx context.Context,
repoClient apiclient.RepoServerServiceClient,
app *v1alpha1.Application,
source *v1alpha1.ApplicationSource,
sourceIndex int,
repo *v1alpha1.Repository,
revision string,
refSources map[string]*v1alpha1.RefTarget,
syncedRefSources v1alpha1.RefTargetRevisionMapping,
noRevisionCache bool,
appLabelKey string,
serverVersion string,
apiVersions []string,
trackingMethod string,
installationID string,
keyManifestGenerateAnnotationExists bool,
keyManifestGenerateAnnotationVal string,
) (string, bool, error) {
// For ref sources specifically, we always return false since their changes are evaluated as part of the
// sources referencing them.
if source.IsRef() {
return revision, false, nil
}
// Mock the app. The repo-server only needs to know whether the "chart" field is populated.
app := &v1alpha1.Application{
Spec: v1alpha1.ApplicationSpec{
Source: &v1alpha1.ApplicationSource{
RepoURL: repoURL,
TargetRevision: revision,
},
},
// Determine the synced revision and source type for this specific source
var syncedRevision string
if app.Spec.HasMultipleSources() {
if sourceIndex < len(app.Status.Sync.Revisions) {
syncedRevision = app.Status.Sync.Revisions[sourceIndex]
}
} else {
syncedRevision = app.Status.Sync.Revision
}
resp, err := repoClient.ResolveRevision(context.Background(), &apiclient.ResolveRevisionRequest{
Repo: repo,
App: app,
AmbiguousRevision: revision,
})
if err != nil {
return "", fmt.Errorf("failed to determine whether the dry source has changed: %w", err)
// if revisions are the same (and we are not using reference sources), we know there are no changes
if syncedRevision == revision && revision != "" && len(refSources) == 0 {
return revision, false, nil
}
return resp.Revision, nil
appNamespace := app.Spec.Destination.Namespace
if repo.Depth == 0 && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
Repo: repo,
Revision: revision,
SyncedRevision: syncedRevision,
NoRevisionCache: noRevisionCache,
Paths: path.GetSourceRefreshPaths(app, *source),
AppLabelKey: appLabelKey,
AppName: app.InstanceName(m.namespace),
Namespace: appNamespace,
ApplicationSource: source,
KubeVersion: serverVersion,
ApiVersions: apiVersions,
TrackingMethod: trackingMethod,
RefSources: refSources,
SyncedRefSources: syncedRefSources,
HasMultipleSources: app.Spec.HasMultipleSources(),
InstallationID: installationID,
})
if err != nil {
return "", false, err
}
// Manifest generation should use the same revision as UpdateRevisionForPaths, because the HEAD revision may differ between these two calls
if updateRevisionResult.Revision != "" {
revision = updateRevisionResult.Revision
}
return revision, updateRevisionResult.Changes, nil
}
// revisionsMayHaveChanges is set to true if at least one revision could not be updated
return revision, true, nil
}
func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
@@ -543,10 +546,32 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
return ns != nil && ns.GetKind() == kubeutil.NamespaceKind && ns.GetName() == app.Spec.Destination.Namespace && app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.ManagedNamespaceMetadata != nil
}
// partitionTargetObjsForSync returns the manifest subset passed to gitops-engine sync, and whether
// the full manifest set declared PreDelete and/or PostDelete hooks (for finalizer handling).
// Uses isPreDeleteHook / isPostDeleteHook / hasGitOpsEngineSyncPhaseHook from hook.go.
func partitionTargetObjsForSync(targetObjs []*unstructured.Unstructured) (syncObjs []*unstructured.Unstructured, hasPreDeleteHooks, hasPostDeleteHooks bool) {
for _, obj := range targetObjs {
if isPreDeleteHook(obj) {
hasPreDeleteHooks = true
if !hasGitOpsEngineSyncPhaseHook(obj) {
continue
}
}
if isPostDeleteHook(obj) {
hasPostDeleteHooks = true
if !hasGitOpsEngineSyncPhaseHook(obj) {
continue
}
}
syncObjs = append(syncObjs, obj)
}
return syncObjs, hasPreDeleteHooks, hasPostDeleteHooks
}
// CompareAppState compares application git state to the live app state, using the specified
// revision and supplied source. If revision or overrides are empty, then compares against
// revision and overrides in the app spec.
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
ts := stats.NewTimingStats()
logCtx := log.WithFields(applog.GetAppLogFields(app))
@@ -770,24 +795,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
}
}
}
hasPreDeleteHooks := false
hasPostDeleteHooks := false
// Filter out PreDelete and PostDelete hooks from targetObjs since they should not be synced
// as regular resources. They are only executed during deletion.
var targetObjsForSync []*unstructured.Unstructured
for _, obj := range targetObjs {
if isPreDeleteHook(obj) {
hasPreDeleteHooks = true
// Skip PreDelete hooks - they are not synced, only executed during deletion
continue
}
if isPostDeleteHook(obj) {
hasPostDeleteHooks = true
// Skip PostDelete hooks - they are not synced, only executed after deletion
continue
}
targetObjsForSync = append(targetObjsForSync, obj)
}
targetObjsForSync, hasPreDeleteHooks, hasPostDeleteHooks := partitionTargetObjsForSync(targetObjs)
reconciliation := sync.Reconcile(targetObjsForSync, liveObjByKey, app.Spec.Destination.Namespace, infoProvider)
ts.AddCheckpoint("live_ms")
@@ -842,9 +850,10 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
if err != nil {
log.Errorf("CompareAppState error getting server side diff dry run applier: %s", err)
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionUnknownError, Message: err.Error(), LastTransitionTime: &now})
} else {
defer cleanup()
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
}
defer cleanup()
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
}
// enable structured merge diff if application syncs with server-side apply


@@ -1,6 +1,7 @@
package controller
import (
"context"
"encoding/json"
"errors"
"os"
@@ -31,6 +32,7 @@ import (
"github.com/argoproj/argo-cd/v3/controller/testdata"
"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
"github.com/argoproj/argo-cd/v3/reposerver/apiclient/mocks"
"github.com/argoproj/argo-cd/v3/test"
)
@@ -2040,6 +2042,61 @@ func TestCompareAppState_CallUpdateRevisionForPaths_ForMultiSource(t *testing.T)
require.False(t, revisionsMayHaveChanges)
}
func Test_GetRepoObjs_HydrateToAppPathNotExist(t *testing.T) {
t.Parallel()
t.Run("with hydrateTo: appends waiting message", func(t *testing.T) {
t.Parallel()
app := newFakeApp()
app.Spec.Source = nil
app.Spec.SourceHydrator = &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/example/repo",
TargetRevision: "main",
Path: "apps/my-app",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "env/prod",
Path: "env/prod/my-app",
},
HydrateTo: &v1alpha1.HydrateTo{
TargetBranch: "env/prod-next",
},
}
ctrl := newFakeController(t.Context(), &fakeData{manifestResponse: &apiclient.ManifestResponse{}}, errors.New("env/prod/my-app: app path does not exist"))
source := app.Spec.GetSource()
_, _, _, err := ctrl.appStateManager.GetRepoObjs(t.Context(), app, []v1alpha1.ApplicationSource{source}, "app", []string{""}, true, false, false, &defaultProj, false)
require.ErrorContains(t, err, "app path does not exist")
require.ErrorContains(t, err, "waiting for an external process to update env/prod from env/prod-next")
})
t.Run("without hydrateTo: no waiting message appended", func(t *testing.T) {
t.Parallel()
app := newFakeApp()
app.Spec.Source = nil
app.Spec.SourceHydrator = &v1alpha1.SourceHydrator{
DrySource: v1alpha1.DrySource{
RepoURL: "https://github.com/example/repo",
TargetRevision: "main",
Path: "apps/my-app",
},
SyncSource: v1alpha1.SyncSource{
TargetBranch: "env/prod",
Path: "env/prod/my-app",
},
}
ctrl := newFakeController(t.Context(), &fakeData{manifestResponse: &apiclient.ManifestResponse{}}, errors.New("env/prod/my-app: app path does not exist"))
source := app.Spec.GetSource()
_, _, _, err := ctrl.appStateManager.GetRepoObjs(t.Context(), app, []v1alpha1.ApplicationSource{source}, "app", []string{""}, true, false, false, &defaultProj, false)
require.ErrorContains(t, err, "app path does not exist")
require.NotContains(t, err.Error(), "waiting for an external process")
})
}
func Test_isObjRequiresDeletionConfirmation(t *testing.T) {
for _, tt := range []struct {
name string
@ -2108,3 +2165,190 @@ func Test_isObjRequiresDeletionConfirmation(t *testing.T) {
})
}
}
func Test_evaluateRevisionChanges(t *testing.T) {
tests := []struct {
name string
source *v1alpha1.ApplicationSource
sourceType v1alpha1.ApplicationSourceType
syncPolicy *v1alpha1.SyncPolicy
revision string
appSyncedRevision string
refSources map[string]*v1alpha1.RefTarget
repoDepth int64
keyManifestGenerateAnnotationExists bool
keyManifestGenerateAnnotationVal string
updateRevisionForPathsResponse *apiclient.UpdateRevisionForPathsResponse
expectedRevision string
expectedHasChanges bool
expectUpdateRevisionForPathsCalled bool
}{
{
name: "Ref source returns early with no changes",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Ref: "main",
},
sourceType: v1alpha1.ApplicationSourceTypeHelm,
revision: "abc123",
appSyncedRevision: "def456",
expectedRevision: "abc123",
expectedHasChanges: false,
},
{
name: "Same revision with no ref sources returns early",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Path: "manifests",
},
sourceType: v1alpha1.ApplicationSourceTypeKustomize,
revision: "abc123",
appSyncedRevision: "abc123",
refSources: map[string]*v1alpha1.RefTarget{},
expectedRevision: "abc123",
expectedHasChanges: false,
},
{
name: "Same revision with ref sources continues to evaluation",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Path: "manifests",
},
sourceType: v1alpha1.ApplicationSourceTypeKustomize,
revision: "abc123",
appSyncedRevision: "abc123",
refSources: map[string]*v1alpha1.RefTarget{
"ref1": {Repo: v1alpha1.Repository{Repo: "https://github.com/example/ref"}},
},
repoDepth: 0,
keyManifestGenerateAnnotationExists: true,
keyManifestGenerateAnnotationVal: ".",
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{
Revision: "abc123",
Changes: false,
},
expectedRevision: "abc123",
expectedHasChanges: false,
expectUpdateRevisionForPathsCalled: true,
},
{
name: "Shallow clone skips UpdateRevisionForPaths",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Path: "manifests",
},
sourceType: v1alpha1.ApplicationSourceTypeKustomize,
syncPolicy: &v1alpha1.SyncPolicy{
Automated: &v1alpha1.SyncPolicyAutomated{},
},
revision: "abc123",
appSyncedRevision: "def456",
repoDepth: 1,
keyManifestGenerateAnnotationExists: true,
keyManifestGenerateAnnotationVal: ".",
expectedRevision: "abc123",
expectedHasChanges: true,
expectUpdateRevisionForPathsCalled: false,
},
{
name: "Missing annotation skips UpdateRevisionForPaths",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Path: "manifests",
},
sourceType: v1alpha1.ApplicationSourceTypeKustomize,
syncPolicy: &v1alpha1.SyncPolicy{
Automated: &v1alpha1.SyncPolicyAutomated{},
},
revision: "abc123",
appSyncedRevision: "def456",
repoDepth: 0,
keyManifestGenerateAnnotationExists: false,
keyManifestGenerateAnnotationVal: "",
expectedRevision: "abc123",
expectedHasChanges: true,
expectUpdateRevisionForPathsCalled: false,
},
{
name: "UpdateRevisionForPaths returns updated revision",
source: &v1alpha1.ApplicationSource{
RepoURL: "https://github.com/example/repo",
Path: "manifests",
},
sourceType: v1alpha1.ApplicationSourceTypeKustomize,
syncPolicy: &v1alpha1.SyncPolicy{
Automated: &v1alpha1.SyncPolicyAutomated{},
},
revision: "HEAD",
appSyncedRevision: "def456",
repoDepth: 0,
keyManifestGenerateAnnotationExists: true,
keyManifestGenerateAnnotationVal: ".",
updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{
Revision: "abc123resolved",
Changes: true,
},
expectedRevision: "abc123resolved",
expectedHasChanges: true,
expectUpdateRevisionForPathsCalled: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
app := newFakeApp()
app.Spec.SyncPolicy = tt.syncPolicy
app.Status.Sync.Revision = tt.appSyncedRevision
app.Status.SourceType = tt.sourceType
if tt.keyManifestGenerateAnnotationExists {
app.Annotations = map[string]string{
v1alpha1.AnnotationKeyManifestGeneratePaths: tt.keyManifestGenerateAnnotationVal,
}
}
repo := &v1alpha1.Repository{
Repo: tt.source.RepoURL,
Depth: tt.repoDepth,
}
mockRepoClient := &mocks.RepoServerServiceClient{}
if tt.expectUpdateRevisionForPathsCalled {
mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(tt.updateRevisionForPathsResponse, nil)
}
mgr := &appStateManager{
namespace: "test-namespace",
}
resolvedRevision, hasChanges, err := mgr.evaluateRevisionChanges(
context.Background(),
mockRepoClient,
app,
tt.source,
0, // sourceIndex
repo,
tt.revision,
tt.refSources,
nil,
false,
"app.kubernetes.io/instance",
"v1.28.0",
[]string{"v1"},
"label",
"test-installation",
tt.keyManifestGenerateAnnotationExists,
tt.keyManifestGenerateAnnotationVal,
)
require.NoError(t, err)
assert.Equal(t, tt.expectedRevision, resolvedRevision)
assert.Equal(t, tt.expectedHasChanges, hasChanges)
if tt.expectUpdateRevisionForPathsCalled {
mockRepoClient.AssertExpectations(t)
} else {
mockRepoClient.AssertNotCalled(t, "UpdateRevisionForPaths")
}
})
}
}

View file

@ -6,7 +6,6 @@ import (
"fmt"
"os"
"strconv"
"strings"
"time"
"k8s.io/apimachinery/pkg/util/strategicpatch"
@ -33,20 +32,16 @@ import (
applog "github.com/argoproj/argo-cd/v3/util/app/log"
"github.com/argoproj/argo-cd/v3/util/argo"
"github.com/argoproj/argo-cd/v3/util/argo/diff"
"github.com/argoproj/argo-cd/v3/util/glob"
kubeutil "github.com/argoproj/argo-cd/v3/util/kube"
logutils "github.com/argoproj/argo-cd/v3/util/log"
"github.com/argoproj/argo-cd/v3/util/lua"
"github.com/argoproj/argo-cd/v3/util/settings"
)
const (
// EnvVarSyncWaveDelay is an environment variable which controls the delay in seconds between
// each sync-wave
EnvVarSyncWaveDelay = "ARGOCD_SYNC_WAVE_DELAY"
// serviceAccountDisallowedCharSet contains the characters that are not allowed to be present
// in a DefaultServiceAccount configured for a DestinationServiceAccount
serviceAccountDisallowedCharSet = "!*[]{}\\/"
)
func (m *appStateManager) getOpenAPISchema(server *v1alpha1.Cluster) (openapi.Resources, error) {
@ -288,7 +283,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
return
}
if impersonationEnabled {
serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(project, app, destCluster)
serviceAccountToImpersonate, err := settings.DeriveServiceAccountToImpersonate(project, app, destCluster)
if err != nil {
state.Phase = common.OperationError
state.Message = fmt.Sprintf("failed to find a matching service account to impersonate: %v", err)
@ -308,22 +303,9 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), project)
})
if err != nil {
return err
}
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
}
}
return nil
return validateSyncPermissions(project, destCluster, func(proj string) ([]*v1alpha1.Cluster, error) {
return m.db.GetProjectClusters(context.TODO(), proj)
}, un, res)
}),
sync.WithOperationSettings(syncOp.DryRun, syncOp.Prune, syncOp.SyncStrategy.Force(), syncOp.IsApplyStrategy() || len(syncOp.Resources) > 0),
sync.WithInitialState(state.Phase, state.Message, initialResourcesRes, state.StartedAt),
@ -560,10 +542,15 @@ func delayBetweenSyncWaves(_ common.SyncPhase, _ int, finalWave bool) error {
func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject) (bool, error) {
window := proj.Spec.SyncWindows.Matches(app)
isManual := false
var operationStartTime *time.Time
if app.Status.OperationState != nil {
isManual = !app.Status.OperationState.Operation.InitiatedBy.Automated
if !app.Status.OperationState.StartedAt.IsZero() {
t := app.Status.OperationState.StartedAt.Time
operationStartTime = &t
}
}
canSync, err := window.CanSync(isManual)
canSync, err := window.CanSync(isManual, operationStartTime)
if err != nil {
// prevents sync because sync window has an error
return true, err
@ -571,37 +558,32 @@ func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject
return !canSync, nil
}
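The hunk above threads the operation's start time into `window.CanSync`. A minimal standalone sketch of that plumbing (the `operationState` struct and field names here are simplified stand-ins, not the real `v1alpha1` types): a nil state means no operation is running, and a zero `StartedAt` yields no start time.

```go
package main

import (
	"fmt"
	"time"
)

// operationState mirrors just the fields the snippet above reads.
type operationState struct {
	InitiatedByAutomated bool
	StartedAt            time.Time
}

// syncContext derives the (isManual, operationStartTime) pair the way
// syncWindowPreventsSync does: nil state means no operation is in flight,
// and a zero StartedAt is treated as "no start time recorded".
func syncContext(state *operationState) (bool, *time.Time) {
	isManual := false
	var start *time.Time
	if state != nil {
		isManual = !state.InitiatedByAutomated
		if !state.StartedAt.IsZero() {
			t := state.StartedAt
			start = &t
		}
	}
	return isManual, start
}

func main() {
	manual, start := syncContext(&operationState{InitiatedByAutomated: false, StartedAt: time.Now()})
	fmt.Println(manual, start != nil)
}
```

Copying `StartedAt` into a local before taking its address avoids aliasing the struct field through the returned pointer.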
// deriveServiceAccountToImpersonate determines the service account to be used for impersonation for the sync operation.
// The returned service account will be fully qualified including namespace and the service account name in the format system:serviceaccount:<namespace>:<service_account>
func deriveServiceAccountToImpersonate(project *v1alpha1.AppProject, application *v1alpha1.Application, destCluster *v1alpha1.Cluster) (string, error) {
// spec.Destination.Namespace is optional. If not specified, use the Application's
// namespace
serviceAccountNamespace := application.Spec.Destination.Namespace
if serviceAccountNamespace == "" {
serviceAccountNamespace = application.Namespace
// validateSyncPermissions checks whether the given resource is permitted by the project's
// allow/deny lists and destination rules. It returns an error if the API resource info is nil
// (preventing a nil-pointer panic), if the resource's group/kind is not permitted, or if
// the resource's namespace is not an allowed destination.
func validateSyncPermissions(
project *v1alpha1.AppProject,
destCluster *v1alpha1.Cluster,
getProjectClusters func(string) ([]*v1alpha1.Cluster, error),
un *unstructured.Unstructured,
res *metav1.APIResource,
) error {
if res == nil {
return fmt.Errorf("failed to get API resource info for %s/%s: unable to verify permissions", un.GroupVersionKind().Group, un.GroupVersionKind().Kind)
}
// Loop through the destinationServiceAccounts and see if there is any destination that is a candidate.
// if so, return the service account specified for that destination.
for _, item := range project.Spec.DestinationServiceAccounts {
dstServerMatched, err := glob.MatchWithError(item.Server, destCluster.Server)
if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
}
if res.Namespaced {
permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), getProjectClusters)
if err != nil {
return "", fmt.Errorf("invalid glob pattern for destination server: %w", err)
return err
}
dstNamespaceMatched, err := glob.MatchWithError(item.Namespace, application.Spec.Destination.Namespace)
if err != nil {
return "", fmt.Errorf("invalid glob pattern for destination namespace: %w", err)
}
if dstServerMatched && dstNamespaceMatched {
if strings.Trim(item.DefaultServiceAccount, " ") == "" || strings.ContainsAny(item.DefaultServiceAccount, serviceAccountDisallowedCharSet) {
return "", fmt.Errorf("default service account contains invalid chars '%s'", item.DefaultServiceAccount)
} else if strings.Contains(item.DefaultServiceAccount, ":") {
// service account is specified along with its namespace.
return "system:serviceaccount:" + item.DefaultServiceAccount, nil
}
// service account needs to be prefixed with a namespace
return fmt.Sprintf("system:serviceaccount:%s:%s", serviceAccountNamespace, item.DefaultServiceAccount), nil
if !permitted {
return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
}
}
// if there is no match found in the AppProject.Spec.DestinationServiceAccounts, use the default service account of the destination namespace.
return "", fmt.Errorf("no matching service account found for destination server %s and namespace %s", application.Spec.Destination.Server, serviceAccountNamespace)
return nil
}
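The nil check at the top of `validateSyncPermissions` guards against a nil-pointer panic when discovery returns no API resource info. A standalone sketch of just that guard (the `apiResource` type and `checkResource` helper are hypothetical stand-ins for `metav1.APIResource` and the real function):

```go
package main

import "fmt"

// apiResource stands in for metav1.APIResource in this sketch.
type apiResource struct {
	Name       string
	Namespaced bool
}

// checkResource returns an error instead of panicking when discovery
// information is missing, mirroring the nil-guard added above.
func checkResource(res *apiResource, group, kind string) error {
	if res == nil {
		return fmt.Errorf("failed to get API resource info for %s/%s: unable to verify permissions", group, kind)
	}
	return nil
}

func main() {
	if err := checkResource(nil, "apps", "Deployment"); err != nil {
		fmt.Println("guard fired:", err)
	}
}
```

Returning a descriptive error here lets the sync machinery surface the failure as an operation message rather than crashing the controller.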

View file

@ -13,6 +13,7 @@ import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
"github.com/argoproj/argo-cd/v3/common"
"github.com/argoproj/argo-cd/v3/controller/testdata"
@ -21,6 +22,7 @@ import (
"github.com/argoproj/argo-cd/v3/test"
"github.com/argoproj/argo-cd/v3/util/argo/diff"
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
"github.com/argoproj/argo-cd/v3/util/settings"
)
func TestPersistRevisionHistory(t *testing.T) {
@ -725,7 +727,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should be an error saying no valid match was found
@ -749,7 +751,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
@ -788,7 +790,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and should use the right service account for impersonation
require.NoError(t, err)
@ -827,7 +829,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be no error and it should use the first matching service account for impersonation
require.NoError(t, err)
@ -861,7 +863,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and should use the first matching glob pattern service account for impersonation
require.NoError(t, err)
@ -896,7 +898,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should be an error saying no match was found
require.EqualError(t, err, expectedErrMsg)
@ -924,7 +926,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the service account configured for with empty namespace should be used.
require.NoError(t, err)
@ -958,7 +960,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the catch all service account should be returned
require.NoError(t, err)
@ -982,7 +984,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination namespace")
@ -1016,7 +1018,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@ -1044,7 +1046,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
f.application.Spec.Destination.Name = f.cluster.Name
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@ -1127,7 +1129,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the right service account must be returned.
require.NoError(t, err)
@ -1166,7 +1168,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and first matching service account should be used
require.NoError(t, err)
@ -1200,7 +1202,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account of the glob pattern, being the first match should be returned.
@ -1235,7 +1237,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
// then, there an error with appropriate message must be returned
require.EqualError(t, err, expectedErr)
@ -1269,7 +1271,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there should not be any error and the service account of the glob pattern match must be returned.
require.NoError(t, err)
@ -1293,7 +1295,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
// then, there must be an error as the glob pattern is invalid.
require.ErrorContains(t, err, "invalid glob pattern for destination server")
@ -1327,7 +1329,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
// then, there should not be any error and the service account with the given namespace prefix must be returned.
require.NoError(t, err)
@ -1355,7 +1357,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
f.application.Spec.Destination.Name = f.cluster.Name
// when
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
assert.Equal(t, expectedSA, sa)
// then, there should not be any error and the service account with its namespace should be returned.
@ -1653,3 +1655,116 @@ func dig(obj any, path ...any) any {
return i
}
func TestValidateSyncPermissions(t *testing.T) {
t.Parallel()
newResource := func(group, kind, name, namespace string) *unstructured.Unstructured {
obj := &unstructured.Unstructured{}
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: group, Version: "v1", Kind: kind})
obj.SetName(name)
obj.SetNamespace(namespace)
return obj
}
project := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
},
}
destCluster := &v1alpha1.Cluster{
Server: "https://kubernetes.default.svc",
}
noopGetClusters := func(_ string) ([]*v1alpha1.Cluster, error) {
return nil, nil
}
t.Run("nil APIResource returns error", func(t *testing.T) {
t.Parallel()
un := newResource("apps", "Deployment", "my-deploy", "default")
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, nil)
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to get API resource info for apps/Deployment")
assert.Contains(t, err.Error(), "unable to verify permissions")
})
t.Run("permitted namespaced resource returns no error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "default")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
t.Run("group kind not permitted returns error", func(t *testing.T) {
t.Parallel()
projectWithDenyList := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "restricted-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "*", Server: "*"},
},
ClusterResourceBlacklist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
},
},
}
un := newResource("rbac.authorization.k8s.io", "ClusterRole", "my-role", "")
res := &metav1.APIResource{Name: "clusterroles", Namespaced: false}
err := validateSyncPermissions(projectWithDenyList, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "is not permitted in project")
})
t.Run("namespace not permitted returns error", func(t *testing.T) {
t.Parallel()
un := newResource("", "ConfigMap", "my-cm", "kube-system")
res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
require.Error(t, err)
assert.Contains(t, err.Error(), "namespace kube-system is not permitted in project")
})
t.Run("cluster-scoped resource skips namespace check", func(t *testing.T) {
t.Parallel()
projectWithClusterResources := &v1alpha1.AppProject{
ObjectMeta: metav1.ObjectMeta{
Name: "test-project",
Namespace: "argocd",
},
Spec: v1alpha1.AppProjectSpec{
Destinations: []v1alpha1.ApplicationDestination{
{Namespace: "default", Server: "*"},
},
ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
{Group: "*", Kind: "*"},
},
},
}
un := newResource("", "Namespace", "my-ns", "")
res := &metav1.APIResource{Name: "namespaces", Namespaced: false}
err := validateSyncPermissions(projectWithClusterResources, destCluster, noopGetClusters, un, res)
assert.NoError(t, err)
})
}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3 MiB

After

Width:  |  Height:  |  Size: 23 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 11 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 75 KiB

View file

@ -38,23 +38,23 @@ and others. Although you can make changes to these files and run them locally, i
1. Fork and clone the [Argo UI repository](https://github.com/argoproj/argo-ui).
2. `cd` into your `argo-ui` directory, and then run `yarn install`.
2. `cd` into your `argo-ui` directory, and then run `pnpm install`.
3. Make your file changes.
4. Run `yarn start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
4. Run `pnpm start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
5. Use [yarn link](https://classic.yarnpkg.com/en/docs/cli/link/) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
5. Use [pnpm link](https://pnpm.io/cli/link) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
* `cd argo-ui`
* `yarn link`
* `pnpm link`
* `cd ../argo-cd/ui`
* `yarn link argo-ui`
* `pnpm link argo-ui`
Once the `argo-ui` package has been successfully linked, test changes in your local development environment.
6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `yarn add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `pnpm add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/pnpm-lock.yaml` file to use the latest master commit for argo-ui.
8. Submit changes to `ui/yarn.lock` in a PR to Argo CD.
8. Submit changes to `ui/pnpm-lock.yaml` in a PR to Argo CD.

View file

@ -23,12 +23,37 @@ All following commands in this guide assume the namespace is already set.
kubectl config set-context --current --namespace=argocd
```
### Pull in all build dependencies
### Pull in all UI build dependencies
As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required dependencies, issue:
As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required UI dependencies (NPM packages), issue:
* `make dep-ui` or `make dep-ui-local`
These commands run `pnpm install --frozen-lockfile`, which installs only the package versions defined in the `pnpm-lock.yaml` file without trying to resolve and download new package versions.
### Updating UI build dependencies
If you need to add new UI dependencies or update existing ones, you need
to run a `pnpm` command in the `./ui` directory to resolve and download the new packages.
You can run it in the Docker container using the `make run-pnpm` make target.
For example, to add a new dependency `newpackage`, you may run a command like:
```shell
make run-pnpm PNPM_COMMAND="add newpackage --ignore-scripts"
```
To upgrade an existing package:
```shell
make run-pnpm PNPM_COMMAND="update existingpackage@1.0.2 --ignore-scripts"
```
Please follow security best practices when adding or upgrading
NPM dependencies, such as those described in this
[guide](https://github.com/lirantal/npm-security-best-practices/blob/main/README.md).
### Generate API glue code and other assets
Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, which makes heavy use of auto-generated glue code and stubs. Whenever you touch parts of the API code, you must re-generate the auto-generated code.
@ -60,7 +85,7 @@ The Linter might make some automatic changes to your code, such as indentation f
* Finally, after the Linter reports no errors, run `git status` or `git diff` to check for any changes made automatically by Lint
* If there were automatic changes, commit them to your local branch
If you touched UI code, you should also run the Yarn linter on it:
If you touched UI code, you should also run the linter on it:
* Run `make lint-ui` or `make lint-ui-local`
* Fix any of the errors reported by it

View file

@ -21,8 +21,8 @@ These are the upcoming releases dates:
| v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
| v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
| v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
| v3.4 | Monday, Mar. 16, 2026 | Tuesday, May. 5, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
| v3.5 | Tuesday, Jun. 16, 2026 | Tuesday, Aug. 4, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
Actual release dates might differ from the plan by a few days.
@@ -36,10 +36,10 @@ effectively means that there is a seven-week feature freeze.
These are the approximate release dates:
* The first Tuesday of February
* The first Tuesday of May
* The first Tuesday of August
* The first Tuesday of November
Dates may be shifted slightly to accommodate holidays. Those shifts should be minimal.
@@ -86,6 +86,7 @@ CVEs in Argo CD code will be patched for all supported versions. Read more about
Dependencies are evaluated before being introduced to ensure they:
1) are actively maintained
2) are maintained by trustworthy maintainers
These evaluations vary from dependency to dependency.


@@ -98,11 +98,15 @@ checks to see if the release came out correctly:
### If something went wrong
If something went wrong, the damage should be limited. Depending on which
steps have already been performed, you may need to clean up manually.
A new Argo CD release results in:
- A new GitHub release
- A stable Git tag pointing to the release (if it is the latest release)
- Go packages published so that Argo CD code can be used as a dependency
- Docker images and SBOM artifacts published
Because of all the artifacts listed above, if a release failed, it is not safe to delete and recreate it.
Instead, create the next patch release (for example, if `3.2.4` failed, create `3.2.5` after fixing the problem, but don't recreate `3.2.4`).
Once the fixed release (`3.2.5` in our example) has been published, manually copy the full release notes from the failed release (`3.2.4`) into it, and then update the failed release's notes to state that the release is invalid and should not be used.
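The "create the next patch, never recreate" rule can be illustrated with a tiny version-bump sketch. `next_patch` is a hypothetical helper for illustration only, not part of the release tooling:

```shell
# Hypothetical sketch: given a failed release version, compute the
# version of the follow-up patch release that replaces it.
next_patch() {
  prefix="${1%.*}"    # e.g. 3.2.4 -> 3.2
  patch="${1##*.}"    # e.g. 3.2.4 -> 4
  echo "${prefix}.$((patch + 1))"
}

next_patch 3.2.4   # prints 3.2.5
```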
### Manual releasing


@@ -212,7 +212,7 @@ export IMAGE_TAG=1.5.0-myrc
> [!NOTE]
> The image will be built for `linux/amd64` platform by default. If you are running on Mac with Apple chip (ARM),
> you need to specify the correct build platform by running:
> ```bash
> export TARGET_ARCH=linux/arm64
> ```


@@ -1,7 +1,8 @@
# Submitting PRs
## Prerequisites
1. [Development Environment](development-environment.md)
2. [Toolchain Guide](toolchain-guide.md)
3. [Development Cycle](development-cycle.md)
@@ -10,7 +11,7 @@
> [!NOTE]
> **Before you start**
>
> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the codebase.
>
> We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
> [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.
@@ -21,10 +22,10 @@ If you need guidance with submitting a PR, or have any other questions regarding
## Before Submitting a PR
1. Rebase your branch against upstream master:
```shell
git fetch upstream
git rebase upstream/master
```
2. Run pre-commit checks:
@@ -39,9 +40,9 @@ When you submit a PR against Argo CD's GitHub repository, a couple of CI checks
> [!NOTE]
> Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull the latest changes into your branch again.
Please understand that we, as an Open Source project, have limited capacities for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.
The following guide will help you to submit a PR that meets the standards of our CI tests:
## Title of the PR
@@ -56,6 +57,7 @@ We use [PR title checker](https://github.com/marketplace/actions/pr-title-checker) to check the PR title
* `docs` - Your PR improves the documentation
* `chore` - Your PR improves any internals of Argo CD, such as the build process, unit tests, etc.
* `refactor` - Your PR refactors the code base, without adding new features or fixing bugs
* `revert` - Your PR reverts a previous commit
Please prefix the title of your PR with one of the valid categories. For example, if you chose the title `Add documentation for GitHub SSO integration` for your PR, please use `docs: Add documentation for GitHub SSO integration` instead.
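The prefix check can be sketched as a small shell function. The category list below mirrors the valid categories documented on this page plus the common `feat`/`fix` types; the real PR title checker action uses its own configuration, so this is only an approximation:

```shell
# Approximate sketch of the PR title prefix check (the real CI action
# has its own configuration; the categories here are illustrative).
check_pr_title() {
  case "$1" in
    "feat: "*|"fix: "*|"docs: "*|"chore: "*|"refactor: "*|"revert: "*) echo "valid" ;;
    *) echo "invalid" ;;
  esac
}

check_pr_title "docs: Add documentation for GitHub SSO integration"   # valid
check_pr_title "Add documentation for GitHub SSO integration"         # invalid
```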


@@ -45,7 +45,7 @@ The Makefile's `start-e2e` target starts instances of ArgoCD on your local machi
- `ARGOCD_E2E_REPOSERVER_PORT`: Listener port for `argocd-reposerver` (default: `8081`)
- `ARGOCD_E2E_DEX_PORT`: Listener port for `dex` (default: `5556`)
- `ARGOCD_E2E_REDIS_PORT`: Listener port for `redis` (default: `6379`)
- `ARGOCD_E2E_PNPM_CMD`: Command to use for starting the UI via pnpm (default: `pnpm`)
- `ARGOCD_E2E_DIR`: Local path to the repository to use for ephemeral test data
If you have changed the port for `argocd-server`, be sure to also set the `ARGOCD_SERVER` environment variable to point to that port, e.g. `export ARGOCD_SERVER=localhost:8888`, before running `make test-e2e` so that the tests communicate with the correct server component.
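Keeping the two settings in sync can be sketched as below. The name of the API server port variable is an assumption based on the naming scheme of the other `ARGOCD_E2E_*_PORT` variables; check the Makefile for the exact name:

```shell
# Sketch: keep ARGOCD_SERVER in sync with a custom API server port.
# ARGOCD_E2E_APISERVER_PORT is an assumed variable name; verify it
# against the Makefile before relying on it.
export ARGOCD_E2E_APISERVER_PORT=8888
export ARGOCD_SERVER="localhost:${ARGOCD_E2E_APISERVER_PORT}"
echo "$ARGOCD_SERVER"   # localhost:8888

# Then (commented out here, as both require a full dev environment):
# make start-e2e
# make test-e2e   # run in a second shell
```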
