Mirror of https://github.com/argoproj/argo-cd, synced 2026-04-21 17:07:16 +00:00

Compare commits: v3.4.0-rc4...master (245 commits)
Commits (abbreviated SHA1; author, date, and message columns were not captured):

6256abf182, b01aa188fd, a7853eb7b6, e0e827dab0, 74d1fe0a13, b74c08ec5c, 032d9e1e80, b37d389f62, 26f71b3159, 99b10b5e29,
f54cc0bc61, 5103112b9a, 5a4a551478, 611fcb012c, 9c8ae9a294, 68505a81ed, 3132b0de4f, 25b3037485, c3af4251d8, 7f3ecfcf42,
8038e0ec96, 6c1fd67558, d017512baa, 29fd8db39a, 37e10dba75, f3b803f284, 4bc5d38634, 4f47dd0afa, 21615be541, 8fbb72d1eb,
87d79f9392, 4d2b6fa940, dce3f6e8a5, 9a19735918, 6bf97ec1fd, e6aa9059dd, 4f8f4d2e21, 2ccc2ea466, 19219e06d2, db7d672f05,
04fa70c4a4, 3eb5104750, 1a195cc04f, 576002fb72, a216fdb8f4, 9cfce1df0e, 30efe53bf2, 706a0370c2, ecc178f03e, 0a0cd0b687,
ea3dae667e, bfc332d871, 67de02b1c4, a1af401f5f, 3ce32a9880, 1bd0d48c82, 6ba0727217, 0c01fc895e, 7308ed98af, 1a0f5d4ef2,
7445f7ed73, daadf868db, 7accd34f64, 0737418abb, 0dd5e08d64, d65af147d2, c9b2e4b359, 579fbab195, c4f3e389a2, bd823728ac,
de9416137d, 73962555bb, b2a8bc99e4, cde9db8b29, 85913f797e, 25e0c38363, 28e13c3ec3, 9cfbeb72f0, 62422a9c30, c90b922522,
a98eba200e, 170b89fe7b, 1dd9075a72, 38a3826df8, cd8a25c195, 7b5b6a8744, 3a6083cb2d, fb82b16b2d, ae10c0c6c3, 9e80e058e7,
4220eddbf3, 422ef230fa, 1fde0d075f, f86cd078fc, c3b498c2ae, ad310c2452, 6743cdf9cc, 19983129f2, 880433f03b, 34b38428e9,
f5f3bf8a06, f1388674cc, 212f51d851, d3b06f113f, 86fcb1447f, 12b241a56e, a2b91ce309, 57dfe55e70, 8c29202f1c, 8e2571fdca,
21b826e204, 047c0ae734, 4c42071c7b, 6d3e641cca, 884ba71afc, 48f18e2905, b8da88a288, 7262e61704, 364bd00647, 9a05e0e7f3,
71da5f64ba, 9f723393e8, ba4d2a2104, 0e4f7c857d, bb66ffe0fa, 45a32a5c32, f298f4500f, 86a245c8bc, 3c47518db4, 5a11160e9c,
721a7e722e, 7af68d277f, 54f9cf08e4, 719ac073d8, fc03869180, bb2cfd9553, 43d94f2b55, 321153a69e, 873f63aa0d, b018313aec,
d449294f03, 908ce7ee49, 68cbd05e52, e21d471965, 04e4e080df, 0c4946f12f, 88663928f6, 5c03a8b37d, 490f02116c, 82789b7071,
5fa0045311, 44e08631f2, 62670d6595, fabbbbe6ee, 3eebbcb33b, 4259f467b0, 32f23a446f, 5101db5225, a5073f1ecc, bd1cccfb9a,
0e729cce34, fb1b240c9e, c52bf66380, e00345bff7, c3c12c1cad, e96063557a, bfe5cfb587, 393152ddad, 1042e12c6a, 0191c1684d,
ab0070994b, da7a61b75c, a892317c67, 303e001b8b, d75a6b1523, 1dc2ad04ff, 9ceaf0e8ee, 6a22728fd5, 0c02de795e, 8e0b6e689a,
5aa83735f2, 36f4ff7f35, 99c51dfd2c, a4c7f82c5b, 759e746e87, 94d8ba92a8, b532528a0b, 8705f6965e, 4aeca2fbf8, 2bbf91c0cf,
84442e03bc, f97e2d2844, e972bfca78, 1b405ce2b5, 45b926d796, d4ec3282d4, 4e3904a554, 8981a5b855, ab27dd3ccf, 269e0b850b,
3f15cc6c9e, 25df43d7a0, 6b35246605, bd7b16cbeb, e1bb509264, 3570031fa8, 3eee5e3f52, 77732d89b3, 4aabf526c8, 24c3abd8dd,
91d83d37c4, aabe8524ba, fe30b2c60a, 148c86ad42, 30db355197, 442aed496f, 87ccebc51a, 20439902eb, 559da44135, a87aab146e,
d34e83f60c, 566c172058, d80a122502, 539c35b295, 45a84dfa38, d011b7b508, f1b922765d, 4b4bbc8bb2, c5d1c914bb, 59aea0476a,
4cdc650a58, 2b6489828b, 92c3ef2559, 4070b6feea, 67db597810, 5b3073986f, 5ceb8354e6, 79922c06d6, 382c507beb, 8142920ab8,
47a0746851, 13cd517470, 63a009effa, 5a6c83229b, f409135f17
493 changed files with 992756 additions and 37970 deletions
`.github/configs/renovate-config.js` (vendored, 3 lines changed)

```diff
@@ -11,6 +11,7 @@ module.exports = {
     "github>argoproj/argo-cd//renovate-presets/custom-managers/yaml.json5",
     "github>argoproj/argo-cd//renovate-presets/fix/disable-all-updates.json5",
     "github>argoproj/argo-cd//renovate-presets/devtool.json5",
-    "github>argoproj/argo-cd//renovate-presets/docs.json5"
+    "github>argoproj/argo-cd//renovate-presets/docs.json5",
+    "group:aws-sdk-go-v2Monorepo"
   ]
 }
```
`.github/dependabot.yml` (vendored, 3 lines changed)

```diff
@@ -8,6 +8,9 @@ updates:
     ignore:
       - dependency-name: k8s.io/*
     groups:
+      aws-sdk-v2:
+        patterns:
+          - "github.com/aws/aws-sdk-go-v2*"
      otel:
        patterns:
          - "go.opentelemetry.io/*"
```
`.github/pr-title-checker-config.json` (vendored, 2 lines changed)

```diff
@@ -5,7 +5,7 @@
   },
   "CHECKS": {
     "prefixes": ["[Bot] docs: "],
-    "regexp": "^(refactor|feat|fix|docs|test|ci|chore)!?(\\(.*\\))?!?:.*"
+    "regexp": "^(refactor|feat|fix|docs|test|ci|chore|revert)!?(\\(.*\\))?!?:.*"
   },
   "MESSAGES": {
     "success": "PR title is valid",
```
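The `regexp` change in `pr-title-checker-config.json` extends the allowed conventional-commit types with `revert`. A quick sketch of how the pattern behaves, written as POSIX ERE (the JSON file double-escapes the backslashes; the sample titles below are illustrative, not from the repo):

```shell
# Updated PR-title pattern, now including the "revert" type.
pat='^(refactor|feat|fix|docs|test|ci|chore|revert)!?(\(.*\))?!?:.*'
check() { printf '%s\n' "$1" | grep -Eq "$pat"; }

check 'revert: roll back Go version bump' && r1=pass || r1=fail  # newly allowed type
check 'chore(deps): bump setup-go'        && r2=pass || r2=fail  # scoped type still works
check 'update readme'                     && r3=pass || r3=fail  # no type prefix: rejected
echo "$r1 $r2 $r3"
```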
`.github/workflows/bump-major-version.yaml` (vendored, 14 lines changed)

```diff
@@ -4,6 +4,10 @@ on:
 
 permissions: {}
 
+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   prepare-release:
     permissions:
@@ -12,6 +16,12 @@ jobs:
     name: Automatically update major version
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
@@ -37,7 +47,7 @@ jobs:
         working-directory: /home/runner/go/src/github.com/argoproj/argo-cd
 
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Add ~/go/bin to PATH
@@ -86,4 +96,4 @@ jobs:
             - [ ] Add an upgrade guide to the docs for this version
           branch: bump-major-version
           branch-suffix: random
-          signoff: true
\ No newline at end of file
+          signoff: true
```
`.github/workflows/cherry-pick-single.yml` (vendored, 24 lines changed)

```diff
@@ -25,14 +25,24 @@ on:
       CHERRYPICK_APP_PRIVATE_KEY:
         required: true
 
+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   cherry-pick:
     name: Cherry Pick to ${{ inputs.version_number }}
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Generate a token
         id: generate-token
-        uses: actions/create-github-app-token@29824e69f54612133e76f7eaac726eef6c875baf # v2.2.1
+        uses: actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3.1.1
         with:
           app-id: ${{ secrets.CHERRYPICK_APP_ID }}
           private-key: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
@@ -66,6 +76,7 @@ jobs:
 
           # Create new branch for cherry-pick
           CHERRY_PICK_BRANCH="cherry-pick-${{ inputs.pr_number }}-to-${TARGET_BRANCH}"
+
           git checkout -b "$CHERRY_PICK_BRANCH" "origin/$TARGET_BRANCH"
 
           # Perform cherry-pick
@@ -75,12 +86,17 @@ jobs:
             # Extract Signed-off-by from the cherry-pick commit
             SIGNOFF=$(git log -1 --pretty=format:"%B" | grep -E '^Signed-off-by:' || echo "")
 
-            # Push the new branch
-            git push origin "$CHERRY_PICK_BRANCH"
+            # Push the new branch. Force push to ensure that in case the original cherry-pick branch is stale,
+            # that the current state of the $TARGET_BRANCH + cherry-pick gets in $CHERRY_PICK_BRANCH.
+            git push origin -f "$CHERRY_PICK_BRANCH"
 
             # Save data for PR creation
             echo "branch_name=$CHERRY_PICK_BRANCH" >> "$GITHUB_OUTPUT"
-            echo "signoff=$SIGNOFF" >> "$GITHUB_OUTPUT"
+            {
+              echo "signoff<<EOF"
+              echo "$SIGNOFF"
+              echo "EOF"
+            } >> "$GITHUB_OUTPUT"
             echo "target_branch=$TARGET_BRANCH" >> "$GITHUB_OUTPUT"
           else
             echo "❌ Cherry-pick failed due to conflicts"
```
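The `signoff` change in `cherry-pick-single.yml` matters because `$SIGNOFF` can hold several `Signed-off-by:` trailers, and GitHub Actions requires the heredoc-style delimiter syntax for multiline values written to `$GITHUB_OUTPUT`. A minimal sketch of the pattern (a temp file stands in for the real `$GITHUB_OUTPUT`; the trailer values are made up):

```shell
# Simulate writing a multiline output the way the workflow now does.
GITHUB_OUTPUT="$(mktemp)"

SIGNOFF='Signed-off-by: Jane Doe <jane@example.com>
Signed-off-by: John Roe <john@example.com>'

# The plain form, signoff=$SIGNOFF, records only up to the first newline;
# the name<<DELIMITER ... DELIMITER form keeps the whole value.
{
  echo "signoff<<EOF"
  echo "$SIGNOFF"
  echo "EOF"
} >> "$GITHUB_OUTPUT"

lines=$(wc -l < "$GITHUB_OUTPUT")  # opening delimiter + two trailers + closing delimiter
echo "$lines"
```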
`.github/workflows/cherry-pick.yml` (vendored, 12 lines changed)

```diff
@@ -6,6 +6,10 @@ on:
       - master
     types: ["labeled", "closed"]
 
+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   find-labels:
     name: Find Cherry Pick Labels
@@ -18,6 +22,12 @@ jobs:
     outputs:
       labels: ${{ steps.extract-labels.outputs.labels }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Extract cherry-pick labels
         id: extract-labels
         run: |
@@ -50,4 +60,4 @@ jobs:
       pr_title: ${{ github.event.pull_request.title }}
     secrets:
       CHERRYPICK_APP_ID: ${{ vars.CHERRYPICK_APP_ID }}
-      CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
\ No newline at end of file
+      CHERRYPICK_APP_PRIVATE_KEY: ${{ secrets.CHERRYPICK_APP_PRIVATE_KEY }}
```
`.github/workflows/ci-build.yaml` (vendored, 193 lines changed)

```diff
@@ -14,7 +14,9 @@ on:
 env:
   # Golang version to use across CI steps
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.26.0'
+  GOLANG_VERSION: '1.26.2'
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
 
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
@@ -31,8 +33,13 @@ jobs:
       frontend: ${{ steps.filter.outputs.frontend_any_changed }}
       docs: ${{ steps.filter.outputs.docs_any_changed }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
-      - uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
+      - uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
         id: filter
         with:
           # Any file which is not under docs/, ui/ or is not a markdown file is counted as a backend file
@@ -54,10 +61,15 @@ jobs:
     needs:
       - changes
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Download all Go modules
@@ -74,18 +86,27 @@ jobs:
     needs:
       - changes
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
-      - name: Restore go build cache
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+      - name: Restore go build and module cache
+        uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
         with:
-          path: ~/.cache/go-build
-          key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
-      - name: Download all Go modules
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod
+          key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
+          restore-keys: |
+            ${{ runner.os }}-go-build-v1-
+      - name: Download Go modules
         run: |
           go mod download
       - name: Compile all packages
@@ -101,17 +122,22 @@ jobs:
     needs:
      - changes
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Run golangci-lint
         uses: golangci/golangci-lint-action@1e7e51e771db61008b38414a730f564565cf7c20 # v9.2.0
         with:
           # renovate: datasource=go packageName=github.com/golangci/golangci-lint/v2 versioning=regex:^v(?<major>\d+)\.(?<minor>\d+)\.(?<patch>\d+)?$
-          version: v2.11.3
+          version: v2.11.4
           args: --verbose
 
   test-go:
@@ -125,6 +151,11 @@ jobs:
       GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
       GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Create checkout directory
         run: mkdir -p ~/go/src/github.com/argoproj
       - name: Checkout code
@@ -132,7 +163,7 @@ jobs:
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Install required packages
@@ -151,11 +182,15 @@ jobs:
       - name: Add /usr/local/bin to PATH
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
-      - name: Restore go build cache
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+      - name: Restore go build and module cache
+        uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
         with:
-          path: ~/.cache/go-build
-          key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod
+          key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
+          restore-keys: |
+            ${{ runner.os }}-go-build-v1-
       - name: Install all tools required for building & testing
         run: |
           make install-test-tools-local
@@ -167,13 +202,13 @@ jobs:
         run: |
           git config --global user.name "John Doe"
           git config --global user.email "john.doe@example.com"
-      - name: Download and vendor all required packages
+      - name: Download Go modules
         run: |
           go mod download
       - name: Run all unit tests
         run: make test-local
       - name: Generate test results artifacts
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
         with:
           name: test-results
           path: test-results
@@ -189,6 +224,11 @@ jobs:
       GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
       GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Create checkout directory
         run: mkdir -p ~/go/src/github.com/argoproj
       - name: Checkout code
@@ -196,7 +236,7 @@ jobs:
       - name: Create symlink in GOPATH
         run: ln -s $(pwd) ~/go/src/github.com/argoproj/argo-cd
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Install required packages
@@ -215,11 +255,15 @@ jobs:
       - name: Add /usr/local/bin to PATH
         run: |
           echo "/usr/local/bin" >> $GITHUB_PATH
-      - name: Restore go build cache
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+      - name: Restore go build and module cache
+        uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
         with:
-          path: ~/.cache/go-build
-          key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod
+          key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
+          restore-keys: |
+            ${{ runner.os }}-go-build-v1-
       - name: Install all tools required for building & testing
         run: |
           make install-test-tools-local
@@ -231,13 +275,13 @@ jobs:
         run: |
           git config --global user.name "John Doe"
           git config --global user.email "john.doe@example.com"
-      - name: Download and vendor all required packages
+      - name: Download Go modules
         run: |
           go mod download
       - name: Run all unit tests
         run: make test-race-local
       - name: Generate test results artifacts
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
         with:
           name: race-results
           path: test-results/
@@ -249,10 +293,15 @@ jobs:
     needs:
       - changes
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Create symlink in GOPATH
@@ -306,26 +355,31 @@ jobs:
     needs:
       - changes
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+      - name: Install pnpm
+        uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
+        with:
+          package_json_file: ui/package.json
       - name: Setup NodeJS
-        uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
+        uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
         with:
           # renovate: datasource=node-version packageName=node versioning=node
-          node-version: '22.9.0'
-      - name: Restore node dependency cache
-        id: cache-dependencies
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
-        with:
-          path: ui/node_modules
-          key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
+          node-version: '24.14.1'
+          cache: 'pnpm'
+          cache-dependency-path: '**/pnpm-lock.yaml'
       - name: Install node dependencies
         run: |
-          cd ui && yarn install --frozen-lockfile --ignore-optional --non-interactive
+          cd ui && pnpm i --frozen-lockfile
       - name: Build UI code
         run: |
-          yarn test
-          yarn build
+          pnpm test
+          pnpm build
         env:
           NODE_ENV: production
           NODE_ONLINE_ENV: online
@@ -334,7 +388,7 @@ jobs:
           CODECOV_TOKEN: ${{ github.ref == 'refs/heads/master' && secrets.CODECOV_TOKEN || '' }}
         working-directory: ui/
       - name: Run ESLint
-        run: yarn lint
+        run: pnpm lint
         working-directory: ui/
 
   shellcheck:
@@ -359,19 +413,15 @@ jobs:
       sonar_secret: ${{ secrets.SONAR_TOKEN }}
       codecov_secret: ${{ secrets.CODECOV_TOKEN }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 0
-      - name: Restore node dependency cache
-        id: cache-dependencies
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
-        with:
-          path: ui/node_modules
-          key: ${{ runner.os }}-node-dep-v2-${{ hashFiles('**/yarn.lock') }}
-      - name: Remove other node_modules directory
-        run: |
-          rm -rf ui/node_modules/argo-ui/node_modules
       - name: Get e2e code coverage
         uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
         with:
@@ -392,7 +442,7 @@ jobs:
       - name: Upload code coverage information to codecov.io
         # Only run when the workflow is for upstream (PR target or push is in argoproj/argo-cd).
         if: github.repository == 'argoproj/argo-cd'
-        uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
+        uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
         with:
           files: test-results/full-coverage.out
           fail_ci_if_error: true
@@ -401,7 +451,7 @@ jobs:
       - name: Upload test results to Codecov
         # Codecov uploads test results to Codecov.io on upstream master branch.
         if: github.repository == 'argoproj/argo-cd' && github.ref == 'refs/heads/master' && github.event_name == 'push'
-        uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5.5.2
+        uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v6.0.0
         with:
           files: test-results/junit.xml
           fail_ci_if_error: true
@@ -411,7 +461,7 @@ jobs:
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
-        uses: SonarSource/sonarqube-scan-action@a31c9398be7ace6bbfaf30c0bd5d415f843d45e9 # v7.0.0
+        uses: SonarSource/sonarqube-scan-action@299e4b793aaa83bf2aba7c9c14bedbb485688ec4 # v7.1.0
         if: env.sonar_secret != ''
   test-e2e:
     name: Run end-to-end tests
@@ -444,6 +494,11 @@ jobs:
       GITHUB_TOKEN: ${{ secrets.E2E_TEST_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
       GITLAB_TOKEN: ${{ secrets.E2E_TEST_GITLAB_TOKEN }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Free Disk Space (Ubuntu)
         uses: jlumbroso/free-disk-space@54081f138730dfa15788a46383842cd2f914a1be
         with:
@@ -454,12 +509,21 @@ jobs:
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
       - name: Set GOPATH
         run: |
           echo "GOPATH=$HOME/go" >> $GITHUB_ENV
+      - name: Setup NodeJS
+        uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
+        with:
+          # renovate: datasource=node-version packageName=node versioning=node
+          node-version: '24.14.1'
+      - name: Install pnpm
+        uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
+        with:
+          package_json_file: ui/package.json
       - name: GH actions workaround - Kill XSP4 process
         run: |
           sudo pkill mono || true
@@ -475,11 +539,15 @@ jobs:
           sudo chown $(whoami) $HOME/.kube/config
           sudo chmod go-r $HOME/.kube/config
           kubectl version
-      - name: Restore go build cache
-        uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
+      - name: Restore go build and module cache
+        uses: actions/cache@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
         with:
-          path: ~/.cache/go-build
-          key: ${{ runner.os }}-go-build-v1-${{ github.run_id }}
+          path: |
+            ~/.cache/go-build
+            ~/go/pkg/mod
+          key: ${{ runner.os }}-go-build-v1-${{ hashFiles('**/go.sum') }}
+          restore-keys: |
+            ${{ runner.os }}-go-build-v1-
       - name: Add ~/go/bin to PATH
         run: |
           echo "$HOME/go/bin" >> $GITHUB_PATH
@@ -489,10 +557,12 @@ jobs:
       - name: Add ./dist to PATH
         run: |
           echo "$(pwd)/dist" >> $GITHUB_PATH
-      - name: Download Go dependencies
+      - name: Download Go modules
         run: |
           go mod download
-          go install github.com/mattn/goreman@latest
+      - name: Install goreman
+        run: |
+          go install github.com/mattn/goreman@v0.3.17
       - name: Install all tools required for building & testing
         run: |
           make install-test-tools-local
@@ -534,13 +604,13 @@ jobs:
           goreman run stop-all || echo "goreman trouble"
           sleep 30
       - name: Upload e2e coverage report
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
         with:
          name: e2e-code-coverage
          path: /tmp/coverage
        if: ${{ matrix.k3s.latest }}
       - name: Upload e2e-server logs
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
         with:
           name: e2e-server-k8s${{ matrix.k3s.version }}.log
           path: /tmp/e2e-server.log
@@ -560,6 +630,11 @@ jobs:
       - changes
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - run: |
           result="${{ needs.test-e2e.result }}"
           # mark as successful even if skipped
```
18
.github/workflows/codeql.yml
vendored
18
.github/workflows/codeql.yml
vendored
|
|
@ -28,6 +28,10 @@ concurrency:
|
|||
 permissions:
   contents: read

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   CodeQL-Build:
     permissions:

@@ -39,18 +43,24 @@ jobs:
     # CodeQL runs on ubuntu-latest and windows-latest
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Checkout repository
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

       # Use correct go version. https://github.com/github/codeql-action/issues/1842#issuecomment-1704398087
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version-file: go.mod

       # Initializes the CodeQL tools for scanning.
       - name: Initialize CodeQL
-        uses: github/codeql-action/init@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
+        uses: github/codeql-action/init@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
         # Override language selection by uncommenting this and choosing your languages
         # with:
         #   languages: go, javascript, csharp, python, cpp, java

@@ -58,7 +68,7 @@ jobs:
       # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
       # If this step fails, then you should remove it and run the build manually (see below)
       - name: Autobuild
-        uses: github/codeql-action/autobuild@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
+        uses: github/codeql-action/autobuild@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2

       # ℹ️ Command-line programs to run using the OS shell.
       # 📚 https://git.io/JvXDl

@@ -72,4 +82,4 @@ jobs:
       #   make release

       - name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
+        uses: github/codeql-action/analyze@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
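A recurring change across these workflows is the opt-out condition `if: ${{ vars.disable_harden_runner != 'true' }}`. Its semantics can be sketched in shell; `run_step` is illustrative, only the variable name comes from the diff:

```shell
# Shell sketch of the harden-runner opt-out: the step runs unless the
# repository variable disable_harden_runner is exactly the string "true".
# An unset variable renders as an empty string, so the step runs by default.
run_step() {
  if [ "${1:-}" != "true" ]; then
    echo "harden-runner runs (egress audited)"
  else
    echo "harden-runner skipped"
  fi
}

run_step ""      # variable unset: step runs
run_step "true"  # explicit opt-out: step skipped
run_step "false" # any other value: step runs
```

The `STEP_SECURITY_HARDEN_RUNNER` env entry added alongside it exposes the same variable to the job environment.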
36  .github/workflows/image-reuse.yaml  vendored
@@ -45,6 +45,10 @@ on:

 permissions: {}

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   publish:
     permissions:

@@ -55,6 +59,12 @@ jobs:
     outputs:
       image-digest: ${{ steps.image.outputs.digest }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:

@@ -67,16 +77,26 @@ jobs:
         if: ${{ github.ref_type != 'tag'}}

       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ inputs.go-version }}
           cache: false

       - name: Install cosign
-        uses: sigstore/cosign-installer@ba7bc0a3fef59531c69a25acd34668d6d3fe6f22 # v4.1.0
+        uses: sigstore/cosign-installer@cad07c2e89fa2edd6e2d7bab4c1aa38e53f76003 # v4.1.1

-      - uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
-      - uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
+      - name: Setup QEMU
+        uses: docker/setup-qemu-action@ce360397dd3f832beb865e1373c09c0e9f86d70a # v4.0.0
+        with:
+          image: tonistiigi/binfmt@sha256:d3b963f787999e6c0219a48dba02978769286ff61a5f4d26245cb6a6e5567ea3 #qemu-v10.0.4
+
+      - name: Setup Docker Buildx
+        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
+        with:
+          # buildkit v0.28.1
+          driver-opts: |
+            image=moby/buildkit@sha256:a82d1ab899cda51aade6fe818d71e4b58c4079e047a0cf29dbb93b2b0465ea69

       - name: Setup tags for container image as a CSV type
         run: |

@@ -103,7 +123,7 @@ jobs:
           echo 'EOF' >> $GITHUB_ENV

       - name: Login to Quay.io
-        uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
+        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
         with:
           registry: quay.io
           username: ${{ secrets.quay_username }}

@@ -111,7 +131,7 @@ jobs:
         if: ${{ inputs.quay_image_name && inputs.push }}

       - name: Login to GitHub Container Registry
-        uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
+        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
         with:
           registry: ghcr.io
           username: ${{ secrets.ghcr_username }}

@@ -119,7 +139,7 @@ jobs:
         if: ${{ inputs.ghcr_image_name && inputs.push }}

       - name: Login to dockerhub Container Registry
-        uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
+        uses: docker/login-action@4907a6ddec9925e35a0a9e82d7399ccc52663121 # v4.1.0
         with:
           username: ${{ secrets.docker_username }}
           password: ${{ secrets.docker_password }}

@@ -142,7 +162,7 @@ jobs:

       - name: Build and push container image
         id: image
-        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 #v7.0.0
+        uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f #v7.1.0
         with:
           context: .
           platforms: ${{ inputs.platforms }}
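The "Setup tags for container image as a CSV type" step above ends with `echo 'EOF' >> $GITHUB_ENV`, i.e. it uses the standard GitHub Actions multiline-value syntax (`KEY<<EOF ... EOF`) to pass the tag list to later steps. A standalone sketch, with a temp file standing in for `$GITHUB_ENV` and made-up tag values:

```shell
# Sketch of the KEY<<EOF multiline pattern for $GITHUB_ENV. The variable name
# TAGS and the tag values are assumptions for illustration; only the delimiter
# mechanics match the workflow step.
GITHUB_ENV=$(mktemp)
{
  echo 'TAGS<<EOF'
  echo 'quay.io/argoproj/argocd:latest'
  echo 'ghcr.io/argoproj/argocd:latest'
  echo 'EOF'
} >> "$GITHUB_ENV"
cat "$GITHUB_ENV"
```

Actions parses everything between the `TAGS<<EOF` line and the closing `EOF` line as the value of `TAGS`.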
14  .github/workflows/image.yaml  vendored
@@ -15,6 +15,10 @@ concurrency:

 permissions: {}

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   set-vars:
     permissions:

@@ -31,6 +35,12 @@ jobs:
       ghcr_provenance_image: ${{ steps.image.outputs.ghcr_provenance_image }}
      allow_ghcr_publish: ${{ steps.image.outputs.allow_ghcr_publish }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

       - name: Set image tag and names

@@ -86,7 +96,7 @@ jobs:
     with:
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.26.0
+      go-version: 1.26.2
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: false

@@ -103,7 +113,7 @@ jobs:
       ghcr_image_name: ${{ needs.set-vars.outputs.ghcr_image_name }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
      # renovate: datasource=golang-version packageName=golang
-      go-version: 1.26.0
+      go-version: 1.26.2
       platforms: ${{ needs.set-vars.outputs.platforms }}
       push: true
     secrets:
10  .github/workflows/init-release.yaml  vendored
@@ -14,6 +14,10 @@ on:

 permissions: {}

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   prepare-release:
     permissions:

@@ -28,6 +32,12 @@ jobs:
       IMAGE_NAMESPACE: ${{ vars.IMAGE_NAMESPACE || 'argoproj' }}
       IMAGE_REPOSITORY: ${{ vars.IMAGE_REPOSITORY || 'argocd' }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
10  .github/workflows/pr-title-check.yml  vendored
@@ -6,6 +6,10 @@ on:

 permissions: {}

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 # PR updates can happen in quick succession leading to this
 # workflow being trigger a number of times. This limits it
 # to one run per PR.

@@ -21,6 +25,12 @@ jobs:
     name: Validate PR Title
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - uses: thehanimo/pr-title-checker@7fbfe05602bdd86f926d3fb3bccb6f3aed43bc70 # v1.4.3
         with:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
63  .github/workflows/release.yaml  vendored
@@ -11,8 +11,10 @@ permissions: {}

 env:
   # renovate: datasource=golang-version packageName=golang
-  GOLANG_VERSION: '1.26.0' # Note: go-version must also be set in job argocd-image.with.go-version
+  GOLANG_VERSION: '1.26.2' # Note: go-version must also be set in job argocd-image.with.go-version
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}

 jobs:
   argocd-image:
     needs: [setup-variables]

@@ -26,7 +28,7 @@ jobs:
       quay_image_name: ${{ needs.setup-variables.outputs.quay_image_name }}
       # Note: cannot use env variables to set go-version (https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations)
       # renovate: datasource=golang-version packageName=golang
-      go-version: 1.26.0
+      go-version: 1.26.2
       platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
       push: true
     secrets:

@@ -47,6 +49,11 @@ jobs:
       provenance_image: ${{ steps.var.outputs.provenance_image }}
       allow_fork_release: ${{ steps.var.outputs.allow_fork_release }}
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:

@@ -133,7 +140,7 @@ jobs:
         run: git fetch --force --tags

       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
           cache: false

@@ -159,10 +166,10 @@ jobs:
           tool-cache: false

       - name: Run GoReleaser
-        uses: goreleaser/goreleaser-action@ec59f474b9834571250b370d4735c50f8e2d1e29 # v7.0.0
+        uses: goreleaser/goreleaser-action@e24998b8b67b290c2fa8b7c14fcfa7de2c5c9b8c # v7.1.0
         id: run-goreleaser
         with:
-          version: latest
+          version: v2.14.3
           args: release --clean --timeout 55m
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -218,8 +225,13 @@ jobs:
           fetch-depth: 0
           token: ${{ secrets.GITHUB_TOKEN }}

+      - name: Install pnpm
+        uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v5.0.0
+        with:
+          package_json_file: ui/package.json
+
       - name: Setup Golang
-        uses: actions/setup-go@4b73464bb391d4059bd26b0524d20df3927bd417 # v6.3.0
+        uses: actions/setup-go@4a3601121dd01d1626a1e23e37211e3254c1c06c # v6.4.0
         with:
           go-version: ${{ env.GOLANG_VERSION }}
           cache: false

@@ -231,28 +243,37 @@ jobs:
           SPDX_GEN_VERSION: v0.0.13
           # defines the sigs.k8s.io/bom version to use.
           SIGS_BOM_VERSION: v0.2.1
-          # comma delimited list of project relative folders to inspect for package
-          # managers (gomod, yarn, npm).
-          PROJECT_FOLDERS: '.,./ui'
           # full qualified name of the docker image to be inspected
           DOCKER_IMAGE: ${{ needs.setup-variables.outputs.quay_image_name }}
         run: |
-          yarn install --cwd ./ui
+          set -euo pipefail
+          pnpm install --dir ./ui --frozen-lockfile
           go install github.com/spdx/spdx-sbom-generator/cmd/generator@$SPDX_GEN_VERSION
           go install sigs.k8s.io/bom/cmd/bom@$SIGS_BOM_VERSION

           # Generate SPDX for project dependencies analyzing package managers
-          for folder in $(echo $PROJECT_FOLDERS | sed "s/,/ /g")
-          do
-            generator -p $folder -o /tmp
-          done
+          generator -p . -o /tmp

+          # When ui/ should use in-repo pnpm for `pnpm sbom` (11+):
+          # 1. In ui/package.json set "packageManager" to a pnpm 11+ release (e.g. pnpm@11.0.0), then from ./ui run
+          #    `pnpm install` and commit the resulting ui/pnpm-lock.yaml so release CI's pnpm/action-setup matches.
+          # 2. Delete hack/generate-ui-pnpm-sbom.sh and remove the ./hack/generate-ui-pnpm-sbom.sh line below.
+          # 3. Uncomment:
+          #    pnpm --dir ./ui sbom --sbom-format spdx --prod > /tmp/bom-ui-pnpm.spdx.json
+          ./hack/generate-ui-pnpm-sbom.sh --write /tmp/bom-ui-pnpm.spdx.json
+
           # Generate SPDX for binaries analyzing the docker image
-          if [[ ! -z $DOCKER_IMAGE ]]; then
-            bom generate -o /tmp/bom-docker-image.spdx -i $DOCKER_IMAGE
+          if [[ -n "${DOCKER_IMAGE:-}" ]]; then
+            bom generate -o /tmp/bom-docker-image.spdx -i "${DOCKER_IMAGE}"
           fi

-          cd /tmp && tar -zcf sbom.tar.gz *.spdx
+          cd /tmp
+          shopt -s nullglob
+          spdx_files=( *.spdx )
+          shopt -u nullglob
+          if [[ ${#spdx_files[@]} -eq 0 ]]; then
+            echo "No .spdx files produced under /tmp"
+            exit 1
+          fi
+          tar -zcf sbom.tar.gz "${spdx_files[@]}" bom-ui-pnpm.spdx.json

       - name: Generate SBOM hash
         shell: bash

@@ -264,7 +285,7 @@ jobs:
           echo "hashes=$(sha256sum /tmp/sbom.tar.gz | base64 -w0)" >> "$GITHUB_OUTPUT"

       - name: Upload SBOM
-        uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2.5.0
+        uses: softprops/action-gh-release@b4309332981a82ec1c5618f44dd2e27cc8bfbfda # v3.0.0
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         with:
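The SBOM packaging change in release.yaml above replaces a bare `tar -zcf sbom.tar.gz *.spdx` with a `nullglob` guard. The point: without `nullglob`, an unmatched `*.spdx` glob stays as the literal string `*.spdx` and tar fails confusingly on a nonexistent file; with the guard, the script fails loudly instead. A standalone sketch (scratch directory and sample file are assumptions):

```shell
# Standalone sketch of the nullglob guard from the SBOM packaging step.
set -euo pipefail
workdir=$(mktemp -d)
cd "$workdir"
printf 'demo' > bom-sample.spdx   # stand-in for SPDX generator output

shopt -s nullglob                 # unmatched globs expand to nothing
spdx_files=( *.spdx )
shopt -u nullglob

if [[ ${#spdx_files[@]} -eq 0 ]]; then
  echo "No .spdx files produced under $workdir" >&2
  exit 1
fi
tar -zcf sbom.tar.gz "${spdx_files[@]}"
tar -tzf sbom.tar.gz              # lists the archived .spdx files
```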
34  .github/workflows/renovate.yaml  vendored
@@ -7,14 +7,38 @@ on:
 permissions:
   contents: read

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   renovate:
     runs-on: ubuntu-24.04
     if: github.repository == 'argoproj/argo-cd'
     steps:
+      - name: Harden the runner (Block unknown outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: block
+          disable-sudo-and-containers: "false" # renovatebot runs in `docker run`
+          allowed-endpoints: >
+            github.com:443
+            api.github.com:443
+            raw.githubusercontent.com:443
+            release-assets.githubusercontent.com:443
+            ghcr.io:443
+            pkg-containers.githubusercontent.com:443
+            hub.docker.com:443
+            proxy.golang.org:443
+            nodejs.org:443
+            pypi.org:443
+            get.helm.sh
+            registry.npmjs.org
+
       - name: Get token
         id: get_token
-        uses: actions/create-github-app-token@d72941d797fd3113feb6b93fd0dec494b13a2547 # v1
+        uses: actions/create-github-app-token@1b10c78c7865c340bc4f6099eb2f838309f1e8c3 # v3
         with:
           app-id: ${{ vars.RENOVATE_APP_ID }}
           private-key: ${{ secrets.RENOVATE_APP_PRIVATE_KEY }}

@@ -22,11 +46,17 @@ jobs:
       - name: Checkout
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # 6.0.2

+      # Renovate do not pin their docker image versions to SHA, so
+      # when bumping renovate action version please check if renovate image
+      # has been updated (see it's numeric version in action.yaml)
+      # and update `renovate-version` parameter accordingly
       - name: Self-hosted Renovate
-        uses: renovatebot/github-action@abd08c7549b2a864af5df4a2e369c43f035a6a9d #46.1.5
+        uses: renovatebot/github-action@83ec54fee49ab67d9cd201084c1ff325b4b462e4 #46.1.10
         with:
           configurationFile: .github/configs/renovate-config.js
           token: '${{ steps.get_token.outputs.token }}'
+          renovate-image: "ghcr.io/renovatebot/renovate@sha256"
+          renovate-version: "5dfeab680f40edd2713b8fcae574824e60d2c831b8d89cc965e51621894c7084" #43
         env:
           LOG_LEVEL: 'debug'
           RENOVATE_REPOSITORIES: '${{ github.repository }}'
10  .github/workflows/scorecard.yaml  vendored
@@ -29,6 +29,12 @@ jobs:
     if: github.repository == 'argoproj/argo-cd'

     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+
       - name: "Checkout code"
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:

@@ -54,7 +60,7 @@ jobs:
       # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
       # format to the repository Actions tab.
       - name: "Upload artifact"
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@043fb46d1a93c77aae656e7c1c64a875d1fc6a0a # v7.0.1
         with:
           name: SARIF file
           path: results.sarif

@@ -62,6 +68,6 @@ jobs:

       # Upload the results to GitHub's code scanning dashboard.
       - name: "Upload to code-scanning"
-        uses: github/codeql-action/upload-sarif@8fcfedf57053e09257688fce7a0beeb18b1b9ae3 # v2.17.2
+        uses: github/codeql-action/upload-sarif@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4.35.2
         with:
           sarif_file: results.sarif
13  .github/workflows/stale.yaml  vendored
@@ -8,10 +8,23 @@ permissions:
   issues: write
   pull-requests: write

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   stale:
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Block unknown outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: block
+          disable-sudo-and-containers: "true"
+          allowed-endpoints: >
+            api.github.com:443
+
       - uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f # v10.2.0
         with:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
10  .github/workflows/update-snyk.yaml  vendored
@@ -7,6 +7,10 @@ on:
 permissions:
   contents: read

+env:
+  # a workaround to disable harden runner
+  STEP_SECURITY_HARDEN_RUNNER: ${{ vars.disable_harden_runner }}
+
 jobs:
   snyk-report:
     permissions:

@@ -16,6 +20,12 @@ jobs:
     name: Update Snyk report in the docs directory
     runs-on: ubuntu-24.04
     steps:
+      - name: Harden the runner (Audit all outbound calls)
+        if: ${{ vars.disable_harden_runner != 'true' }}
+        uses: step-security/harden-runner@8d3c67de8e2fe68ef647c8db1e6a09f647780f40 # v2.19.0
+        with:
+          egress-policy: audit
+          agent-enabled: "false"
       - name: Checkout code
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
5  .gitignore  vendored
@@ -16,6 +16,8 @@ coverage.out
 test-results
 .scannerwork
 .scratch
+# pnpm SBOM helper (hack/generate-ui-pnpm-sbom.sh) download cache — remove this line when that script is deleted.
+hack/.cache/
 node_modules/
 .kube/
 ./test/cmp/*.sock

@@ -24,6 +26,9 @@ node_modules/
 .*.swp
 rerunreport.txt

+# AI tools support
+CLAUDE.local.md
+
 # ignore built binaries
 cmd/argocd/argocd
 cmd/argocd-application-controller/argocd-application-controller
@@ -145,16 +145,19 @@ linters:
       strconcat: true

     revive:
       enable-all-rules: false
       enable-default-rules: true
       max-open-files: 2048
       # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
       rules:
-        - name: bool-literal-in-expr
-
+        - name: blank-imports
+          disabled: true
+
+        - name: bool-literal-in-expr
+
         - name: context-as-argument
           arguments:
-            - allowTypesBefore: '*testing.T,testing.TB'
+            - allow-types-before: '*testing.T,testing.TB'

         - name: context-keys-type
           disabled: true

@@ -166,14 +169,11 @@ linters:

         - name: early-return
           arguments:
-            - preserveScope
+            - preserve-scope

         - name: empty-block
           disabled: true

-        - name: error-naming
-          disabled: true
-
         - name: error-return

         - name: error-strings

@@ -181,6 +181,9 @@ linters:

         - name: errorf

+        - name: exported
+          disabled: true
+
         - name: identical-branches

         - name: if-return

@@ -189,7 +192,7 @@ linters:

         - name: indent-error-flow
           arguments:
-            - preserveScope
+            - preserve-scope

         - name: modifies-parameter

@@ -206,7 +209,7 @@ linters:

         - name: superfluous-else
           arguments:
-            - preserveScope
+            - preserve-scope

         - name: time-equal

@@ -216,6 +219,8 @@ linters:
         - name: unexported-return
           disabled: true

+        - name: unnecessary-format
+
         - name: unnecessary-stmt

         - name: unreachable-code

@@ -232,8 +237,8 @@ linters:
           arguments:
             - - ID
             - - VM
-            - - skipPackageNameChecks: true
-                upperCaseConst: true
+            - - skip-initialism-name-checks: true
+                upper-case-const: true

     staticcheck:
       checks:

@@ -255,7 +260,4 @@ linters:
     usetesting:
       os-mkdir-temp: false

-output:
-  show-stats: false
-
 version: "2"
53  AGENTS.md  Normal file
@@ -0,0 +1,53 @@
+# AI Agent Directives for Argo CD
+
+**ATTENTION AI AGENTS, LLMs, AND AUTONOMOUS CODING TOOLS:**
+This repository (argoproj/argo-cd) enforces strict contribution guidelines. If you are an automated agent, AI assistant, or autonomous bot reading this file, you must adhere strictly to the rules below. Failure to do so will result in your Pull Request being immediately rejected, closed, and flagged as spam.
+
+## 1. Zero-Spam & PR Authorization Policy
+* **Require an Issue:** DO NOT create a Pull Request unless there is an existing, open, and approved GitHub Issue that explicitly requests this work.
+* **No "Drive-By" Refactoring:** Do not submit unsolicited PRs that only contain minor stylistic changes, variable renames, or typo fixes across the codebase unless tied to an approved `chore` issue.
+* **No Hallucinated URLs:** Do not include fabricated links, hallucinated documentation, or fake GitHub usernames in the PR description or code comments. Please double-check any link, quote or code block that is included into the PR.
+
+## 2. Argo CD Contribution Requirements
+Argo CD is a CNCF Graduated project. All code must meet the following standards:
+
+* **Semantic PR Titles:** You must use Semantic Pull Request formatting for your PR title. Valid prefixes are:
+  * `ci:` - Updates or improvements for the Continuous Integration workflows
+  * `fix:` - Bug fixes
+  * `feat:` - New features
+  * `test:` - Addition of tests to the code base, or improvements of existing ones
+  * `docs:` - Documentation improvements
+  * `chore:` - Internals, build processes, unit tests, etc.
+  * `refactor:` - Refactoring of the code base, without adding new features or fixing bugs
+  * `revert:` - Reverts a previous commit
+* **PR Templates:** You must fully complete the Argo CD Pull Request template. Do not delete the template sections or leave them blank.
+
+## 3. Tech Stack & Code Rules
+* **Backend (Go):** The backend is written in Go. The minimum supported Go version is strictly enforced. You must use `go modules` for dependency management.
+* **UI (React/TypeScript):** The frontend is written in React and TypeScript.
+* **Kubernetes Manifests:** Argo CD heavily relies on Kubernetes manifests and CRDs. If you modify API structs, you MUST regenerate the manifests and API glue code.
+* **Tests** Argo CD relies on automatic tests. If your PR adds new functionality or in any way modifies program behaviour, please add/change relevant unit and e2e tests. In those cases when it is not feasible or possible please document the reasons in the PR comment.
+
+## 4. Required Local Checks (Do This Before Committing)
+Do not finalize your code or suggest a commit to your user without ensuring the following `make` targets pass successfully. Argo CD uses a heavy CI pipeline, and failing these basic checks wastes project resources:
+
+1. **Build the Code:** `make build`
+2. **Generate API Code & Manifests:** `make codegen` *(CRITICAL: Must be run if any API structs are changed)*
+3. **Linting:** `make lint` and `make lint-ui`
+4. **Testing:** `make test`
+5. **CLI Build:** `make cli`
+
+If any of these commands fail, you must fix the errors before proceeding.
+
+## 5. Documentation (`docs/`)
+If you are modifying or adding a feature, you must also update the corresponding documentation.
+* Write in clear, direct English.
+* Use GitHub style admonition blocks (e.g., `> [!NOTE]`, `> [!WARNING]`) compatible with MkDocs Material.
+* Code examples in documentation must be complete, accurate, and include the language identifier for syntax highlighting (e.g., ````yaml`).
+
+## Summary of Agent Workflow
+1. Verify an open issue exists.
+2. Write code matching Argo CD's Go/React standards.
+3. Run `make codegen`, `make lint`, and `make test`.
+4. Format the PR title properly (e.g., `fix: resolve OutOfSync bug on PostDelete hook (#12345)`).
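The semantic-title rule in the new AGENTS.md above is enforced in CI by thehanimo/pr-title-checker (see pr-title-check.yml); the prefix check itself can be sketched with a simple case statement. The `check_title` function is purely illustrative, not the project's actual checker configuration:

```shell
# Illustrative prefix check for the semantic PR title rule listed above.
check_title() {
  case "$1" in
    ci:*|fix:*|feat:*|test:*|docs:*|chore:*|refactor:*|revert:*) echo "valid" ;;
    *) echo "invalid" ;;
  esac
}

check_title "fix: resolve OutOfSync bug on PostDelete hook (#12345)"  # -> valid
check_title "Updated some files"                                       # -> invalid
```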
1  CLAUDE.md  Normal file
@@ -0,0 +1 @@
+@AGENTS.md
13  Dockerfile
@@ -4,7 +4,7 @@ ARG BASE_IMAGE=docker.io/library/ubuntu:25.10@sha256:4a9232cc47bf99defcc8860ef62
 # Initial stage which pulls prepares build dependencies and CLI tooling we need for our final image
 # Also used as the image in CI jobs so needs all dependencies
 ####################################################################################################
-FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS builder
+FROM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36 AS builder

 WORKDIR /tmp

@@ -92,25 +92,24 @@ WORKDIR /home/argocd
 ####################################################################################################
 # Argo CD UI stage
 ####################################################################################################
-FROM --platform=$BUILDPLATFORM docker.io/library/node:23.0.0@sha256:9d09fa506f5b8465c5221cbd6f980e29ae0ce9a3119e2b9bc0842e6a3f37bb59 AS argocd-ui
+FROM --platform=$BUILDPLATFORM docker.io/library/node:24.14.1@sha256:80fc934952c8f1b2b4d39907af7211f8a9fff1a4c2cf673fb49099292c251cec AS argocd-ui

 WORKDIR /src
-COPY ["ui/package.json", "ui/yarn.lock", "./"]
+COPY ["ui/package.json", "ui/pnpm-lock.yaml", "./"]

-RUN yarn install --network-timeout 200000 && \
-    yarn cache clean
+RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install --frozen-lockfile

 COPY ["ui/", "."]

 ARG ARGO_VERSION=latest
 ENV ARGO_VERSION=$ARGO_VERSION
 ARG TARGETARCH
-RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 yarn build
+RUN HOST_ARCH=$TARGETARCH NODE_ENV='production' NODE_ONLINE_ENV='online' NODE_OPTIONS=--max_old_space_size=8192 pnpm build

 ####################################################################################################
 # Argo CD Build stage which performs the actual build of Argo CD binaries
 ####################################################################################################
-FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84 AS argocd-build
+FROM --platform=$BUILDPLATFORM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36 AS argocd-build

 WORKDIR /go/src/github.com/argoproj/argo-cd
@@ -1,4 +1,4 @@
-FROM docker.io/library/golang:1.26.0@sha256:fb612b7831d53a89cbc0aaa7855b69ad7b0caf603715860cf538df854d047b84
+FROM docker.io/library/golang:1.26.2@sha256:5f3787b7f902c07c7ec4f3aa91a301a3eda8133aa32661a3b3a3a86ab3a68a36

 ENV DEBIAN_FRONTEND=noninteractive
@@ -1,9 +1,10 @@
-FROM node:20
+FROM node:24.14.1@sha256:80fc934952c8f1b2b4d39907af7211f8a9fff1a4c2cf673fb49099292c251cec

 WORKDIR /app/ui

 COPY ui /app/ui

-RUN yarn install
+RUN npm install -g corepack@0.34.6 && corepack enable && pnpm install --frozen-lockfile

-ENTRYPOINT ["yarn", "start"]
+ENTRYPOINT ["pnpm", "start"]
|
|
@ -20,7 +20,7 @@ This document lists the maintainers of the Argo CD project.
|
|||
| Christian Hernandez | [christianh814](https://github.com/christianh814) | Reviewer(docs) | [Akuity](https://akuity.io/) |
|
||||
| Peter Jiang | [pjiang-dev](https://github.com/pjiang-dev) | Approver(docs) | [Intuit](https://www.intuit.com/) |
|
||||
| Andrii Korotkov | [andrii-korotkov](https://github.com/andrii-korotkov) | Reviewer | [Verkada](https://www.verkada.com/) |
|
||||
| Pasha Kostohrys | [pasha-codefresh](https://github.com/pasha-codefresh) | Approver | [Codefresh](https://www.github.com/codefresh/) |
|
||||
| Pasha Kostohrys | [pasha-codefresh](https://github.com/pasha-codefresh) | Approver | [Octopus Deploy](https://octopus.com/) |
|
||||
| Nitish Kumar | [nitishfy](https://github.com/nitishfy) | Approver(cli,docs) | [Akuity](https://akuity.io/) |
|
||||
| Justin Marquis | [34fathombelow](https://github.com/34fathombelow) | Approver(docs/ci) | [Akuity](https://akuity.io/) |
|
||||
| Alexander Matyushentsev | [alexmt](https://github.com/alexmt) | Lead | [Akuity](https://akuity.io/) |
|
||||
|
|
|
|||
19  Makefile
@ -74,7 +74,7 @@ ARGOCD_E2E_APISERVER_PORT?=8080
|
|||
ARGOCD_E2E_REPOSERVER_PORT?=8081
|
||||
ARGOCD_E2E_REDIS_PORT?=6379
|
||||
ARGOCD_E2E_DEX_PORT?=5556
|
||||
ARGOCD_E2E_YARN_HOST?=localhost
|
||||
ARGOCD_E2E_JS_HOST?=localhost
|
||||
ARGOCD_E2E_DISABLE_AUTH?=
|
||||
ARGOCD_E2E_DIR?=/tmp/argo-e2e
|
||||
|
||||
|
|
@ -113,7 +113,7 @@ define run-in-test-server
|
|||
-e GOCACHE=/tmp/go-build-cache \
|
||||
-e ARGOCD_IN_CI=$(ARGOCD_IN_CI) \
|
||||
-e ARGOCD_E2E_TEST=$(ARGOCD_E2E_TEST) \
|
||||
-e ARGOCD_E2E_YARN_HOST=$(ARGOCD_E2E_YARN_HOST) \
|
||||
-e ARGOCD_E2E_JS_HOST=$(ARGOCD_E2E_JS_HOST) \
|
||||
-e ARGOCD_E2E_DISABLE_AUTH=$(ARGOCD_E2E_DISABLE_AUTH) \
|
||||
-e ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} \
|
||||
-e ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} \
|
||||
|
|
@ -419,7 +419,7 @@ lint-ui: test-tools-image
|
|||
|
||||
.PHONY: lint-ui-local
|
||||
lint-ui-local:
|
||||
cd ui && yarn lint
|
||||
cd ui && pnpm lint
|
||||
|
||||
# Build all Go code
|
||||
.PHONY: build
|
||||
|
|
@@ -487,7 +487,7 @@ test-e2e:
 test-e2e-local: cli-local
 	# NO_PROXY ensures all tests don't go out through a proxy if one is configured on the test system
 	export GO111MODULE=off
-	DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"
+	ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=$${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} DIST_DIR=${DIST_DIR} RERUN_FAILS=$(ARGOCD_E2E_RERUN_FAILS) PACKAGES="./test/e2e" ARGOCD_E2E_RECORD=${ARGOCD_E2E_RECORD} ARGOCD_CONFIG_DIR=$(HOME)/.config/argocd-e2e ARGOCD_GPG_ENABLED=true NO_PROXY=* ./hack/test.sh -timeout $(ARGOCD_E2E_TEST_TIMEOUT) -v -args -test.gocoverdir="$(PWD)/test-results"

 # Spawns a shell in the test server container for debugging purposes
 debug-test-server: test-tools-image
@@ -662,8 +662,17 @@ install-go-tools-local:
 dep-ui: test-tools-image
 	$(call run-in-test-client,make dep-ui-local)

 .PHONY: dep-ui-local
 dep-ui-local:
-	cd ui && yarn install
+	cd ui && pnpm install --frozen-lockfile
+
+.PHONY: run-pnpm
+run-pnpm: test-tools-image
+	$(call run-in-test-client,make 'PNPM_COMMAND=$(PNPM_COMMAND)' run-pnpm-local)
+
+.PHONY: run-pnpm-local
+run-pnpm-local:
+	cd ui && pnpm $(PNPM_COMMAND)

 start-test-k8s:
 	go run ./hack/k8s
Procfile (6 changes)
@@ -2,13 +2,13 @@ controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run
 api-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/api-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-server $COMMAND --loglevel debug --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --disable-auth=${ARGOCD_E2E_DISABLE_AUTH:-'true'} --insecure --dex-server http://localhost:${ARGOCD_E2E_DEX_PORT:-5556} --repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081} --port ${ARGOCD_E2E_APISERVER_PORT:-8080} --otlp-address=${ARGOCD_OTLP_ADDRESS} --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --hydrator-enabled=${ARGOCD_HYDRATOR_ENABLED:='false'}"
 dex: sh -c "ARGOCD_BINARY_NAME=argocd-dex go run github.com/argoproj/argo-cd/v3/cmd gendexcfg -o `pwd`/dist/dex.yaml && (test -f dist/dex.yaml || { echo 'Failed to generate dex configuration'; exit 1; }) && docker run --rm -p ${ARGOCD_E2E_DEX_PORT:-5556}:${ARGOCD_E2E_DEX_PORT:-5556} -v `pwd`/dist/dex.yaml:/dex.yaml ghcr.io/dexidp/dex:$(grep "image: ghcr.io/dexidp/dex:v2.45.0" manifests/base/dex/argocd-dex-server-deployment.yaml | cut -d':' -f3) dex serve /dex.yaml"
 redis: hack/start-redis-with-password.sh
-repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=./dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
+repo-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "export PATH=\$(pwd)/dist:\$PATH && [ -n \"\$ARGOCD_GIT_CONFIG\" ] && export GIT_CONFIG_GLOBAL=\$ARGOCD_GIT_CONFIG && export GIT_CONFIG_NOSYSTEM=1; GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/repo-server} FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_GNUPGHOME=${ARGOCD_GNUPGHOME:-/tmp/argocd-local/gpg/keys} ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} ARGOCD_GPG_DATA_PATH=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source} ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-repo-server ARGOCD_GPG_ENABLED=${ARGOCD_GPG_ENABLED:-false} $COMMAND --loglevel debug --port ${ARGOCD_E2E_REPOSERVER_PORT:-8081} --redis localhost:${ARGOCD_E2E_REDIS_PORT:-6379} --otlp-address=${ARGOCD_OTLP_ADDRESS}"
 cmp-server: [ "$ARGOCD_E2E_TEST" = 'true' ] && exit 0 || [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "FORCE_LOG_COLORS=1 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_BINARY_NAME=argocd-cmp-server ARGOCD_PLUGINSOCKFILEPATH=${ARGOCD_PLUGINSOCKFILEPATH:-./test/cmp} $COMMAND --config-dir-path ./test/cmp --loglevel debug --otlp-address=${ARGOCD_OTLP_ADDRESS}"
 commit-server: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/commit-server} FORCE_LOG_COLORS=1 ARGOCD_BINARY_NAME=argocd-commit-server $COMMAND --loglevel debug --port ${ARGOCD_E2E_COMMITSERVER_PORT:-8086}"
-ui: sh -c 'cd ui && ${ARGOCD_E2E_YARN_CMD:-yarn} start'
+ui: sh -c 'cd ui && ${ARGOCD_E2E_PNPM_CMD:-pnpm} start'
 git-server: test/fixture/testrepos/start-git.sh
 helm-registry: test/fixture/testrepos/start-helm-registry.sh
 oci-registry: test/fixture/testrepos/start-authenticated-helm-registry.sh
 dev-mounter: [ "$ARGOCD_E2E_TEST" != "true" ] && go run hack/dev-mounter/main.go --configmap argocd-ssh-known-hosts-cm=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} --configmap argocd-tls-certs-cm=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} --configmap argocd-gpg-keys-cm=${ARGOCD_GPG_DATA_PATH:-/tmp/argocd-local/gpg/source}
-applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
+applicationset-controller: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/applicationset-controller} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_SSH_DATA_PATH=${ARGOCD_SSH_DATA_PATH:-/tmp/argocd-local/ssh} ARGOCD_BINARY_NAME=argocd-applicationset-controller ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS=${ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_PROGRESSIVE_SYNCS:-true} $COMMAND --loglevel debug --metrics-addr localhost:12345 --probe-addr localhost:12346 --argocd-repo-server localhost:${ARGOCD_E2E_REPOSERVER_PORT:-8081}"
 notification: [ "$BIN_MODE" = 'true' ] && COMMAND=./dist/argocd || COMMAND='go run ./cmd/main.go' && sh -c "GOCOVERDIR=${ARGOCD_COVERAGE_DIR:-/tmp/coverage/notification} FORCE_LOG_COLORS=4 ARGOCD_FAKE_IN_CLUSTER=true ARGOCD_TLS_DATA_PATH=${ARGOCD_TLS_DATA_PATH:-/tmp/argocd-local/tls} ARGOCD_BINARY_NAME=argocd-notifications $COMMAND --loglevel debug --application-namespaces=${ARGOCD_APPLICATION_NAMESPACES:-''} --self-service-notification-enabled=${ARGOCD_NOTIFICATION_CONTROLLER_SELF_SERVICE_NOTIFICATION_ENABLED:-'false'}"
@@ -19,7 +19,7 @@
 ## What is Argo CD?

-Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
+Argo CD is a declarative GitOps continuous delivery tool for Kubernetes.

 

@@ -45,7 +45,7 @@ Check live demo at https://cd.apps.argoproj.io/.
 You can reach the Argo CD community and developers via the following channels:

-* Q & A : [Github Discussions](https://github.com/argoproj/argo-cd/discussions)
+* Q & A : [GitHub Discussions](https://github.com/argoproj/argo-cd/discussions)
 * Chat : [The #argo-cd Slack channel](https://argoproj.github.io/community/join-slack)
 * Contributors Office Hours: [Every Thursday](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8)
 * User Community meeting: [First Wednesday of the month](https://calendar.google.com/calendar/u/0/embed?src=argoproj@gmail.com) | [Agenda](https://docs.google.com/document/d/1ttgw98MO45Dq7ZUHpIiOIEfbyeitKHNfMjbY5dLLMKQ)
@@ -3,9 +3,9 @@ header:
   expiration-date: '2024-10-31T00:00:00.000Z' # One year from initial release.
   last-updated: '2023-10-27'
   last-reviewed: '2023-10-27'
-  commit-hash: 814db444c36503851dc3d45cf9c44394821ca1a4
+  commit-hash: d91a2ab3bf1b1143fb273fa06f54073fc78f41f1
   project-url: https://github.com/argoproj/argo-cd
-  project-release: v3.4.0
+  project-release: v3.5.0
   changelog: https://github.com/argoproj/argo-cd/releases
   license: https://github.com/argoproj/argo-cd/blob/master/LICENSE
 project-lifecycle:
SECURITY.md (19 changes)
@@ -80,24 +80,7 @@ We will publish security advisories using the
 feature to keep our community well-informed, and will credit you for your
 findings (unless you prefer to stay anonymous, of course).

-There are two ways to report a vulnerability to the Argo CD team:
-
-* By opening a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new
-* By e-mail to the following address: cncf-argo-security@lists.cncf.io
-
-## Internet Bug Bounty collaboration
-
-We're happy to announce that the Argo project is collaborating with the great
-folks over at
-[Hacker One](https://hackerone.com/) and their
-[Internet Bug Bounty program](https://hackerone.com/ibb)
-to reward the awesome people who find security vulnerabilities in the four
-main Argo projects (CD, Events, Rollouts and Workflows) and then work with
-us to fix and disclose them in a responsible manner.
-
-If you report a vulnerability to us as outlined in this security policy, we
-will work together with you to find out whether your finding is eligible for
-claiming a bounty, and also on how to claim it.
+To report a vulnerability to the Argo CD team, open a draft GitHub security advisory: https://github.com/argoproj/argo-cd/security/advisories/new

 ## Securing your Argo CD Instance

Tiltfile (8 changes)
@@ -257,11 +257,11 @@ k8s_resource(
 # ui dependencies
 local_resource(
     'node-modules',
-    'yarn',
+    'pnpm install',
     dir='ui',
     deps = [
         'ui/package.json',
-        'ui/yarn.lock',
+        'ui/pnpm-lock.yaml',
     ],
     allow_parallel=True,
 )

@@ -271,11 +271,11 @@ docker_build(
     'argocd-ui',
     context='.',
     dockerfile='Dockerfile.ui.tilt',
-    entrypoint=['sh', '-c', 'cd /app/ui && yarn start'],
+    entrypoint=['sh', '-c', 'cd /app/ui && pnpm start'],
     only=['ui'],
     live_update=[
         sync('ui', '/app/ui'),
-        run('sh -c "cd /app/ui && yarn install"', trigger=['/app/ui/package.json', '/app/ui/yarn.lock']),
+        run('sh -c "cd /app/ui && pnpm install --frozen-lockfile"', trigger=['/app/ui/package.json', '/app/ui/pnpm-lock.yaml']),
     ],
 )

USERS.md (5 changes)
@@ -65,6 +65,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Candis](https://www.candis.io)
 1. [Capital One](https://www.capitalone.com)
 1. [Capptain LTD](https://capptain.co/)
 1. [Car & Classic](https://www.carandclassic.com)
 1. [CARFAX Europe](https://www.carfax.eu)
 1. [CARFAX](https://www.carfax.com)
 1. [Carrefour Group](https://www.carrefour.com)

@@ -76,6 +77,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Chime](https://www.chime.com)
 1. [Chronicle Labs](https://chroniclelabs.org)
 1. [C.H.Robinson](https://www.chrobinson.com)
 1. [Circle](https://circle.com/)
 1. [Cisco ET&I](https://eti.cisco.com/)
 1. [Close](https://www.close.com/)
 1. [Cloud Posse](https://www.cloudposse.com/)

@@ -240,6 +242,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Mission Lane](https://missionlane.com)
 1. [mixi Group](https://mixi.co.jp/)
 1. [Moengage](https://www.moengage.com/)
 1. [Mollie](https://www.mollie.com/)
 1. [Money Forward](https://corp.moneyforward.com/en/)
 1. [MongoDB](https://www.mongodb.com/)
 1. [MOO Print](https://www.moo.com/)

@@ -296,6 +299,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Pismo](https://pismo.io/)
 1. [PITS Globale Datenrettungsdienste](https://www.pitsdatenrettung.de/)
 1. [Platform9 Systems](https://platform9.com/)
 1. [Playground Tech](https://playgroundgroup.io)
 1. [Polarpoint.io](https://polarpoint.io)
 1. [Pollinate](https://www.pollinate.global)
 1. [PostFinance](https://github.com/postfinance)

@@ -380,6 +384,7 @@ Currently, the following organizations are **officially** using Argo CD:
 1. [Tailor Brands](https://www.tailorbrands.com)
 1. [Tamkeen Technologies](https://tamkeentech.sa/)
 1. [TBC Bank](https://tbcbank.ge/)
 1. [Techcom Securities](https://www.tcbs.com.vn/)
 1. [Techcombank](https://www.techcombank.com.vn/trang-chu)
 1. [Technacy](https://www.technacy.it/)
 1. [Telavita](https://www.telavita.com.br/)
VERSION (2 changes)
@@ -1 +1 @@
-3.4.0
+3.5.0

@@ -24,11 +24,13 @@ import (
 	"sort"
 	"strconv"
 	"strings"
 	"sync"
 	"time"

 	"github.com/google/go-cmp/cmp"
 	"github.com/google/go-cmp/cmp/cmpopts"
 	log "github.com/sirupsen/logrus"
 	"golang.org/x/sync/errgroup"
 	corev1 "k8s.io/api/core/v1"
 	apierrors "k8s.io/apimachinery/pkg/api/errors"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -74,6 +76,9 @@ const (
 	ReconcileRequeueOnValidationError = time.Minute * 3
 	ReverseDeletionOrder              = "Reverse"
 	AllAtOnceDeletionOrder            = "AllAtOnce"
+	revisionAndSpecChangedMsg         = "Application has pending changes (revision and spec differ), setting status to Waiting"
+	revisionChangedMsg                = "Application has pending changes, setting status to Waiting"
+	specChangedMsg                    = "Application has pending changes (spec differs), setting status to Waiting"
 )

 var defaultPreservedFinalizers = []string{
@@ -103,15 +108,16 @@ type ApplicationSetReconciler struct {
 	Policy               argov1alpha1.ApplicationsSyncPolicy
 	EnablePolicyOverride bool
 	utils.Renderer
-	ArgoCDNamespace            string
-	ApplicationSetNamespaces   []string
-	EnableProgressiveSyncs     bool
-	SCMRootCAPath              string
-	GlobalPreservedAnnotations []string
-	GlobalPreservedLabels      []string
-	Metrics                    *metrics.ApplicationsetMetrics
-	MaxResourcesStatusCount    int
-	ClusterInformer            *settings.ClusterInformer
+	ArgoCDNamespace              string
+	ApplicationSetNamespaces     []string
+	EnableProgressiveSyncs       bool
+	SCMRootCAPath                string
+	GlobalPreservedAnnotations   []string
+	GlobalPreservedLabels        []string
+	Metrics                      *metrics.ApplicationsetMetrics
+	MaxResourcesStatusCount      int
+	ClusterInformer              *settings.ClusterInformer
+	ConcurrentApplicationUpdates int
 }

 // +kubebuilder:rbac:groups=argoproj.io,resources=applicationsets,verbs=get;list;watch;create;update;patch;delete
@@ -688,108 +694,133 @@ func (r *ApplicationSetReconciler) SetupWithManager(mgr ctrl.Manager, enableProg
// - For existing application, it will call update
// The function also adds owner reference to all applications, and uses it to delete them.
func (r *ApplicationSetReconciler) createOrUpdateInCluster(ctx context.Context, logCtx *log.Entry, applicationSet argov1alpha1.ApplicationSet, desiredApplications []argov1alpha1.Application) error {
	var firstError error
	// Creates or updates the application in appList
	for _, generatedApp := range desiredApplications {
		appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))
	// Build the diff config once per reconcile.
	// Diff config is per applicationset, so generate it once for all applications
	diffConfig, err := utils.BuildIgnoreDiffConfig(applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{})
	if err != nil {
		return fmt.Errorf("failed to build ignore diff config: %w", err)
	}

	g, ctx := errgroup.WithContext(ctx)
	concurrency := r.concurrency()
	g.SetLimit(concurrency)

	var appErrorsMu sync.Mutex
	appErrors := map[string]error{}

	for _, generatedApp := range desiredApplications {
		// Normalize to avoid fighting with the application controller.
		generatedApp.Spec = *argoutil.NormalizeApplicationSpec(&generatedApp.Spec)
		g.Go(func() error {
			appLog := logCtx.WithFields(applog.GetAppLogFields(&generatedApp))

			found := &argov1alpha1.Application{
				ObjectMeta: metav1.ObjectMeta{
					Name:      generatedApp.Name,
					Namespace: generatedApp.Namespace,
				},
				TypeMeta: metav1.TypeMeta{
					Kind:       application.ApplicationKind,
					APIVersion: "argoproj.io/v1alpha1",
				},
			}

			action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, applicationSet.Spec.IgnoreApplicationDifferences, normalizers.IgnoreNormalizerOpts{}, found, func() error {
				// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
				found.Spec = generatedApp.Spec

				// allow setting the Operation field to trigger a sync operation on an Application
				if generatedApp.Operation != nil {
					found.Operation = generatedApp.Operation
			found := &argov1alpha1.Application{
				ObjectMeta: metav1.ObjectMeta{
					Name:      generatedApp.Name,
					Namespace: generatedApp.Namespace,
				},
				TypeMeta: metav1.TypeMeta{
					Kind:       application.ApplicationKind,
					APIVersion: "argoproj.io/v1alpha1",
				},
			}

			preservedAnnotations := make([]string, 0)
			preservedLabels := make([]string, 0)
			action, err := utils.CreateOrUpdate(ctx, appLog, r.Client, diffConfig, found, func() error {
				// Copy only the Application/ObjectMeta fields that are significant, from the generatedApp
				found.Spec = generatedApp.Spec

				if applicationSet.Spec.PreservedFields != nil {
					preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
					preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
				}

				if len(r.GlobalPreservedAnnotations) > 0 {
					preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
				}

				if len(r.GlobalPreservedLabels) > 0 {
					preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
				}

				// Preserve specially treated argo cd annotations:
				// * https://github.com/argoproj/applicationset/issues/180
				// * https://github.com/argoproj/argo-cd/issues/10500
				preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)

				for _, key := range preservedAnnotations {
					if state, exists := found.Annotations[key]; exists {
						if generatedApp.Annotations == nil {
							generatedApp.Annotations = map[string]string{}
						}
						generatedApp.Annotations[key] = state
				// allow setting the Operation field to trigger a sync operation on an Application
				if generatedApp.Operation != nil {
					found.Operation = generatedApp.Operation
					}
				}

				for _, key := range preservedLabels {
					if state, exists := found.Labels[key]; exists {
						if generatedApp.Labels == nil {
							generatedApp.Labels = map[string]string{}
						}
						generatedApp.Labels[key] = state
				preservedAnnotations := make([]string, 0)
				preservedLabels := make([]string, 0)

				if applicationSet.Spec.PreservedFields != nil {
					preservedAnnotations = append(preservedAnnotations, applicationSet.Spec.PreservedFields.Annotations...)
					preservedLabels = append(preservedLabels, applicationSet.Spec.PreservedFields.Labels...)
					}
				}

				// Preserve deleting finalizers and avoid diff conflicts
				for _, finalizer := range defaultPreservedFinalizers {
					for _, f := range found.Finalizers {
						// For finalizers, use prefix matching in case it contains "/" stages
						if strings.HasPrefix(f, finalizer) {
							generatedApp.Finalizers = append(generatedApp.Finalizers, f)
				if len(r.GlobalPreservedAnnotations) > 0 {
					preservedAnnotations = append(preservedAnnotations, r.GlobalPreservedAnnotations...)
				}

				if len(r.GlobalPreservedLabels) > 0 {
					preservedLabels = append(preservedLabels, r.GlobalPreservedLabels...)
				}

				// Preserve specially treated argo cd annotations:
				// * https://github.com/argoproj/applicationset/issues/180
				// * https://github.com/argoproj/argo-cd/issues/10500
				preservedAnnotations = append(preservedAnnotations, defaultPreservedAnnotations...)

				for _, key := range preservedAnnotations {
					if state, exists := found.Annotations[key]; exists {
						if generatedApp.Annotations == nil {
							generatedApp.Annotations = map[string]string{}
						}
						generatedApp.Annotations[key] = state
					}
				}

				for _, key := range preservedLabels {
					if state, exists := found.Labels[key]; exists {
						if generatedApp.Labels == nil {
							generatedApp.Labels = map[string]string{}
						}
						generatedApp.Labels[key] = state
					}
				}

				// Preserve deleting finalizers and avoid diff conflicts
				for _, finalizer := range defaultPreservedFinalizers {
					for _, f := range found.Finalizers {
						// For finalizers, use prefix matching in case it contains "/" stages
						if strings.HasPrefix(f, finalizer) {
							generatedApp.Finalizers = append(generatedApp.Finalizers, f)
						}
					}
				}

				found.Annotations = generatedApp.Annotations
				found.Labels = generatedApp.Labels
				found.Finalizers = generatedApp.Finalizers

				return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
			})
			if err != nil {
				appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
				// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
				if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
					return err
				}
				// For backwards compatibility with sequential behavior: continue processing other applications
				// but record the error keyed by app name so we can deterministically return the error from
				// the lexicographically first failing app, regardless of goroutine scheduling order.
				appErrorsMu.Lock()
				appErrors[generatedApp.Name] = err
				appErrorsMu.Unlock()
				return nil
			}

			found.Annotations = generatedApp.Annotations
			found.Labels = generatedApp.Labels
			found.Finalizers = generatedApp.Finalizers

			return controllerutil.SetControllerReference(&applicationSet, found, r.Scheme)
			if action != controllerutil.OperationResultNone {
				// Don't pollute etcd with "unchanged Application" events
				r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
				appLog.Logf(log.InfoLevel, "%s Application", action)
			} else {
				// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
				// Or enable debug logging
				appLog.Logf(log.DebugLevel, "%s Application", action)
			}
			return nil
		})
		if err != nil {
			appLog.WithError(err).WithField("action", action).Errorf("failed to %s Application", action)
			if firstError == nil {
				firstError = err
			}
			continue
		}

		if action != controllerutil.OperationResultNone {
			// Don't pollute etcd with "unchanged Application" events
			r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, fmt.Sprint(action), "%s Application %q", action, generatedApp.Name)
			appLog.Logf(log.InfoLevel, "%s Application", action)
		} else {
			// "unchanged Application" can be inferred by Reconcile Complete with no action being listed
			// Or enable debug logging
			appLog.Logf(log.DebugLevel, "%s Application", action)
		}
	}
	return firstError

	if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return err
	}
	return firstAppError(appErrors)
}

// createInCluster will filter from the desiredApplications only the application that needs to be created
@@ -849,36 +880,84 @@ func (r *ApplicationSetReconciler) deleteInCluster(ctx context.Context, logCtx *
		m[app.Name] = true
	}

	// Delete apps that are not in m[string]bool
	var firstError error
	for _, app := range current {
		logCtx = logCtx.WithFields(applog.GetAppLogFields(&app))
		_, exists := m[app.Name]
	g, ctx := errgroup.WithContext(ctx)
	concurrency := r.concurrency()
	g.SetLimit(concurrency)

		if !exists {
	var appErrorsMu sync.Mutex
	appErrors := map[string]error{}

	// Delete apps that are not in m[string]bool
	for _, app := range current {
		_, exists := m[app.Name]
		if exists {
			continue
		}
		appLogCtx := logCtx.WithFields(applog.GetAppLogFields(&app))
		g.Go(func() error {
			// Removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
			err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, logCtx)
			err := r.removeFinalizerOnInvalidDestination(ctx, applicationSet, &app, clusterList, appLogCtx)
			if err != nil {
				logCtx.WithError(err).Error("failed to update Application")
				if firstError != nil {
					firstError = err
				appLogCtx.WithError(err).Error("failed to update Application")
				// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
				if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
					return err
				}
				continue
				// For backwards compatibility with sequential behavior: continue processing other applications
				// but record the error keyed by app name so we can deterministically return the error from
				// the lexicographically first failing app, regardless of goroutine scheduling order.
				appErrorsMu.Lock()
				appErrors[app.Name] = err
				appErrorsMu.Unlock()
				return nil
			}

			err = r.Delete(ctx, &app)
			if err != nil {
				logCtx.WithError(err).Error("failed to delete Application")
				if firstError != nil {
					firstError = err
				appLogCtx.WithError(err).Error("failed to delete Application")
				// If the context was canceled or its deadline exceeded, return the error so it propagates through g.Wait().
				if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
					return err
				}
				continue
				appErrorsMu.Lock()
				appErrors[app.Name] = err
				appErrorsMu.Unlock()
				return nil
			}
			r.Recorder.Eventf(&applicationSet, corev1.EventTypeNormal, "Deleted", "Deleted Application %q", app.Name)
			logCtx.Log(log.InfoLevel, "Deleted application")
		}
			appLogCtx.Log(log.InfoLevel, "Deleted application")
			return nil
		})
	}
	return firstError

	if err := g.Wait(); errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return err
	}
	return firstAppError(appErrors)
}

// concurrency returns the configured number of concurrent application updates, defaulting to 1.
func (r *ApplicationSetReconciler) concurrency() int {
	if r.ConcurrentApplicationUpdates <= 0 {
		return 1
	}
	return r.ConcurrentApplicationUpdates
}

// firstAppError returns the error associated with the lexicographically smallest application name
// in the provided map. This gives a deterministic result when multiple goroutines may have
// recorded errors concurrently, matching the behavior of the original sequential loop where the
// first application in iteration order would determine the returned error.
func firstAppError(appErrors map[string]error) error {
	if len(appErrors) == 0 {
		return nil
	}
	names := make([]string, 0, len(appErrors))
	for name := range appErrors {
		names = append(names, name)
	}
	sort.Strings(names)
	return appErrors[names[0]]
}

// removeFinalizerOnInvalidDestination removes the Argo CD resources finalizer if the application contains an invalid target (eg missing cluster)
@@ -967,7 +1046,7 @@ func (r *ApplicationSetReconciler) removeOwnerReferencesOnDeleteAppSet(ctx conte
 func (r *ApplicationSetReconciler) performProgressiveSyncs(ctx context.Context, logCtx *log.Entry, appset argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application) (map[string]bool, error) {
 	appDependencyList, appStepMap := r.buildAppDependencyList(logCtx, appset, desiredApplications)

-	_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, appStepMap)
+	_, err := r.updateApplicationSetApplicationStatus(ctx, logCtx, &appset, applications, desiredApplications, appStepMap)
 	if err != nil {
 		return nil, fmt.Errorf("failed to update applicationset app status: %w", err)
 	}
@ -1144,10 +1223,16 @@ func getAppStep(appName string, appStepMap map[string]int) int {
|
|||
}
|
||||
|
||||
// check the status of each Application's status and promote Applications to the next status if needed
|
||||
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
|
||||
func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx context.Context, logCtx *log.Entry, applicationSet *argov1alpha1.ApplicationSet, applications []argov1alpha1.Application, desiredApplications []argov1alpha1.Application, appStepMap map[string]int) ([]argov1alpha1.ApplicationSetApplicationStatus, error) {
|
||||
now := metav1.Now()
|
||||
appStatuses := make([]argov1alpha1.ApplicationSetApplicationStatus, 0, len(applications))
|
||||
|
||||
// Build a map of desired applications for quick lookup
|
||||
desiredAppsMap := make(map[string]*argov1alpha1.Application)
|
||||
for i := range desiredApplications {
|
||||
desiredAppsMap[desiredApplications[i].Name] = &desiredApplications[i]
|
||||
}
|
||||
|
||||
for _, app := range applications {
|
||||
appHealthStatus := app.Status.Health.Status
|
||||
appSyncStatus := app.Status.Sync.Status
|
||||
|
|
@ -1182,10 +1267,27 @@ func (r *ApplicationSetReconciler) updateApplicationSetApplicationStatus(ctx con
|
|||
newAppStatus := currentAppStatus.DeepCopy()
|
||||
newAppStatus.Step = strconv.Itoa(getAppStep(newAppStatus.Application, appStepMap))
|
||||
|
||||
if !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions()) {
|
||||
// A new version is available in the application and we need to re-sync the application
|
||||
revisionsChanged := !reflect.DeepEqual(currentAppStatus.TargetRevisions, app.Status.GetRevisions())
|
||||
|
||||
// Check if the desired Application spec differs from the current Application spec
|
||||
specChanged := false
|
||||
if desiredApp, ok := desiredAppsMap[app.Name]; ok {
|
||||
// Compare the desired spec with the current spec to detect non-Git changes
|
||||
// This will catch changes to generator parameters like image tags, helm values, etc.
|
||||
specChanged = !cmp.Equal(desiredApp.Spec, app.Spec, cmpopts.EquateEmpty(), cmpopts.EquateComparable(argov1alpha1.ApplicationDestination{}))
|
||||
}
|
||||
|
||||
if revisionsChanged || specChanged {
|
||||
newAppStatus.TargetRevisions = app.Status.GetRevisions()
|
||||
newAppStatus.Message = "Application has pending changes, setting status to Waiting"
|
||||
|
||||
switch {
|
||||
case revisionsChanged && specChanged:
|
||||
newAppStatus.Message = revisionAndSpecChangedMsg
|
||||
case revisionsChanged:
|
||||
newAppStatus.Message = revisionChangedMsg
|
||||
default:
|
||||
newAppStatus.Message = specChangedMsg
|
||||
}
|
||||
newAppStatus.Status = argov1alpha1.ProgressiveSyncWaiting
|
||||
newAppStatus.LastTransitionTime = &now
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -25,6 +25,7 @@ import (
 	ctrl "sigs.k8s.io/controller-runtime"
 	crtclient "sigs.k8s.io/controller-runtime/pkg/client"
 	"sigs.k8s.io/controller-runtime/pkg/client/fake"
+	"sigs.k8s.io/controller-runtime/pkg/client/interceptor"
 	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
 	"sigs.k8s.io/controller-runtime/pkg/event"

@@ -1077,6 +1078,70 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
 				},
 			},
 		},
+		{
+			name: "Ensure that unnormalized live spec does not cause a spurious patch",
+			appSet: v1alpha1.ApplicationSet{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      "name",
+					Namespace: "namespace",
+				},
+				Spec: v1alpha1.ApplicationSetSpec{
+					Template: v1alpha1.ApplicationSetTemplate{
+						Spec: v1alpha1.ApplicationSpec{
+							Project: "project",
+						},
+					},
+				},
+			},
+			existingApps: []v1alpha1.Application{
+				{
+					TypeMeta: metav1.TypeMeta{
+						Kind:       application.ApplicationKind,
+						APIVersion: "argoproj.io/v1alpha1",
+					},
+					ObjectMeta: metav1.ObjectMeta{
+						Name:            "app1",
+						Namespace:       "namespace",
+						ResourceVersion: "2",
+					},
+					Spec: v1alpha1.ApplicationSpec{
+						Project: "project",
+						// Without normalizing the live object, the equality check
+						// sees &SyncPolicy{} vs nil and issues an unnecessary patch.
+						SyncPolicy: &v1alpha1.SyncPolicy{},
+					},
+				},
+			},
+			desiredApps: []v1alpha1.Application{
+				{
+					ObjectMeta: metav1.ObjectMeta{
+						Name:      "app1",
+						Namespace: "namespace",
+					},
+					Spec: v1alpha1.ApplicationSpec{
+						Project:    "project",
+						SyncPolicy: nil,
+					},
+				},
+			},
+			expected: []v1alpha1.Application{
+				{
+					TypeMeta: metav1.TypeMeta{
+						Kind:       application.ApplicationKind,
+						APIVersion: "argoproj.io/v1alpha1",
+					},
+					ObjectMeta: metav1.ObjectMeta{
+						Name:            "app1",
+						Namespace:       "namespace",
+						ResourceVersion: "2",
+					},
+					Spec: v1alpha1.ApplicationSpec{
+						Project:    "project",
+						SyncPolicy: &v1alpha1.SyncPolicy{},
+					},
+				},
+			},
+		},
 		{
 			name: "Ensure that argocd pre-delete and post-delete finalizers are preserved from an existing app",
 			appSet: v1alpha1.ApplicationSet{

@@ -1186,6 +1251,374 @@ func TestCreateOrUpdateInCluster(t *testing.T) {
 	}
 }

+func TestCreateOrUpdateInCluster_Concurrent(t *testing.T) {
+	scheme := runtime.NewScheme()
+	err := v1alpha1.AddToScheme(scheme)
+	require.NoError(t, err)
+
+	appSet := v1alpha1.ApplicationSet{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "name",
+			Namespace: "namespace",
+		},
+	}
+
+	t.Run("all apps are created correctly with concurrency > 1", func(t *testing.T) {
+		desiredApps := make([]v1alpha1.Application, 5)
+		for i := range desiredApps {
+			desiredApps[i] = v1alpha1.Application{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      fmt.Sprintf("app%d", i),
+					Namespace: "namespace",
+				},
+				Spec: v1alpha1.ApplicationSpec{Project: "project"},
+			}
+		}
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(&appSet).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:                       fakeClient,
+			Scheme:                       scheme,
+			Recorder:                     record.NewFakeRecorder(10),
+			Metrics:                      metrics,
+			ConcurrentApplicationUpdates: 5,
+		}
+
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
+		require.NoError(t, err)
+
+		for _, desired := range desiredApps {
+			got := &v1alpha1.Application{}
+			require.NoError(t, fakeClient.Get(t.Context(), crtclient.ObjectKey{Namespace: desired.Namespace, Name: desired.Name}, got))
+			assert.Equal(t, desired.Spec.Project, got.Spec.Project)
+		}
+	})
+
+	t.Run("non-context errors from concurrent goroutines are collected and one is returned", func(t *testing.T) {
+		existingApps := make([]v1alpha1.Application, 5)
+		initObjs := []crtclient.Object{&appSet}
+		for i := range existingApps {
+			existingApps[i] = v1alpha1.Application{
+				TypeMeta: metav1.TypeMeta{
+					Kind:       application.ApplicationKind,
+					APIVersion: "argoproj.io/v1alpha1",
+				},
+				ObjectMeta: metav1.ObjectMeta{
+					Name:            fmt.Sprintf("app%d", i),
+					Namespace:       "namespace",
+					ResourceVersion: "1",
+				},
+				Spec: v1alpha1.ApplicationSpec{Project: "old"},
+			}
+			app := existingApps[i].DeepCopy()
+			require.NoError(t, controllerutil.SetControllerReference(&appSet, app, scheme))
+			initObjs = append(initObjs, app)
+		}
+
+		desiredApps := make([]v1alpha1.Application, 5)
+		for i := range desiredApps {
+			desiredApps[i] = v1alpha1.Application{
+				ObjectMeta: metav1.ObjectMeta{
+					Name:      fmt.Sprintf("app%d", i),
+					Namespace: "namespace",
+				},
+				Spec: v1alpha1.ApplicationSpec{Project: "new"},
+			}
+		}
+
+		patchErr := errors.New("some patch error")
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(initObjs...).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
+					return patchErr
+				},
+			}).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:                       fakeClient,
+			Scheme:                       scheme,
+			Recorder:                     record.NewFakeRecorder(10),
+			Metrics:                      metrics,
+			ConcurrentApplicationUpdates: 5,
+		}
+
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, desiredApps)
+		require.ErrorIs(t, err, patchErr)
+	})
+}

+func TestCreateOrUpdateInCluster_ContextCancellation(t *testing.T) {
+	scheme := runtime.NewScheme()
+	err := v1alpha1.AddToScheme(scheme)
+	require.NoError(t, err)
+
+	appSet := v1alpha1.ApplicationSet{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "name",
+			Namespace: "namespace",
+		},
+	}
+	existingApp := v1alpha1.Application{
+		TypeMeta: metav1.TypeMeta{
+			Kind:       application.ApplicationKind,
+			APIVersion: "argoproj.io/v1alpha1",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:            "app1",
+			Namespace:       "namespace",
+			ResourceVersion: "1",
+		},
+		Spec: v1alpha1.ApplicationSpec{Project: "old"},
+	}
+	desiredApp := v1alpha1.Application{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "app1",
+			Namespace: "namespace",
+		},
+		Spec: v1alpha1.ApplicationSpec{Project: "new"},
+	}
+
+	t.Run("context canceled on patch is returned directly", func(t *testing.T) {
+		initObjs := []crtclient.Object{&appSet}
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+		initObjs = append(initObjs, app)
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(initObjs...).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
+					return context.Canceled
+				},
+			}).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:   fakeClient,
+			Scheme:   scheme,
+			Recorder: record.NewFakeRecorder(10),
+			Metrics:  metrics,
+		}
+
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
+		require.ErrorIs(t, err, context.Canceled)
+	})
+
+	t.Run("context deadline exceeded on patch is returned directly", func(t *testing.T) {
+		initObjs := []crtclient.Object{&appSet}
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+		initObjs = append(initObjs, app)
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(initObjs...).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
+					return context.DeadlineExceeded
+				},
+			}).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:   fakeClient,
+			Scheme:   scheme,
+			Recorder: record.NewFakeRecorder(10),
+			Metrics:  metrics,
+		}
+
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
+		require.ErrorIs(t, err, context.DeadlineExceeded)
+	})
+
+	t.Run("non-context error is collected and returned after all goroutines finish", func(t *testing.T) {
+		initObjs := []crtclient.Object{&appSet}
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+		initObjs = append(initObjs, app)
+
+		patchErr := errors.New("some patch error")
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(initObjs...).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Patch: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ crtclient.Patch, _ ...crtclient.PatchOption) error {
+					return patchErr
+				},
+			}).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:   fakeClient,
+			Scheme:   scheme,
+			Recorder: record.NewFakeRecorder(10),
+			Metrics:  metrics,
+		}
+
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{desiredApp})
+		require.ErrorIs(t, err, patchErr)
+	})
+
+	t.Run("context canceled on create is returned directly", func(t *testing.T) {
+		initObjs := []crtclient.Object{&appSet}
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(initObjs...).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Create: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.CreateOption) error {
+					return context.Canceled
+				},
+			}).
+			Build()
+		metrics := appsetmetrics.NewFakeAppsetMetrics()
+
+		r := ApplicationSetReconciler{
+			Client:   fakeClient,
+			Scheme:   scheme,
+			Recorder: record.NewFakeRecorder(10),
+			Metrics:  metrics,
+		}
+
+		newApp := v1alpha1.Application{
+			ObjectMeta: metav1.ObjectMeta{Name: "newapp", Namespace: "namespace"},
+			Spec:       v1alpha1.ApplicationSpec{Project: "default"},
+		}
+		err = r.createOrUpdateInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{newApp})
+		require.ErrorIs(t, err, context.Canceled)
+	})
+}

+func TestDeleteInCluster_ContextCancellation(t *testing.T) {
+	scheme := runtime.NewScheme()
+	err := v1alpha1.AddToScheme(scheme)
+	require.NoError(t, err)
+	err = corev1.AddToScheme(scheme)
+	require.NoError(t, err)
+
+	appSet := v1alpha1.ApplicationSet{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "name",
+			Namespace: "namespace",
+		},
+	}
+	existingApp := v1alpha1.Application{
+		TypeMeta: metav1.TypeMeta{
+			Kind:       application.ApplicationKind,
+			APIVersion: "argoproj.io/v1alpha1",
+		},
+		ObjectMeta: metav1.ObjectMeta{
+			Name:            "delete-me",
+			Namespace:       "namespace",
+			ResourceVersion: "1",
+		},
+		Spec: v1alpha1.ApplicationSpec{Project: "project"},
+	}
+
+	makeReconciler := func(t *testing.T, fakeClient crtclient.Client) ApplicationSetReconciler {
+		t.Helper()
+		kubeclientset := kubefake.NewClientset()
+		clusterInformer, err := settings.NewClusterInformer(kubeclientset, "namespace")
+		require.NoError(t, err)
+		cancel := startAndSyncInformer(t, clusterInformer)
+		t.Cleanup(cancel)
+		return ApplicationSetReconciler{
+			Client:          fakeClient,
+			Scheme:          scheme,
+			Recorder:        record.NewFakeRecorder(10),
+			KubeClientset:   kubeclientset,
+			Metrics:         appsetmetrics.NewFakeAppsetMetrics(),
+			ClusterInformer: clusterInformer,
+		}
+	}
+
+	t.Run("context canceled on delete is returned directly", func(t *testing.T) {
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(&appSet, app).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
+					return context.Canceled
+				},
+			}).
+			Build()
+
+		r := makeReconciler(t, fakeClient)
+		err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
+		require.ErrorIs(t, err, context.Canceled)
+	})
+
+	t.Run("context deadline exceeded on delete is returned directly", func(t *testing.T) {
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(&appSet, app).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
+					return context.DeadlineExceeded
+				},
+			}).
+			Build()
+
+		r := makeReconciler(t, fakeClient)
+		err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
+		require.ErrorIs(t, err, context.DeadlineExceeded)
+	})
+
+	t.Run("non-context delete error is collected and returned", func(t *testing.T) {
+		app := existingApp.DeepCopy()
+		err = controllerutil.SetControllerReference(&appSet, app, scheme)
+		require.NoError(t, err)
+
+		deleteErr := errors.New("delete failed")
+		fakeClient := fake.NewClientBuilder().
+			WithScheme(scheme).
+			WithObjects(&appSet, app).
+			WithIndex(&v1alpha1.Application{}, ".metadata.controller", appControllerIndexer).
+			WithInterceptorFuncs(interceptor.Funcs{
+				Delete: func(_ context.Context, _ crtclient.WithWatch, _ crtclient.Object, _ ...crtclient.DeleteOption) error {
+					return deleteErr
+				},
+			}).
+			Build()
+
+		r := makeReconciler(t, fakeClient)
+		err = r.deleteInCluster(t.Context(), log.NewEntry(log.StandardLogger()), appSet, []v1alpha1.Application{})
+		require.ErrorIs(t, err, deleteErr)
+	})
+}

 func TestRemoveFinalizerOnInvalidDestination_FinalizerTypes(t *testing.T) {
 	scheme := runtime.NewScheme()
 	err := v1alpha1.AddToScheme(scheme)

@@ -4799,6 +5232,12 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
 		}
 	}

+	newAppWithSpec := func(name string, health health.HealthStatusCode, sync v1alpha1.SyncStatusCode, revision string, opState *v1alpha1.OperationState, spec v1alpha1.ApplicationSpec) v1alpha1.Application {
+		app := newApp(name, health, sync, revision, opState)
+		app.Spec = spec
+		return app
+	}
+
 	newOperationState := func(phase common.OperationPhase) *v1alpha1.OperationState {
 		finishedAt := &metav1.Time{Time: time.Now().Add(-1 * time.Second)}
 		if !phase.Completed() {

@@ -4815,6 +5254,7 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
 		name              string
 		appSet            v1alpha1.ApplicationSet
 		apps              []v1alpha1.Application
+		desiredApps       []v1alpha1.Application
 		appStepMap        map[string]int
 		expectedAppStatus []v1alpha1.ApplicationSetApplicationStatus
 	}{

@@ -4968,14 +5408,14 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
 			expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
 				{
 					Application:     "app1",
-					Message:         "Application has pending changes, setting status to Waiting",
+					Message:         revisionChangedMsg,
 					Status:          v1alpha1.ProgressiveSyncWaiting,
 					Step:            "1",
 					TargetRevisions: []string{"next"},
 				},
 				{
 					Application:     "app2-multisource",
-					Message:         "Application has pending changes, setting status to Waiting",
+					Message:         revisionChangedMsg,
 					Status:          v1alpha1.ProgressiveSyncWaiting,
 					Step:            "1",
 					TargetRevisions: []string{"next"},

@@ -5415,6 +5855,191 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
 				},
 			},
 		},
+		{
+			name: "detects spec changes when image tag changes in generator (same Git revision)",
+			appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         "",
+					Status:          v1alpha1.ProgressiveSyncHealthy,
+					Step:            "1",
+					TargetRevisions: []string{"abc123"},
+				},
+			}),
+			apps: []v1alpha1.Application{
+				newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
+					v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v1.0.0"},
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					}),
+			},
+			desiredApps: []v1alpha1.Application{
+				newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "abc123", nil, // Changed to OutOfSync
+					v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v2.0.0"}, // Different value
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					}),
+			},
+			appStepMap: map[string]int{
+				"app1": 0,
+			},
+			expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         specChangedMsg,
+					Status:          v1alpha1.ProgressiveSyncWaiting,
+					Step:            "1",
+					TargetRevisions: []string{"abc123"},
+				},
+			},
+		},
+		{
+			name: "does not detect changes when spec is identical (same Git revision)",
+			appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         "",
+					Status:          v1alpha1.ProgressiveSyncHealthy,
+					Step:            "1",
+					TargetRevisions: []string{"abc123"},
+				},
+			}),
+			apps: []v1alpha1.Application{
+				newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeSynced, "abc123", nil,
+					v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v1.0.0"},
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					}),
+			},
+			appStepMap: map[string]int{
+				"app1": 0,
+			},
+			// Desired apps have identical spec
+			desiredApps: []v1alpha1.Application{
+				{
+					ObjectMeta: metav1.ObjectMeta{
+						Name: "app1",
+					},
+					Spec: v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v1.0.0"}, // Same value
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					},
+				},
+			},
+			expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         "",
+					Status:          v1alpha1.ProgressiveSyncHealthy,
+					Step:            "1",
+					TargetRevisions: []string{"abc123"},
+				},
+			},
+		},
+		{
+			name: "detects both spec and revision changes",
+			appSet: newDefaultAppSet(2, []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         "",
+					Status:          v1alpha1.ProgressiveSyncHealthy,
+					Step:            "1",
+					TargetRevisions: []string{"abc123"}, // OLD revision in status
+				},
+			}),
+			apps: []v1alpha1.Application{
+				newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil, // NEW revision, but OutOfSync
+					v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v1.0.0"},
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					}),
+			},
+			desiredApps: []v1alpha1.Application{
+				newAppWithSpec("app1", health.HealthStatusHealthy, v1alpha1.SyncStatusCodeOutOfSync, "def456", nil,
+					v1alpha1.ApplicationSpec{
+						Source: &v1alpha1.ApplicationSource{
+							RepoURL:        "https://example.com/repo.git",
+							TargetRevision: "master",
+							Helm: &v1alpha1.ApplicationSourceHelm{
+								Parameters: []v1alpha1.HelmParameter{
+									{Name: "image.tag", Value: "v2.0.0"}, // Changed value
+								},
+							},
+						},
+						Destination: v1alpha1.ApplicationDestination{
+							Server:    "https://kubernetes.default.svc",
+							Namespace: "default",
+						},
+					}),
+			},
+			appStepMap: map[string]int{
+				"app1": 0,
+			},
+			expectedAppStatus: []v1alpha1.ApplicationSetApplicationStatus{
+				{
+					Application:     "app1",
+					Message:         revisionAndSpecChangedMsg,
+					Status:          v1alpha1.ProgressiveSyncWaiting,
+					Step:            "1",
+					TargetRevisions: []string{"def456"},
+				},
+			},
+		},
 	} {
 		t.Run(cc.name, func(t *testing.T) {
 			kubeclientset := kubefake.NewClientset([]runtime.Object{}...)

@@ -5434,7 +6059,11 @@ func TestUpdateApplicationSetApplicationStatus(t *testing.T) {
 				Metrics: metrics,
 			}

-			appStatuses, err := r.updateApplicationSetApplicationStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps, cc.appStepMap)
+			desiredApps := cc.desiredApps
+			if desiredApps == nil {
+				desiredApps = cc.apps
+			}
+			appStatuses, err := r.updateApplicationSetApplicationStatus(t.Context(), log.NewEntry(log.StandardLogger()), &cc.appSet, cc.apps, desiredApps, cc.appStepMap)

 			// opt out of testing the LastTransitionTime is accurate
 			for i := range appStatuses {

@@ -7321,6 +7950,40 @@ func TestIsRollingSyncStrategy(t *testing.T) {
 	}
 }

+func TestFirstAppError(t *testing.T) {
+	errA := errors.New("error from app-a")
+	errB := errors.New("error from app-b")
+	errC := errors.New("error from app-c")
+
+	t.Run("returns nil for empty map", func(t *testing.T) {
+		assert.NoError(t, firstAppError(map[string]error{}))
+	})
+
+	t.Run("returns the single error", func(t *testing.T) {
+		assert.ErrorIs(t, firstAppError(map[string]error{"app-a": errA}), errA)
+	})
+
+	t.Run("returns error from lexicographically first app name", func(t *testing.T) {
+		appErrors := map[string]error{
+			"app-c": errC,
+			"app-a": errA,
+			"app-b": errB,
+		}
+		assert.ErrorIs(t, firstAppError(appErrors), errA)
+	})
+
+	t.Run("result is stable across multiple calls with same input", func(t *testing.T) {
+		appErrors := map[string]error{
+			"app-c": errC,
+			"app-a": errA,
+			"app-b": errB,
+		}
+		for range 10 {
+			assert.ErrorIs(t, firstAppError(appErrors), errA, "firstAppError must return the same error on every call")
+		}
+	})
+}

 func TestSyncApplication(t *testing.T) {
 	tests := []struct {
 		name string

@@ -164,7 +164,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
 		if err != nil {
 			return nil, fmt.Errorf("error fetching Gitlab token: %w", err)
 		}
-		provider, err = scm_provider.NewGitlabProvider(providerConfig.Group, token, providerConfig.API, providerConfig.AllBranches, providerConfig.IncludeSubgroups, providerConfig.WillIncludeSharedProjects(), providerConfig.Insecure, g.scmRootCAPath, providerConfig.Topic, caCerts)
+		provider, err = scm_provider.NewGitlabProvider(providerConfig.Group, token, providerConfig.API, providerConfig.AllBranches, providerConfig.IncludeSubgroups, providerConfig.WillIncludeSharedProjects(), providerConfig.IncludeArchivedRepos, providerConfig.Insecure, g.scmRootCAPath, providerConfig.Topic, caCerts)
 		if err != nil {
 			return nil, fmt.Errorf("error initializing Gitlab service: %w", err)
 		}

@@ -173,7 +173,7 @@ func (g *SCMProviderGenerator) GenerateParams(appSetGenerator *argoprojiov1alpha
 		if err != nil {
 			return nil, fmt.Errorf("error fetching Gitea token: %w", err)
 		}
-		provider, err = scm_provider.NewGiteaProvider(providerConfig.Gitea.Owner, token, providerConfig.Gitea.API, providerConfig.Gitea.AllBranches, providerConfig.Gitea.Insecure)
+		provider, err = scm_provider.NewGiteaProvider(providerConfig.Gitea.Owner, token, providerConfig.Gitea.API, providerConfig.Gitea.AllBranches, providerConfig.Gitea.Insecure, providerConfig.Gitea.ExcludeArchivedRepos)
 		if err != nil {
 			return nil, fmt.Errorf("error initializing Gitea service: %w", err)
 		}

@@ -289,9 +289,9 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
 		}

 		if g.enableGitHubAPIMetrics {
-			return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, httpClient)
+			return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, github.ExcludeArchivedRepos, httpClient)
 		}
-		return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches)
+		return scm_provider.NewGithubAppProviderFor(ctx, *auth, github.Organization, github.API, github.AllBranches, github.ExcludeArchivedRepos)
 	}

 	token, err := utils.GetSecretRef(ctx, g.client, github.TokenRef, applicationSetInfo.Namespace, g.tokenRefStrictMode)

@@ -300,7 +300,7 @@ func (g *SCMProviderGenerator) githubProvider(ctx context.Context, github *argop
 	}

 	if g.enableGitHubAPIMetrics {
-		return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, httpClient)
+		return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, github.ExcludeArchivedRepos, httpClient)
 	}
-	return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches)
+	return scm_provider.NewGithubProvider(github.Organization, token, github.API, github.AllBranches, github.ExcludeArchivedRepos)
 }

@@ -12,14 +12,15 @@ import (
 )

 type GiteaProvider struct {
-	client      *gitea.Client
-	owner       string
-	allBranches bool
+	client               *gitea.Client
+	owner                string
+	allBranches          bool
+	excludeArchivedRepos bool
 }

 var _ SCMProviderService = &GiteaProvider{}

-func NewGiteaProvider(owner, token, url string, allBranches, insecure bool) (*GiteaProvider, error) {
+func NewGiteaProvider(owner, token, url string, allBranches, insecure, excludeArchivedRepos bool) (*GiteaProvider, error) {
 	if token == "" {
 		token = os.Getenv("GITEA_TOKEN")
 	}

@@ -40,9 +41,10 @@ func NewGiteaProvider(owner, token, url string, allBranches, insecure bool) (*Gi
 		return nil, fmt.Errorf("error creating a new gitea client: %w", err)
 	}
 	return &GiteaProvider{
-		client:      client,
-		owner:       owner,
-		allBranches: allBranches,
+		client:               client,
+		owner:                owner,
+		allBranches:          allBranches,
+		excludeArchivedRepos: excludeArchivedRepos,
 	}, nil
 }

@@ -114,6 +116,11 @@ func (g *GiteaProvider) ListRepos(_ context.Context, cloneProtocol string) ([]*R
 		for _, label := range giteaLabels {
 			labels = append(labels, label.Name)
 		}

+		if g.excludeArchivedRepos && repo.Archived {
+			continue
+		}
+
 		repos = append(repos, &Repository{
 			Organization: g.owner,
 			Repository:   repo.Name,
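The skip added to `ListRepos` can be exercised in isolation. The sketch below is illustrative only, assuming simplified types: `repo` and `filterRepos` are hypothetical stand-ins for the Gitea SDK repository type and the provider's repo loop, not the provider code itself.

```go
package main

import "fmt"

// repo mirrors just the fields the filter needs; the real provider
// uses the Gitea SDK's repository type, which also carries Archived.
type repo struct {
	Name     string
	Archived bool
}

// filterRepos reproduces the new guard from the diff: when
// excludeArchivedRepos is set, archived repositories are skipped
// before being turned into Repository entries.
func filterRepos(all []repo, excludeArchivedRepos bool) []string {
	var names []string
	for _, r := range all {
		if excludeArchivedRepos && r.Archived {
			continue
		}
		names = append(names, r.Name)
	}
	return names
}

func main() {
	all := []repo{{Name: "pr-test"}, {Name: "another-repo", Archived: true}}
	fmt.Println(filterRepos(all, true))  // archived repo is skipped
	fmt.Println(filterRepos(all, false)) // both repos are kept
}
```

Note the check sits before the `repos = append(...)`, so an archived repo never reaches branch expansion at all when the flag is on.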
@@ -100,17 +100,96 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
 			"mirror_interval": "",
 			"mirror_updated": "0001-01-01T00:00:00Z",
 			"repo_transfer": null
-		}]`)
+		},
+		{
+			"id": 21619,
+			"owner": {
+				"id": 31480,
+				"login": "test-argocd",
+				"full_name": "",
+				"email": "",
+				"avatar_url": "https://gitea.com/avatars/22d1b1d3f61abf95951c4a958731d848",
+				"language": "",
+				"is_admin": false,
+				"last_login": "0001-01-01T00:00:00Z",
+				"created": "2022-04-06T02:28:06+08:00",
+				"restricted": false,
+				"active": false,
+				"prohibit_login": false,
+				"location": "",
+				"website": "",
+				"description": "",
+				"visibility": "public",
+				"followers_count": 0,
+				"following_count": 0,
+				"starred_repos_count": 0,
+				"username": "test-argocd"
+			},
+			"name": "another-repo",
+			"full_name": "test-argocd/another-repo",
+			"description": "",
+			"empty": false,
+			"private": false,
+			"fork": false,
+			"template": false,
+			"parent": null,
+			"mirror": false,
+			"size": 28,
+			"language": "",
+			"languages_url": "https://gitea.com/api/v1/repos/test-argocd/another-repo/languages",
+			"html_url": "https://gitea.com/test-argocd/another-repo",
+			"ssh_url": "git@gitea.com:test-argocd/another-repo.git",
+			"clone_url": "https://gitea.com/test-argocd/another-repo.git",
+			"original_url": "",
+			"website": "",
+			"stars_count": 0,
+			"forks_count": 0,
+			"watchers_count": 1,
+			"open_issues_count": 0,
+			"open_pr_counter": 1,
+			"release_counter": 0,
+			"default_branch": "main",
+			"archived": true,
+			"created_at": "2022-04-06T02:32:09+08:00",
+			"updated_at": "2022-04-06T02:33:12+08:00",
+			"permissions": {
+				"admin": false,
+				"push": false,
+				"pull": true
+			},
+			"has_issues": true,
+			"internal_tracker": {
+				"enable_time_tracker": true,
+				"allow_only_contributors_to_track_time": true,
+				"enable_issue_dependencies": true
+			},
+			"has_wiki": true,
+			"has_pull_requests": true,
+			"has_projects": true,
+			"ignore_whitespace_conflicts": false,
+			"allow_merge_commits": true,
+			"allow_rebase": true,
+			"allow_rebase_explicit": true,
+			"allow_squash_merge": true,
+			"default_merge_style": "merge",
+			"avatar_url": "",
+			"internal": false,
+			"mirror_interval": "",
+			"mirror_updated": "0001-01-01T00:00:00Z",
+			"repo_transfer": null
+		}
+		]`)
 		if err != nil {
 			t.Fail()
 		}
-	case "/api/v1/repos/test-argocd/pr-test/branches/main":
+	case "/api/v1/repos/test-argocd/another-repo/branches/main":
 		_, err := io.WriteString(w, `{
 			"name": "main",
 			"commit": {
-				"id": "72687815ccba81ef014a96201cc2e846a68789d8",
+				"id": "1fa33898cf84e89836863e3a5e76eee45777b4b0",
 				"message": "initial commit\n",
-				"url": "https://gitea.com/test-argocd/pr-test/commit/72687815ccba81ef014a96201cc2e846a68789d8",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/1fa33898cf84e89836863e3a5e76eee45777b4b0",
 				"author": {
 					"name": "Dan Molik",
 					"email": "dan@danmolik.com",
@@ -144,13 +223,209 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
 		if err != nil {
 			t.Fail()
 		}
+	case "/api/v1/repos/test-argocd/pr-test/branches/test":
+		_, err := io.WriteString(w, `{
+			"name": "test",
+			"commit": {
+				"id": "28c3b329933f6fefd9b55225535123bbffec5a46",
+				"message": "initial commit\n",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/28c3b329933f6fefd9b55225535123bbffec5a46",
+				"author": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"committer": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"verification": {
+					"verified": false,
+					"reason": "gpg.error.no_gpg_keys_found",
+					"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
+					"signer": null,
+					"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
+				},
+				"timestamp": "2022-04-05T14:29:51-04:00",
+				"added": null,
+				"removed": null,
+				"modified": null
+			},
+			"protected": false,
+			"required_approvals": 0,
+			"enable_status_check": false,
+			"status_check_contexts": [],
+			"user_can_push": false,
+			"user_can_merge": false,
+			"effective_branch_protection_name": ""
+		}`)
+		if err != nil {
+			t.Fail()
+		}
+	case "/api/v1/repos/test-argocd/another-repo/branches/test":
+		_, err := io.WriteString(w, `{
+			"name": "test",
+			"commit": {
+				"id": "32cdcf613b259a9439ceabd4d1745d43f163ea70",
+				"message": "initial commit\n",
+				"url": "https://gitea.com/test-argocd/another-repo/commit/32cdcf613b259a9439ceabd4d1745d43f163ea70",
+				"author": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"committer": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"verification": {
+					"verified": false,
+					"reason": "gpg.error.no_gpg_keys_found",
+					"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
+					"signer": null,
+					"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
+				},
+				"timestamp": "2022-04-05T14:29:51-04:00",
+				"added": null,
+				"removed": null,
+				"modified": null
+			},
+			"protected": false,
+			"required_approvals": 0,
+			"enable_status_check": false,
+			"status_check_contexts": [],
+			"user_can_push": false,
+			"user_can_merge": false,
+			"effective_branch_protection_name": ""
+		}`)
+		if err != nil {
+			t.Fail()
+		}
+	case "/api/v1/repos/test-argocd/pr-test/branches/main":
+		_, err := io.WriteString(w, `{
+			"name": "main",
+			"commit": {
+				"id": "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+				"message": "initial commit\n",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+				"author": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"committer": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"verification": {
+					"verified": false,
+					"reason": "gpg.error.no_gpg_keys_found",
+					"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
+					"signer": null,
+					"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
+				},
+				"timestamp": "2022-04-05T14:29:51-04:00",
+				"added": null,
+				"removed": null,
+				"modified": null
+			},
+			"protected": false,
+			"required_approvals": 0,
+			"enable_status_check": false,
+			"status_check_contexts": [],
+			"user_can_push": false,
+			"user_can_merge": false,
+			"effective_branch_protection_name": ""
+		}`)
+		if err != nil {
+			t.Fail()
+		}
+	case "/api/v1/repos/test-argocd/another-repo/branches?limit=0&page=1":
+		_, err := io.WriteString(w, `[{
+			"name": "main",
+			"commit": {
+				"id": "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+				"message": "initial commit\n",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/1fa33898cf84e89836863e3a5e76eee45777b4b0",
+				"author": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"committer": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"verification": {
+					"verified": false,
+					"reason": "gpg.error.no_gpg_keys_found",
+					"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiqUACgkQlgCr7m50\nzBPSmQgAiVVEIxC42tuks4iGFNURrtYvypZAEIc+hJgt2kBpmdCrAphYPeAj+Wtr\n9KT7dDscCZIba2wx39HEXO2S7wNCXESvAzrA8rdfbXjR4L2miZ1urfBkEoqK5i/F\noblWGuAyjurX4KPa2ARROd0H4AXxt6gNAXaFPgZO+xXCyNKZfad/lkEP1AiPRknD\nvTTMbEkIzFHK9iVwZ9DORGpfF1wnLzxWmMfhYatZnBgFNnoeJNtFhCJo05rHBgqc\nqVZWXt1iF7nysBoXSzyx1ZAsmBr/Qerkuj0nonh0aPVa6NKJsdmeJyPX4zXXoi6E\ne/jpxX2UQJkpFezg3IjUpvE5FvIiYg==\n=3Af2\n-----END PGP SIGNATURE-----\n",
+					"signer": null,
+					"payload": "tree 64d47c7fc6e31dcf00654223ec4ab749dd0a464e\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183391 -0400\n\ninitial commit\n"
+				},
+				"timestamp": "2022-04-05T14:29:51-04:00",
+				"added": null,
+				"removed": null,
+				"modified": null
+			},
+			"protected": false,
+			"required_approvals": 0,
+			"enable_status_check": false,
+			"status_check_contexts": [],
+			"user_can_push": false,
+			"user_can_merge": false,
+			"effective_branch_protection_name": ""
+		},
+		{
+			"name": "test",
+			"commit": {
+				"id": "32cdcf613b259a9439ceabd4d1745d43f163ea70",
+				"message": "add an empty file\n",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/32cdcf613b259a9439ceabd4d1745d43f163ea70",
+				"author": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"committer": {
+					"name": "Dan Molik",
+					"email": "dan@danmolik.com",
+					"username": "graytshirt"
+				},
+				"verification": {
+					"verified": false,
+					"reason": "gpg.error.no_gpg_keys_found",
+					"signature": "-----BEGIN PGP SIGNATURE-----\n\niQEzBAABCAAdFiEEXYAkwEBRpXzXgHFWlgCr7m50zBMFAmJMiugACgkQlgCr7m50\nzBN+7wgAkCHD3KfX3Ffkqv2qPwqgHNYM1bA6Hmffzhv0YeD9jWCI3tp0JulP4iFZ\ncQ7jqx9xP9tCQMSFCaijLRHaE6Js1xrVtf0OKRkbpdlvkyrIM3sQhqyQgAsISrDG\nLzSqeoQQjglzeWESYh2Tjn1CgqQNKjI6LLepSwvF1pIeV4pJpJobaEbIfTgStdzM\nWEk8o0I+EZaYqK0C0vU9N0LK/LR/jnlaHsb4OUjvk+S7lRjZwBkrsg7P/QsqtCVd\nw5nkxDiCx1J58zKMnQ7ZinJEK9A5WYdnMYc6aBn7ARgZrblXPPBkkKUhEv3ZSPeW\nKv9i4GQy838xkVSTFkHNj1+a5o6zEA==\n=JiFw\n-----END PGP SIGNATURE-----\n",
+					"signer": null,
+					"payload": "tree cdddf3e1d6a8a7e6899a044d0e1bc73bf798e2f5\nparent 72687815ccba81ef014a96201cc2e846a68789d8\nauthor Dan Molik \u003cdan@danmolik.com\u003e 1649183458 -0400\ncommitter Dan Molik \u003cdan@danmolik.com\u003e 1649183458 -0400\n\nadd an empty file\n"
+				},
+				"timestamp": "2022-04-05T14:30:58-04:00",
+				"added": null,
+				"removed": null,
+				"modified": null
+			},
+			"protected": false,
+			"required_approvals": 0,
+			"enable_status_check": false,
+			"status_check_contexts": [],
+			"user_can_push": false,
+			"user_can_merge": false,
+			"effective_branch_protection_name": ""
+		}]`)
+		if err != nil {
+			t.Fail()
+		}
 	case "/api/v1/repos/test-argocd/pr-test/branches?limit=0&page=1":
 		_, err := io.WriteString(w, `[{
 			"name": "main",
 			"commit": {
-				"id": "72687815ccba81ef014a96201cc2e846a68789d8",
+				"id": "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
 				"message": "initial commit\n",
-				"url": "https://gitea.com/test-argocd/pr-test/commit/72687815ccba81ef014a96201cc2e846a68789d8",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/75f6fceff80f6aaf12b65a2cf6a89190b866625b",
 				"author": {
 					"name": "Dan Molik",
 					"email": "dan@danmolik.com",
@@ -183,9 +458,9 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
 		}, {
 			"name": "test",
 			"commit": {
-				"id": "7bbaf62d92ddfafd9cc8b340c619abaec32bc09f",
+				"id": "28c3b329933f6fefd9b55225535123bbffec5a46",
 				"message": "add an empty file\n",
-				"url": "https://gitea.com/test-argocd/pr-test/commit/7bbaf62d92ddfafd9cc8b340c619abaec32bc09f",
+				"url": "https://gitea.com/test-argocd/pr-test/commit/28c3b329933f6fefd9b55225535123bbffec5a46",
 				"author": {
 					"name": "Dan Molik",
 					"email": "dan@danmolik.com",
@@ -261,40 +536,270 @@ func giteaMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {

 func TestGiteaListRepos(t *testing.T) {
 	cases := []struct {
-		name, proto, url                        string
+		name, proto                             string
 		hasError, allBranches, includeSubgroups bool
+		excludeArchivedRepos                    bool
 		branches                                []string
+		expectedRepos                           []*Repository
 		filters                                 []v1alpha1.SCMProviderGeneratorFilter
 	}{
 		{
-			name:        "blank protocol",
-			allBranches: false,
-			url:         "git@gitea.com:test-argocd/pr-test.git",
-			branches:    []string{"main"},
+			name:                 "blank protocol",
+			allBranches:          false,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			branches: []string{"main"},
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					RepositoryId: 21618,
+					Labels:       []string{},
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					RepositoryId: 21619,
+					Labels:       []string{},
+				},
+			},
 		},
 		{
-			name:        "ssh protocol",
-			allBranches: false,
-			proto:       "ssh",
-			url:         "git@gitea.com:test-argocd/pr-test.git",
+			name:                 "ssh protocol",
+			allBranches:          false,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			proto: "ssh",
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					RepositoryId: 21618,
+					Labels:       []string{},
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					RepositoryId: 21619,
+					Labels:       []string{},
+				},
+			},
 		},
 		{
-			name:        "https protocol",
-			allBranches: false,
-			proto:       "https",
-			url:         "https://gitea.com/test-argocd/pr-test",
+			name:                 "https protocol",
+			allBranches:          false,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			proto: "https",
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "https://gitea.com/test-argocd/pr-test",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					RepositoryId: 21618,
+					Labels:       []string{},
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "https://gitea.com/test-argocd/another-repo",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					RepositoryId: 21619,
+					Labels:       []string{},
+				},
+			},
 		},
 		{
-			name:        "other protocol",
-			allBranches: false,
-			proto:       "other",
-			hasError:    true,
+			name:                 "other protocol",
+			allBranches:          false,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			proto:         "other",
+			hasError:      true,
+			expectedRepos: []*Repository{},
 		},
 		{
-			name:        "all branches",
-			allBranches: true,
-			url:         "git@gitea.com:test-argocd/pr-test.git",
-			branches:    []string{"main"},
+			name:                 "all branches including archived repos",
+			allBranches:          true,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "28c3b329933f6fefd9b55225535123bbffec5a46",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "32cdcf613b259a9439ceabd4d1745d43f163ea70",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+			},
 		},
+		{
+			name:                 "all branches",
+			allBranches:          true,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "28c3b329933f6fefd9b55225535123bbffec5a46",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "32cdcf613b259a9439ceabd4d1745d43f163ea70",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+			},
+		},
+		{
+			name:                 "all branches",
+			allBranches:          true,
+			excludeArchivedRepos: false,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			branches: []string{"main"},
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "1fa33898cf84e89836863e3a5e76eee45777b4b0",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "28c3b329933f6fefd9b55225535123bbffec5a46",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "another-repo",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/another-repo.git",
+					SHA:          "32cdcf613b259a9439ceabd4d1745d43f163ea70",
+					Labels:       []string{},
+					RepositoryId: 21619,
+				},
+			},
+		},
+		{
+			name:                 "all branches with no archived repos",
+			allBranches:          true,
+			excludeArchivedRepos: true,
+			filters:              []v1alpha1.SCMProviderGeneratorFilter{},
+
+			branches: []string{"main"},
+			expectedRepos: []*Repository{
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "main",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "75f6fceff80f6aaf12b65a2cf6a89190b866625b",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+				{
+					Organization: "test-argocd",
+					Repository:   "pr-test",
+					Branch:       "test",
+					URL:          "git@gitea.com:test-argocd/pr-test.git",
+					SHA:          "28c3b329933f6fefd9b55225535123bbffec5a46",
+					Labels:       []string{},
+					RepositoryId: 21618,
+				},
+			},
+		},
 	}
 	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -303,26 +808,19 @@ func TestGiteaListRepos(t *testing.T) {
 	defer ts.Close()
 	for _, c := range cases {
 		t.Run(c.name, func(t *testing.T) {
-			provider, _ := NewGiteaProvider("test-argocd", "", ts.URL, c.allBranches, false)
+			provider, _ := NewGiteaProvider("test-argocd", "", ts.URL, c.allBranches, false, c.excludeArchivedRepos)
 			rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)

 			if c.hasError {
 				require.Error(t, err)
 			} else {
 				require.NoError(t, err)
-				// Just check that this one project shows up. Not a great test but better thing nothing?
-				repos := []*Repository{}
-				branches := []string{}
-				for _, r := range rawRepos {
-					if r.Repository == "pr-test" {
-						repos = append(repos, r)
-						branches = append(branches, r.Branch)
-					}
-				}
+				repos = append(rawRepos, repos...)

-				assert.NotEmpty(t, repos)
-				assert.Equal(t, c.url, repos[0].URL)
-				for _, b := range c.branches {
-					assert.Contains(t, branches, b)
-				}
+				assert.Len(t, repos, len(c.expectedRepos))
+				assert.ElementsMatch(t, c.expectedRepos, repos)
 			}
 		})
 	}
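The rewritten assertions above replace spot checks on one repo with an order-independent comparison of the full expected list. That comparison style can be sketched without the testify dependency; `elementsMatch` below is a hypothetical minimal stand-in for `assert.ElementsMatch`, restricted to string slices for brevity.

```go
package main

import (
	"fmt"
	"sort"
)

// elementsMatch reports whether two string slices contain the same
// elements regardless of order — a minimal stand-in for testify's
// assert.ElementsMatch used by the rewritten test.
func elementsMatch(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	// Compare sorted copies so the originals are left untouched.
	x := append([]string(nil), a...)
	y := append([]string(nil), b...)
	sort.Strings(x)
	sort.Strings(y)
	for i := range x {
		if x[i] != y[i] {
			return false
		}
	}
	return true
}

func main() {
	got := []string{"another-repo/main", "pr-test/main"}
	want := []string{"pr-test/main", "another-repo/main"}
	fmt.Println(elementsMatch(got, want)) // true: order does not matter
}
```

Asserting on the whole list is what makes the archived-repo cases meaningful: an unexpectedly present (or absent) repository now fails the test instead of slipping past a single-repo spot check.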
@@ -333,7 +831,7 @@ func TestGiteaHasPath(t *testing.T) {
 		giteaMockHandler(t)(w, r)
 	}))
 	defer ts.Close()
-	host, _ := NewGiteaProvider("gitea", "", ts.URL, false, false)
+	host, _ := NewGiteaProvider("gitea", "", ts.URL, false, false, false)
 	repo := &Repository{
 		Organization: "gitea",
 		Repository:   "go-sdk",
@@ -12,14 +12,15 @@ import (
 )

 type GithubProvider struct {
-	client       *github.Client
-	organization string
-	allBranches  bool
+	client               *github.Client
+	organization         string
+	allBranches          bool
+	excludeArchivedRepos bool
 }

 var _ SCMProviderService = &GithubProvider{}

-func NewGithubProvider(organization string, token string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
+func NewGithubProvider(organization string, token string, url string, allBranches bool, excludeArchivedRepos bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
 	// Undocumented environment variable to set a default token, to be used in testing to dodge anonymous rate limits.
 	if token == "" {
 		token = os.Getenv("GITHUB_TOKEN")
@@ -45,7 +46,7 @@ func NewGithubProvider(organization string, token string, url string, allBranche
 			return nil, err
 		}
 	}
-	return &GithubProvider{client: client, organization: organization, allBranches: allBranches}, nil
+	return &GithubProvider{client: client, organization: organization, allBranches: allBranches, excludeArchivedRepos: excludeArchivedRepos}, nil
 }

 func (g *GithubProvider) GetBranches(ctx context.Context, repo *Repository) ([]*Repository, error) {
@@ -90,6 +91,11 @@ func (g *GithubProvider) ListRepos(ctx context.Context, cloneProtocol string) ([
 		default:
 			return nil, fmt.Errorf("unknown clone protocol for GitHub %v", cloneProtocol)
 		}

+		if g.excludeArchivedRepos && githubRepo.GetArchived() {
+			continue
+		}
+
 		repos = append(repos, &Repository{
 			Organization: githubRepo.Owner.GetLogin(),
 			Repository:   githubRepo.GetName(),
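On the GitHub side the guard reads the archived flag through the SDK's nil-safe accessor. The sketch below illustrates that accessor pattern only; `Repo` is a hypothetical type mirroring how go-github's generated `GetArchived` behaves on pointer fields, not the SDK type itself.

```go
package main

import "fmt"

// Repo mirrors go-github's convention of pointer fields paired with
// generated nil-safe accessors; illustrative only, not the SDK type.
type Repo struct {
	Archived *bool
}

// GetArchived returns the Archived field, or false when the field
// (or the receiver) is nil — the shape of the generated accessors.
func (r *Repo) GetArchived() bool {
	if r == nil || r.Archived == nil {
		return false
	}
	return *r.Archived
}

func main() {
	archived := true
	fmt.Println((&Repo{Archived: &archived}).GetArchived()) // true
	fmt.Println((&Repo{}).GetArchived())                    // false: unset field defaults to false
}
```

The nil-safe default of `false` means repositories whose API response omits the field are treated as not archived, so the new filter only ever drops repos the API explicitly marks archived.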
@@ -9,11 +9,11 @@ import (
 	appsetutils "github.com/argoproj/argo-cd/v3/applicationset/utils"
 )

-func NewGithubAppProviderFor(ctx context.Context, g github_app_auth.Authentication, organization string, url string, allBranches bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
+func NewGithubAppProviderFor(ctx context.Context, g github_app_auth.Authentication, organization string, url string, allBranches bool, excludeArchivedRepos bool, optionalHTTPClient ...*http.Client) (*GithubProvider, error) {
 	httpClient := appsetutils.GetOptionalHTTPClient(optionalHTTPClient...)
 	client, err := github_app.Client(ctx, g, url, organization, httpClient)
 	if err != nil {
 		return nil, err
 	}
-	return &GithubProvider{client: client, organization: organization, allBranches: allBranches}, nil
+	return &GithubProvider{client: client, organization: organization, allBranches: allBranches, excludeArchivedRepos: excludeArchivedRepos}, nil
 }
@@ -122,6 +122,110 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
 				"pull": true
 			},
 			"template_repository": null
 		},
+		{
+			"id": 1296270,
+			"node_id": "MDEwOlJlcGsddRvcnkxMjk2MjY5",
+			"name": "another-repo",
+			"full_name": "argoproj/another-repo",
+			"owner": {
+				"login": "argoproj",
+				"id": 1,
+				"node_id": "MDQ6VXNlcjE=",
+				"avatar_url": "https://github.com/images/error/argoproj_happy.gif",
+				"gravatar_id": "",
+				"url": "https://api.github.com/users/argoproj",
+				"html_url": "https://github.com/argoproj",
+				"followers_url": "https://api.github.com/users/argoproj/followers",
+				"following_url": "https://api.github.com/users/argoproj/following{/other_user}",
+				"gists_url": "https://api.github.com/users/argoproj/gists{/gist_id}",
+				"starred_url": "https://api.github.com/users/argoproj/starred{/owner}{/repo}",
+				"subscriptions_url": "https://api.github.com/users/argoproj/subscriptions",
+				"organizations_url": "https://api.github.com/users/argoproj/orgs",
+				"repos_url": "https://api.github.com/users/argoproj/repos",
+				"events_url": "https://api.github.com/users/argoproj/events{/privacy}",
+				"received_events_url": "https://api.github.com/users/argoproj/received_events",
+				"type": "User",
+				"site_admin": false
+			},
+			"private": false,
+			"html_url": "https://github.com/argoproj/another-repo",
+			"description": "This your first repo!",
+			"fork": false,
+			"url": "https://api.github.com/repos/argoproj/another-repo",
+			"archive_url": "https://api.github.com/repos/argoproj/another-repo/{archive_format}{/ref}",
+			"assignees_url": "https://api.github.com/repos/argoproj/another-repo/assignees{/user}",
+			"blobs_url": "https://api.github.com/repos/argoproj/another-repo/git/blobs{/sha}",
+			"branches_url": "https://api.github.com/repos/argoproj/another-repo/branches{/branch}",
+			"collaborators_url": "https://api.github.com/repos/argoproj/another-repo/collaborators{/collaborator}",
+			"comments_url": "https://api.github.com/repos/argoproj/another-repo/comments{/number}",
+			"commits_url": "https://api.github.com/repos/argoproj/another-repo/commits{/sha}",
+			"compare_url": "https://api.github.com/repos/argoproj/another-repo/compare/{base}...{head}",
+			"contents_url": "https://api.github.com/repos/argoproj/another-repo/contents/{path}",
+			"contributors_url": "https://api.github.com/repos/argoproj/another-repo/contributors",
+			"deployments_url": "https://api.github.com/repos/argoproj/another-repo/deployments",
+			"downloads_url": "https://api.github.com/repos/argoproj/another-repo/downloads",
+			"events_url": "https://api.github.com/repos/argoproj/another-repo/events",
+			"forks_url": "https://api.github.com/repos/argoproj/another-repo/forks",
+			"git_commits_url": "https://api.github.com/repos/argoproj/another-repo/git/commits{/sha}",
+			"git_refs_url": "https://api.github.com/repos/argoproj/another-repo/git/refs{/sha}",
+			"git_tags_url": "https://api.github.com/repos/argoproj/another-repo/git/tags{/sha}",
+			"git_url": "git:github.com/argoproj/another-repo.git",
+			"issue_comment_url": "https://api.github.com/repos/argoproj/another-repo/issues/comments{/number}",
+			"issue_events_url": "https://api.github.com/repos/argoproj/another-repo/issues/events{/number}",
+			"issues_url": "https://api.github.com/repos/argoproj/another-repo/issues{/number}",
+			"keys_url": "https://api.github.com/repos/argoproj/another-repo/keys{/key_id}",
+			"labels_url": "https://api.github.com/repos/argoproj/another-repo/labels{/name}",
+			"languages_url": "https://api.github.com/repos/argoproj/another-repo/languages",
+			"merges_url": "https://api.github.com/repos/argoproj/another-repo/merges",
+			"milestones_url": "https://api.github.com/repos/argoproj/another-repo/milestones{/number}",
+			"notifications_url": "https://api.github.com/repos/argoproj/another-repo/notifications{?since,all,participating}",
+			"pulls_url": "https://api.github.com/repos/argoproj/another-repo/pulls{/number}",
+			"releases_url": "https://api.github.com/repos/argoproj/another-repo/releases{/id}",
+			"ssh_url": "git@github.com:argoproj/another-repo.git",
+			"stargazers_url": "https://api.github.com/repos/argoproj/another-repo/stargazers",
+			"statuses_url": "https://api.github.com/repos/argoproj/another-repo/statuses/{sha}",
+			"subscribers_url": "https://api.github.com/repos/argoproj/another-repo/subscribers",
+			"subscription_url": "https://api.github.com/repos/argoproj/another-repo/subscription",
+			"tags_url": "https://api.github.com/repos/argoproj/another-repo/tags",
+			"teams_url": "https://api.github.com/repos/argoproj/another-repo/teams",
+			"trees_url": "https://api.github.com/repos/argoproj/another-repo/git/trees{/sha}",
|
||||
"clone_url": "https://github.com/argoproj/another-repo.git",
|
||||
"mirror_url": "git:git.example.com/argoproj/another-repo",
|
||||
"hooks_url": "https://api.github.com/repos/argoproj/another-repo/hooks",
|
||||
"svn_url": "https://svn.github.com/argoproj/another-repo",
|
||||
"homepage": "https://github.com",
|
||||
"language": null,
|
||||
"forks_count": 9,
|
||||
"stargazers_count": 80,
|
||||
"watchers_count": 80,
|
||||
"size": 108,
|
||||
"default_branch": "master",
|
||||
"open_issues_count": 0,
|
||||
"is_template": false,
|
||||
"topics": [
|
||||
"argoproj",
|
||||
"atom",
|
||||
"electron",
|
||||
"api"
|
||||
],
|
||||
"has_issues": true,
|
||||
"has_projects": true,
|
||||
"has_wiki": true,
|
||||
"has_pages": false,
|
||||
"has_downloads": true,
|
||||
"archived": true,
|
||||
"disabled": false,
|
||||
"visibility": "public",
|
||||
"pushed_at": "2011-01-26T19:06:43Z",
|
||||
"created_at": "2011-01-26T19:01:12Z",
|
||||
"updated_at": "2011-01-26T19:14:43Z",
|
||||
"permissions": {
|
||||
"admin": false,
|
||||
"push": false,
|
||||
"pull": true
|
||||
},
|
||||
"template_repository": null
|
||||
}
|
||||
]`)
|
||||
if err != nil {
|
||||
|
|
@@ -146,12 +250,55 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
			}
    },
    "protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
  },
  {
    "name": "test",
    "commit": {
      "sha": "80a6e93f16e8093e24091b03c614362df3fb9b92",
      "url": "https://api.github.com/repos/argoproj/argo-cd/commits/80a6e93f16e8093e24091b03c614362df3fb9b92"
    },
    "protected": true,
    "protection": {
      "required_status_checks": {
        "enforcement_level": "non_admins",
        "contexts": [
          "ci-test",
          "linter"
        ]
      }
    },
    "protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
  }
]
`)
		if err != nil {
			t.Fail()
		}
	case "/api/v3/repos/argoproj/another-repo/branches?per_page=100":
		_, err := io.WriteString(w, `[
  {
    "name": "main",
    "commit": {
      "sha": "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
      "url": "https://api.github.com/repos/argoproj/another-repo/commits/19b016818bc0e0a44ddeaab345838a2a6c97fa67"
    },
    "protected": true,
    "protection": {
      "required_status_checks": {
        "enforcement_level": "non_admins",
        "contexts": [
          "ci-test",
          "linter"
        ]
      }
    },
    "protection_url": "https://api.github.com/repos/argoproj/hello-world/branches/master/protection"
  }
]
`)
		if err != nil {
			t.Fail()
		}
	case "/api/v3/repos/argoproj/argo-cd/contents/pkg?ref=master":
		_, err := io.WriteString(w, `{
  "type": "file",
@@ -196,6 +343,50 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
		if err != nil {
			t.Fail()
		}
	case "/api/v3/repos/argoproj/argo-cd/branches/test":
		_, err := io.WriteString(w, `{
  "name": "test",
  "commit": {
    "sha": "80a6e93f16e8093e24091b03c614362df3fb9b92",
    "url": "https://api.github.com/repos/octocat/Hello-World/commits/80a6e93f16e8093e24091b03c614362df3fb9b92"
  },
  "protected": true,
  "protection": {
    "required_status_checks": {
      "enforcement_level": "non_admins",
      "contexts": [
        "ci-test",
        "linter"
      ]
    }
  },
  "protection_url": "https://api.github.com/repos/octocat/hello-world/branches/test/protection"
}`)
		if err != nil {
			t.Fail()
		}
	case "/api/v3/repos/argoproj/another-repo/branches/main":
		_, err := io.WriteString(w, `{
  "name": "main",
  "commit": {
    "sha": "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
    "url": "https://api.github.com/repos/octocat/Hello-World/commits/c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc"
  },
  "protected": true,
  "protection": {
    "required_status_checks": {
      "enforcement_level": "non_admins",
      "contexts": [
        "ci-test",
        "linter"
      ]
    }
  },
  "protection_url": "https://api.github.com/repos/octocat/hello-world/branches/master/protection"
}`)
		if err != nil {
			t.Fail()
		}
	default:
		w.WriteHeader(http.StatusNotFound)
	}
@@ -203,37 +394,276 @@ func githubMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
}

func TestGithubListRepos(t *testing.T) {
	idptr := func(i int64) *int64 {
		return &i
	}
	// Test cases for ListRepos
	cases := []struct {
		name, proto, url string
		name, proto string
		hasError, allBranches bool
		branches []string
		excludeArchivedRepos bool
		expectedRepos []*Repository
		filters []v1alpha1.SCMProviderGeneratorFilter
	}{
		{
			name:     "blank protocol",
			url:      "git@github.com:argoproj/argo-cd.git",
			branches: []string{"master"},
			name:                 "blank protocol",
			allBranches:          true,
			excludeArchivedRepos: false,
			expectedRepos: []*Repository{
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "master",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "test",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "80a6e93f16e8093e24091b03c614362df3fb9b92",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "another-repo",
					Branch:       "main",
					URL:          "git@github.com:argoproj/another-repo.git",
					SHA:          "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296270),
				},
			},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
		{
			name:  "ssh protocol",
			proto: "ssh",
			url:   "git@github.com:argoproj/argo-cd.git",
			name:                 "ssh protocol",
			proto:                "ssh",
			allBranches:          true,
			excludeArchivedRepos: false,
			expectedRepos: []*Repository{
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "master",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "test",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "80a6e93f16e8093e24091b03c614362df3fb9b92",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "another-repo",
					Branch:       "main",
					URL:          "git@github.com:argoproj/another-repo.git",
					SHA:          "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296270),
				},
			},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
		{
			name:  "https protocol",
			proto: "https",
			url:   "https://github.com/argoproj/argo-cd.git",
			name:                 "https protocol",
			proto:                "https",
			allBranches:          true,
			excludeArchivedRepos: false,
			expectedRepos: []*Repository{
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "master",
					URL:          "https://github.com/argoproj/argo-cd.git",
					SHA:          "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "test",
					URL:          "https://github.com/argoproj/argo-cd.git",
					SHA:          "80a6e93f16e8093e24091b03c614362df3fb9b92",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "another-repo",
					Branch:       "main",
					URL:          "https://github.com/argoproj/another-repo.git",
					SHA:          "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296270),
				},
			},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
		{
			name:     "other protocol",
			proto:    "other",
			hasError: true,
			name:                 "other protocol",
			proto:                "other",
			hasError:             true,
			excludeArchivedRepos: false,
			expectedRepos:        []*Repository{},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
		{
			name:        "all branches",
			allBranches: true,
			url:         "git@github.com:argoproj/argo-cd.git",
			branches:    []string{"master"},
			name:                 "all branches with archived repos",
			allBranches:          true,
			proto:                "ssh",
			excludeArchivedRepos: false,
			expectedRepos: []*Repository{
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "master",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "test",
					URL:          "git@github.com:argoproj/argo-cd.git",
					SHA:          "80a6e93f16e8093e24091b03c614362df3fb9b92",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "another-repo",
					Branch:       "main",
					URL:          "git@github.com:argoproj/another-repo.git",
					SHA:          "19b016818bc0e0a44ddeaab345838a2a6c97fa67",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296270),
				},
			},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
		{
			name:                 "test repo all branches without archived repos",
			allBranches:          true,
			excludeArchivedRepos: true,
			proto:                "https",
			expectedRepos: []*Repository{
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "master",
					URL:          "https://github.com/argoproj/argo-cd.git",
					SHA:          "c5b97d5ae6c19d5c5df71a34c7fbeeda2479ccbc",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
				{
					Organization: "argoproj",
					Repository:   "argo-cd",
					Branch:       "test",
					URL:          "https://github.com/argoproj/argo-cd.git",
					SHA:          "80a6e93f16e8093e24091b03c614362df3fb9b92",
					Labels: []string{
						"argoproj",
						"atom",
						"electron",
						"api",
					},
					RepositoryId: idptr(1296269),
				},
			},
			filters: []v1alpha1.SCMProviderGeneratorFilter{
				{},
			},
		},
	}
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -242,26 +672,18 @@ func TestGithubListRepos(t *testing.T) {
	defer ts.Close()
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			provider, _ := NewGithubProvider("argoproj", "", ts.URL, c.allBranches)
			provider, _ := NewGithubProvider("argoproj", "", ts.URL, c.allBranches, c.excludeArchivedRepos)
			rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)
			if c.hasError {
				require.Error(t, err)
			} else {
				require.NoError(t, err)
				// Just check that this one project shows up. Not a great test but better than nothing?
				repos := []*Repository{}
				branches := []string{}
				for _, r := range rawRepos {
					if r.Repository == "argo-cd" {
						repos = append(repos, r)
						branches = append(branches, r.Branch)
					}
				}
				repos = append(rawRepos, repos...)

				assert.NotEmpty(t, repos)
				assert.Equal(t, c.url, repos[0].URL)
				for _, b := range c.branches {
					assert.Contains(t, branches, b)
				}
				assert.Len(t, repos, len(c.expectedRepos))
				assert.ElementsMatch(t, c.expectedRepos, repos)
			}
		})
	}

@@ -280,7 +702,7 @@ func TestGithubHasPath(t *testing.T) {
		githubMockHandler(t)(w, r)
	}))
	defer ts.Close()
	host, _ := NewGithubProvider("argoproj", "", ts.URL, false)
	host, _ := NewGithubProvider("argoproj", "", ts.URL, false, false)
	repo := &Repository{
		Organization: "argoproj",
		Repository:   "argo-cd",

@@ -300,7 +722,7 @@ func TestGithubGetBranches(t *testing.T) {
		githubMockHandler(t)(w, r)
	}))
	defer ts.Close()
	host, _ := NewGithubProvider("argoproj", "", ts.URL, false)
	host, _ := NewGithubProvider("argoproj", "", ts.URL, false, false)
	repo := &Repository{
		Organization: "argoproj",
		Repository:   "argo-cd",

@@ -328,6 +750,6 @@ func TestGithubGetBranches(t *testing.T) {
		require.NoError(t, err)
	} else {
		// considering master branch to exist.
		assert.Len(t, repos, 1)
		assert.Len(t, repos, 2)
	}
}

@@ -19,12 +19,13 @@ type GitlabProvider struct {
	allBranches bool
	includeSubgroups bool
	includeSharedProjects bool
	includeArchivedRepos bool
	topic string
}

var _ SCMProviderService = &GitlabProvider{}

func NewGitlabProvider(organization string, token string, url string, allBranches, includeSubgroups, includeSharedProjects, insecure bool, scmRootCAPath, topic string, caCerts []byte) (*GitlabProvider, error) {
func NewGitlabProvider(organization string, token string, url string, allBranches, includeSubgroups, includeSharedProjects, includeArchivedRepos, insecure bool, scmRootCAPath, topic string, caCerts []byte) (*GitlabProvider, error) {
	// Undocumented environment variable to set a default token, to be used in testing to dodge anonymous rate limits.
	if token == "" {
		token = os.Getenv("GITLAB_TOKEN")

@@ -51,7 +52,15 @@ func NewGitlabProvider(organization string, token string, url string, allBranche
		}
	}

	return &GitlabProvider{client: client, organization: organization, allBranches: allBranches, includeSubgroups: includeSubgroups, includeSharedProjects: includeSharedProjects, topic: topic}, nil
	return &GitlabProvider{
		client:                client,
		organization:          organization,
		allBranches:           allBranches,
		includeSubgroups:      includeSubgroups,
		includeSharedProjects: includeSharedProjects,
		includeArchivedRepos:  includeArchivedRepos,
		topic:                 topic,
	}, nil
}

func (g *GitlabProvider) GetBranches(ctx context.Context, repo *Repository) ([]*Repository, error) {

@@ -88,6 +97,11 @@ func (g *GitlabProvider) ListRepos(_ context.Context, cloneProtocol string) ([]*
		Topic: &g.topic,
	}

	// GitLab does not include archived repos by default.
	if g.includeArchivedRepos {
		opt.Archived = gitlab.Ptr(true)
	}

	repos := []*Repository{}
	for {
		gitlabRepos, resp, err := g.client.Groups.ListGroupProjects(g.organization, opt)

@@ -3,7 +3,6 @@ package scm_provider
import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"

@@ -19,12 +18,9 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
	t.Helper()
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Println(r.RequestURI)
		switch r.RequestURI {
		case "/api/v4":
			fmt.Println("here1")
		case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=&with_shared=false":
			fmt.Println("here")
		case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=&with_shared=false", "/api/v4/groups/test-argocd-proton/projects?archived=false&include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?archived=false&include_subgroups=false&per_page=100&topic=&with_shared=false":
			_, err := io.WriteString(w, `[{
  "id": 27084533,
  "description": "",
@@ -151,8 +147,253 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
			if err != nil {
				t.Fail()
			}
		case "/api/v4/groups/test-argocd-proton/projects?archived=true&include_subgroups=false&per_page=100", "/api/v4/groups/test-argocd-proton/projects?archived=true&include_subgroups=false&per_page=100&topic=&with_shared=false":
			_, err := io.WriteString(w, `[{
  "id": 27084533,
  "description": "",
  "name": "argocd",
  "name_with_namespace": "test argocd proton / argocd",
  "path": "argocd",
  "path_with_namespace": "test-argocd-proton/argocd",
  "created_at": "2021-06-01T17:30:44.724Z",
  "default_branch": "master",
  "tag_list": [],
  "topics": [],
  "ssh_url_to_repo": "git@gitlab.com:test-argocd-proton/argocd.git",
  "http_url_to_repo": "https://gitlab.com/test-argocd-proton/argocd.git",
  "web_url": "https://gitlab.com/test-argocd-proton/argocd",
  "readme_url": null,
  "avatar_url": null,
  "forks_count": 0,
  "star_count": 0,
  "last_activity_at": "2021-06-04T08:19:51.656Z",
  "namespace": {
    "id": 12258515,
    "name": "test argocd proton",
    "path": "test-argocd-proton",
    "kind": "gro* Connection #0 to host gitlab.com left intact up ",
    "full_path ": "test - argocd - proton ",
    "parent_id ": null,
    "avatar_url ": null,
    "web_url ": "https: //gitlab.com/groups/test-argocd-proton"
  },
  "container_registry_image_prefix": "registry.gitlab.com/test-argocd-proton/argocd",
  "_links": {
    "self": "https://gitlab.com/api/v4/projects/27084533",
    "issues": "https://gitlab.com/api/v4/projects/27084533/issues",
    "merge_requests": "https://gitlab.com/api/v4/projects/27084533/merge_requests",
    "repo_branches": "https://gitlab.com/api/v4/projects/27084533/repository/branches",
    "labels": "https://gitlab.com/api/v4/projects/27084533/labels",
    "events": "https://gitlab.com/api/v4/projects/27084533/events",
    "members": "https://gitlab.com/api/v4/projects/27084533/members",
    "cluster_agents": "https://gitlab.com/api/v4/projects/27084533/cluster_agents"
  },
  "packages_enabled": true,
  "empty_repo": false,
  "archived": false,
  "visibility": "public",
  "resolve_outdated_diff_discussions": false,
  "container_expiration_policy": {
    "cadence": "1d",
    "enabled": false,
    "keep_n": 10,
    "older_than": "90d",
    "name_regex": ".*",
    "name_regex_keep": null,
    "next_run_at": "2021-06-02T17:30:44.740Z"
  },
  "issues_enabled": true,
  "merge_requests_enabled": true,
  "wiki_enabled": true,
  "jobs_enabled": true,
  "snippets_enabled": true,
  "container_registry_enabled": true,
  "service_desk_enabled": true,
  "can_create_merge_request_in": false,
  "issues_access_level": "enabled",
  "repository_access_level": "enabled",
  "merge_requests_access_level": "enabled",
  "forking_access_level": "enabled",
  "wiki_access_level": "enabled",
  "builds_access_level": "enabled",
  "snippets_access_level": "enabled",
  "pages_access_level": "enabled",
  "operations_access_level": "enabled",
  "analytics_access_level": "enabled",
  "container_registry_access_level": "enabled",
  "security_and_compliance_access_level": "private",
  "emails_disabled": null,
  "shared_runners_enabled": true,
  "lfs_enabled": true,
  "creator_id": 2378866,
  "import_status": "none",
  "open_issues_count": 0,
  "ci_default_git_depth": 50,
  "ci_forward_deployment_enabled": true,
  "ci_job_token_scope_enabled": false,
  "public_jobs": true,
  "build_timeout": 3600,
  "auto_cancel_pending_pipelines": "enabled",
  "ci_config_path": "",
  "shared_with_groups": [],
  "only_allow_merge_if_pipeline_succeeds": false,
  "allow_merge_on_skipped_pipeline": null,
  "restrict_user_defined_variables": false,
  "request_access_enabled": true,
  "only_allow_merge_if_all_discussions_are_resolved": false,
  "remove_source_branch_after_merge": true,
  "printing_merge_request_link_enabled": true,
  "merge_method": "merge",
  "squash_option": "default_off",
  "suggestion_commit_message": null,
  "merge_commit_template": null,
  "squash_commit_template": null,
  "auto_devops_enabled": false,
  "auto_devops_deploy_strategy": "continuous",
  "autoclose_referenced_issues": true,
  "keep_latest_artifact": true,
  "runner_token_expiration_interval": null,
  "approvals_before_merge": 0,
  "mirror": false,
  "external_authorization_classification_label": "",
  "marked_for_deletion_at": null,
  "marked_for_deletion_on": null,
  "requirements_enabled": true,
  "requirements_access_level": "enabled",
  "security_and_compliance_enabled": false,
  "compliance_frameworks": [],
  "issues_template": null,
  "merge_requests_template": null,
  "merge_pipelines_enabled": false,
  "merge_trains_enabled": false
},
{
  "id": 56522142,
  "description": "",
  "name": "another-repo",
  "name_with_namespace": "test argocd proton / another-repo",
  "path": "another-repo",
  "path_with_namespace": "test-argocd-proton/another-repo",
  "created_at": "2022-09-13T12:10:14.722Z",
  "default_branch": "master",
  "tag_list": [
    "test-topic"
  ],
  "topics": [
    "test-topic"
  ],
  "ssh_url_to_repo": "git@gitlab.com:test-argocd-proton/another-repo.git",
  "http_url_to_repo": "https://gitlab.com/test-argocd-proton/another-repo.git",
  "web_url": "https://gitlab.com/test-argocd-proton/another-repo",
  "readme_url": null,
  "avatar_url": null,
  "forks_count": 0,
  "star_count": 0,
  "last_activity_at": "2021-06-04T08:19:51.656Z",
  "namespace": {
    "id": 12258515,
    "name": "test argocd proton",
    "path": "test-argocd-proton",
    "kind": "gro* Connection #0 to host gitlab.com left intact up ",
    "full_path ": "test - argocd - proton ",
    "parent_id ": null,
    "avatar_url ": null,
    "web_url ": "https: //gitlab.com/groups/test-argocd-proton"
  },
  "container_registry_image_prefix": "registry.gitlab.com/test-argocd-proton/another-repo",
  "_links": {
    "self": "https://gitlab.com/api/v4/projects/56522142",
    "issues": "https://gitlab.com/api/v4/projects/56522142/issues",
    "merge_requests": "https://gitlab.com/api/v4/projects/56522142/merge_requests",
    "repo_branches": "https://gitlab.com/api/v4/projects/56522142/repository/branches",
    "labels": "https://gitlab.com/api/v4/projects/56522142/labels",
    "events": "https://gitlab.com/api/v4/projects/56522142/events",
    "members": "https://gitlab.com/api/v4/projects/56522142/members",
    "cluster_agents": "https://gitlab.com/api/v4/projects/56522142/cluster_agents"
  },
  "packages_enabled": true,
  "empty_repo": false,
  "archived": true,
  "visibility": "public",
  "resolve_outdated_diff_discussions": false,
  "container_expiration_policy": {
    "cadence": "1d",
    "enabled": false,
    "keep_n": 10,
    "older_than": "90d",
    "name_regex": ".*",
    "name_regex_keep": null,
    "next_run_at": "2021-06-02T17:30:44.740Z"
  },
  "issues_enabled": true,
  "merge_requests_enabled": true,
  "wiki_enabled": true,
  "jobs_enabled": true,
  "snippets_enabled": true,
  "container_registry_enabled": true,
  "service_desk_enabled": true,
  "can_create_merge_request_in": false,
  "issues_access_level": "enabled",
  "repository_access_level": "enabled",
  "merge_requests_access_level": "enabled",
  "forking_access_level": "enabled",
  "wiki_access_level": "enabled",
  "builds_access_level": "enabled",
  "snippets_access_level": "enabled",
  "pages_access_level": "enabled",
  "operations_access_level": "enabled",
  "analytics_access_level": "enabled",
  "container_registry_access_level": "enabled",
  "security_and_compliance_access_level": "private",
  "emails_disabled": null,
  "shared_runners_enabled": true,
  "lfs_enabled": true,
  "creator_id": 2378866,
  "import_status": "none",
  "open_issues_count": 0,
  "ci_default_git_depth": 50,
  "ci_forward_deployment_enabled": true,
  "ci_job_token_scope_enabled": false,
  "public_jobs": true,
  "build_timeout": 3600,
  "auto_cancel_pending_pipelines": "enabled",
  "ci_config_path": "",
  "shared_with_groups": [],
  "only_allow_merge_if_pipeline_succeeds": false,
  "allow_merge_on_skipped_pipeline": null,
  "restrict_user_defined_variables": false,
  "request_access_enabled": true,
  "only_allow_merge_if_all_discussions_are_resolved": false,
  "remove_source_branch_after_merge": true,
  "printing_merge_request_link_enabled": true,
  "merge_method": "merge",
  "squash_option": "default_off",
  "suggestion_commit_message": null,
  "merge_commit_template": null,
  "squash_commit_template": null,
  "auto_devops_enabled": false,
  "auto_devops_deploy_strategy": "continuous",
  "autoclose_referenced_issues": true,
  "keep_latest_artifact": true,
  "runner_token_expiration_interval": null,
  "approvals_before_merge": 0,
  "mirror": false,
  "external_authorization_classification_label": "",
  "marked_for_deletion_at": null,
  "marked_for_deletion_on": null,
  "requirements_enabled": true,
  "requirements_access_level": "enabled",
  "security_and_compliance_enabled": false,
  "compliance_frameworks": [],
  "issues_template": null,
  "merge_requests_template": null,
  "merge_pipelines_enabled": false,
  "merge_trains_enabled": false
}]`)
			if err != nil {
				t.Fail()
			}
		case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=true&per_page=100&topic=&with_shared=false":
			fmt.Println("here")
			_, err := io.WriteString(w, `[{
  "id": 27084533,
  "description": "",
@ -406,7 +647,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
|||
t.Fail()
|
||||
}
|
||||
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=false&per_page=100&topic=specific-topic&with_shared=false":
|
||||
fmt.Println("here")
|
||||
_, err := io.WriteString(w, `[{
|
||||
"id": 27084533,
|
||||
"description": "",
|
||||
|
|
@ -537,7 +777,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
|||
t.Fail()
|
||||
}
|
||||
case "/api/v4/groups/test-argocd-proton/projects?include_subgroups=true&per_page=100&topic=&with_shared=true":
|
||||
fmt.Println("here")
|
||||
_, err := io.WriteString(w, `[{
|
||||
"id": 27084533,
|
||||
"description": "",
|
||||
|
|
@ -796,7 +1035,6 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
|
|||
t.Fail()
|
||||
}
|
||||
case "/api/v4/projects/27084533/repository/branches/master":
|
||||
fmt.Println("returning")
|
||||
_, err := io.WriteString(w, `{
|
||||
"name": "master",
|
||||
"commit": {
|
||||
|
|
@ -826,6 +1064,36 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v4/projects/56522142/repository/branches/master":
_, err := io.WriteString(w, `{
"name": "master",
"commit": {
"id": "9998d7999fc99dd0fd578650b58b244fc63f6b53",
"short_id": "9998d799",
"created_at": "2023-08-04T08:14:14.000+00:00",
"parent_ids": ["5d9d50be1ef949ad28674e238c7e12a17b1e9706", "99482e001731640b4123cf177e51c696f08a3005"],
"title": "Merge branch 'pipeline-4547911429' into 'master'",
"message": "Merge branch 'pipeline-4547911429' into 'master'\n\n[testapp-ci] manifests/demo/test-app.yaml: release v1.2.0\n\nSee merge request test-argocd-proton/argocd!3",
"author_name": "Martin Vozník",
"author_email": "martin@voznik.cz",
"authored_date": "2023-08-04T08:14:14.000+00:00",
"committer_name": "Martin Vozník",
"committer_email": "martin@voznik.cz",
"committed_date": "2023-08-04T08:14:14.000+00:00",
"trailers": {},
"web_url": "https://gitlab.com/test-argocd-proton/argocd/-/commit/9998d7999fc99dd0fd578650b58b244fc63f6b53"
},
"merged": false,
"protected": true,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": true,
"web_url": "https://gitlab.com/test-argocd-proton/argocd/-/tree/master"
}`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/27084533/repository/branches?per_page=100":
_, err := io.WriteString(w, `[{
"name": "master",

@ -991,8 +1259,62 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {
if err != nil {
t.Fail()
}
case "/api/v4/projects/56522142/repository/branches?per_page=100":
_, err := io.WriteString(w, `[{
"name": "master",
"commit": {
"id": "8898d8889fc99dd0fd578650b58b244fc63f6b58",
"short_id": "8898d801",
"created_at": "2021-06-04T08:24:44.000+00:00",
"parent_ids": null,
"title": "Merge branch 'pipeline-1317911429' into 'master'",
"message": "Merge branch 'pipeline-1317911429' into 'master'",
"author_name": "Martin Vozník",
"author_email": "martin@voznik.cz",
"authored_date": "2021-06-04T08:24:44.000+00:00",
"committer_name": "Martin Vozník",
"committer_email": "martin@voznik.cz",
"committed_date": "2021-06-04T08:24:44.000+00:00",
"trailers": null,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/commit/8898d7999fc99dd0fd578650b58b244fc63f6b53"
},
"merged": false,
"protected": true,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": true,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/tree/master"
}, {
"name": "pipeline-2310077506",
"commit": {
"id": "0f92540e5f396ba960adea4ed0aa905baf3f73d1",
"short_id": "0f92540e",
"created_at": "2021-06-01T18:39:59.000+00:00",
"parent_ids": null,
"title": "[testapp-ci] manifests/demo/test-app.yaml: release v1.0.1",
"message": "[testapp-ci] manifests/demo/test-app.yaml: release v1.0.1",
"author_name": "ci-test-app",
"author_email": "mvoznik+cicd@protonmail.com",
"authored_date": "2021-06-01T18:39:59.000+00:00",
"committer_name": "ci-test-app",
"committer_email": "mvoznik+cicd@protonmail.com",
"committed_date": "2021-06-01T18:39:59.000+00:00",
"trailers": null,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/commit/0f92540e5f396ba960adea4ed0aa905baf3f73d1"
},
"merged": false,
"protected": false,
"developers_can_push": false,
"developers_can_merge": false,
"can_push": false,
"default": false,
"web_url": "https://gitlab.com/test-argocd-proton/subgroup/argocd-subgroup/-/tree/pipeline-1310077506"
}]`)
if err != nil {
t.Fail()
}
case "/api/v4/projects/test-argocd-proton%2Fargocd":
fmt.Println("auct")
_, err := io.WriteString(w, `{
"id": 27084533,
"description": "",

@ -1079,35 +1401,94 @@ func gitlabMockHandler(t *testing.T) func(http.ResponseWriter, *http.Request) {

func TestGitlabListRepos(t *testing.T) {
cases := []struct {
name, proto, url, topic string
hasError, allBranches, includeSubgroups, includeSharedProjects, insecure bool
branches []string
filters []v1alpha1.SCMProviderGeneratorFilter
name, proto, topic string
hasError, allBranches, includeSubgroups, includeSharedProjects, includeArchivedRepos, insecure bool
branches []string
expectedRepos []*Repository
filters []v1alpha1.SCMProviderGeneratorFilter
}{
{
name: "blank protocol",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "blank protocol",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "ssh protocol",
proto: "ssh",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "ssh protocol",
proto: "ssh",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "labelmatch",
proto: "ssh",
url: "git@gitlab.com:test-argocd-proton/argocd.git",
name: "https protocol",
proto: "https",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "https://gitlab.com/test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "labelmatch",
proto: "ssh",
allBranches: false,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{
{
LabelMatch: new("test-topic"),
},
},
},
{
name: "https protocol",
proto: "https",
url: "https://gitlab.com/test-argocd-proton/argocd.git",

branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "other protocol",

@ -1115,34 +1496,133 @@ func TestGitlabListRepos(t *testing.T) {
hasError: true,
},
{
name: "all branches",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
name: "all branches",
allBranches: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

branches: []string{"master"},
expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
},
},
{
name: "all subgroups",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSharedProjects: false,
includeSubgroups: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic", "specific-topic"},
},
{
Organization: "",
Repository: "argocd-subgroup",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/subgroup/argocd-subgroup.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b58",
RepositoryId: int64(27084538),
Labels: []string{"test-topic"},
},
},
},
{
name: "all subgroups and shared projects",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSharedProjects: true,
includeSubgroups: true,
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic"},
},
{
Organization: "",
Repository: "shared-argocd",
Branch: "master",
URL: "git@gitlab.com:test-shared-argocd-proton/shared-argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084534),
Labels: []string{"test-topic"},
},
},
},
{
name: "specific topic",
allBranches: true,
url: "git@gitlab.com:test-argocd-proton/argocd.git",
branches: []string{"master"},
includeSubgroups: false,
topic: "specific-topic",
name: "specific topic",
allBranches: true,
branches: []string{"master"},
includeSubgroups: false,
topic: "specific-topic",
includeArchivedRepos: false,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{"test-topic", "specific-topic"},
},
},
},
{
name: "all branches with archived repos",
allBranches: true,
branches: []string{"master"},
includeSubgroups: false,
includeArchivedRepos: true,
filters: []v1alpha1.SCMProviderGeneratorFilter{},

expectedRepos: []*Repository{
{
Organization: "",
Repository: "argocd",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/argocd.git",
SHA: "8898d7999fc99dd0fd578650b58b244fc63f6b53",
RepositoryId: int64(27084533),
Labels: []string{},
},
{
Organization: "",
Repository: "another-repo",
Branch: "master",
URL: "git@gitlab.com:test-argocd-proton/another-repo.git",
SHA: "8898d8889fc99dd0fd578650b58b244fc63f6b58",
RepositoryId: int64(56522142),
Labels: []string{"test-topic"},
},
},
},
}
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {

@ -1150,28 +1630,24 @@ func TestGitlabListRepos(t *testing.T) {
}))
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
provider, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, c.allBranches, c.includeSubgroups, c.includeSharedProjects, c.insecure, "", c.topic, nil)
provider, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, c.allBranches, c.includeSubgroups, c.includeSharedProjects, c.includeArchivedRepos, c.insecure, "", c.topic, nil)
rawRepos, err := ListRepos(t.Context(), provider, c.filters, c.proto)
if c.hasError {
require.Error(t, err)
} else {
require.NoError(t, err)
// Just check that this one project shows up. Not a great test but better than nothing?

repos := []*Repository{}
uniqueRepos := map[string]int{}
branches := []string{}

for _, r := range rawRepos {
if r.Repository == "argocd" {
if _, ok := uniqueRepos[r.Repository]; !ok {
repos = append(repos, r)
branches = append(branches, r.Branch)
}
uniqueRepos[r.Repository]++
}
assert.NotEmpty(t, repos)
assert.Equal(t, c.url, repos[0].URL)
for _, b := range c.branches {
assert.Contains(t, branches, b)
}

// In case of listing subgroups, validate the number of returned projects
if c.includeSubgroups || c.includeSharedProjects {
assert.Len(t, uniqueRepos, 2)

@ -1180,6 +1656,8 @@ func TestGitlabListRepos(t *testing.T) {
if c.topic != "" {
assert.Len(t, uniqueRepos, 1)
}
assert.Len(t, repos, len(c.expectedRepos))
assert.ElementsMatch(t, c.expectedRepos, repos)
}
})
}

@ -1189,7 +1667,7 @@ func TestGitlabHasPath(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
gitlabMockHandler(t)(w, r)
}))
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, "", "", nil)
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, false, "", "", nil)
repo := &Repository{
Organization: "test-argocd-proton",
Repository: "argocd",

@ -1245,10 +1723,10 @@ func TestGitlabGetBranches(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
gitlabMockHandler(t)(w, r)
}))
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, "", "", nil)
host, _ := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, false, "", "", nil)

repo := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "master",
}
t.Run("branch exists", func(t *testing.T) {

@ -1258,7 +1736,7 @@ func TestGitlabGetBranches(t *testing.T) {
})

repo2 := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "foo",
}
t.Run("unknown branch", func(t *testing.T) {

@ -1321,10 +1799,10 @@ func TestGetBranchesTLS(t *testing.T) {
}
}

host, err := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, test.tlsInsecure, "", "", certs)
host, err := NewGitlabProvider("test-argocd-proton", "", ts.URL, false, true, true, false, test.tlsInsecure, "", "", certs)
require.NoError(t, err)
repo := &Repository{
RepositoryId: 27084533,
RepositoryId: int64(27084533),
Branch: "master",
}
_, err = host.GetBranches(t.Context(), repo)

@ -24,6 +24,43 @@ import (
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
)

var appEquality = conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)

// BuildIgnoreDiffConfig constructs a DiffConfig from the ApplicationSet's ignoreDifferences rules.
// Returns nil when ignoreDifferences is empty.
func BuildIgnoreDiffConfig(ignoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) (argodiff.DiffConfig, error) {
if len(ignoreDifferences) == 0 {
return nil, nil
}
return argodiff.NewDiffConfigBuilder().
WithDiffSettings(ignoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
}

// CreateOrUpdate overrides "sigs.k8s.io/controller-runtime" function
// in sigs.k8s.io/controller-runtime/pkg/controller/controllerutil/controllerutil.go
// to add equality for argov1alpha1.ApplicationDestination

@ -34,10 +71,15 @@ import (
// cluster. The object's desired state must be reconciled with the existing
// state inside the passed in callback MutateFn.
//
// diffConfig must be built once per reconcile cycle via BuildIgnoreDiffConfig and may be nil
// when there are no ignoreDifferences rules. obj.Spec must already be normalized by the caller
// via NormalizeApplicationSpec before this function is called; the live object fetched from the
// cluster is normalized internally.
//
// The MutateFn is called regardless of creating or updating an object.
//
// It returns the executed operation and an error.
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ignoreAppDifferences argov1alpha1.ApplicationSetIgnoreDifferences, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, diffConfig argodiff.DiffConfig, obj *argov1alpha1.Application, f controllerutil.MutateFn) (controllerutil.OperationResult, error) {
key := client.ObjectKeyFromObject(obj)
if err := c.Get(ctx, key, obj); err != nil {
if !errors.IsNotFound(err) {

@ -59,43 +101,18 @@ func CreateOrUpdate(ctx context.Context, logCtx *log.Entry, c client.Client, ign
return controllerutil.OperationResultNone, err
}

// Normalize the live spec to avoid spurious diffs from unimportant differences (e.g. nil vs
// empty SyncPolicy). obj.Spec is already normalized by the caller; only the live side needs it.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)

// Apply ignoreApplicationDifferences rules to remove ignored fields from both the live and the desired state. This
// prevents those differences from appearing in the diff and therefore in the patch.
err := applyIgnoreDifferences(ignoreAppDifferences, normalizedLive, obj, ignoreNormalizerOpts)
err := applyIgnoreDifferences(diffConfig, normalizedLive, obj)
if err != nil {
return controllerutil.OperationResultNone, fmt.Errorf("failed to apply ignore differences: %w", err)
}

// Normalize to avoid diffing on unimportant differences.
normalizedLive.Spec = *argo.NormalizeApplicationSpec(&normalizedLive.Spec)
obj.Spec = *argo.NormalizeApplicationSpec(&obj.Spec)

equality := conversion.EqualitiesOrDie(
func(a, b resource.Quantity) bool {
// Ignore formatting, only care that numeric value stayed the same.
// TODO: if we decide it's important, it should be safe to start comparing the format.
//
// Uninitialized quantities are equivalent to 0 quantities.
return a.Cmp(b) == 0
},
func(a, b metav1.MicroTime) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b metav1.Time) bool {
return a.UTC().Equal(b.UTC())
},
func(a, b labels.Selector) bool {
return a.String() == b.String()
},
func(a, b fields.Selector) bool {
return a.String() == b.String()
},
func(a, b argov1alpha1.ApplicationDestination) bool {
return a.Namespace == b.Namespace && a.Name == b.Name && a.Server == b.Server
},
)

if equality.DeepEqual(normalizedLive, obj) {
if appEquality.DeepEqual(normalizedLive, obj) {
return controllerutil.OperationResultNone, nil
}

@ -135,19 +152,13 @@ func mutate(f controllerutil.MutateFn, key client.ObjectKey, obj client.Object)
}

// applyIgnoreDifferences applies the ignore differences rules to the found application. It modifies the applications in place.
func applyIgnoreDifferences(applicationSetIgnoreDifferences argov1alpha1.ApplicationSetIgnoreDifferences, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application, ignoreNormalizerOpts normalizers.IgnoreNormalizerOpts) error {
if len(applicationSetIgnoreDifferences) == 0 {
// diffConfig may be nil, in which case this is a no-op.
func applyIgnoreDifferences(diffConfig argodiff.DiffConfig, found *argov1alpha1.Application, generatedApp *argov1alpha1.Application) error {
if diffConfig == nil {
return nil
}

generatedAppCopy := generatedApp.DeepCopy()
diffConfig, err := argodiff.NewDiffConfigBuilder().
WithDiffSettings(applicationSetIgnoreDifferences.ToApplicationIgnoreDifferences(), nil, false, ignoreNormalizerOpts).
WithNoCache().
Build()
if err != nil {
return fmt.Errorf("failed to build diff config: %w", err)
}
unstructuredFound, err := appToUnstructured(found)
if err != nil {
return fmt.Errorf("failed to convert found application to unstructured: %w", err)

@ -5,7 +5,7 @@ import (

"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"

@ -224,7 +224,9 @@ spec:
generatedApp := v1alpha1.Application{TypeMeta: appMeta}
err = yaml.Unmarshal([]byte(tc.generatedApp), &generatedApp)
require.NoError(t, err, tc.generatedApp)
err = applyIgnoreDifferences(tc.ignoreDifferences, &foundApp, &generatedApp, normalizers.IgnoreNormalizerOpts{})
diffConfig, err := BuildIgnoreDiffConfig(tc.ignoreDifferences, normalizers.IgnoreNormalizerOpts{})
require.NoError(t, err)
err = applyIgnoreDifferences(diffConfig, &foundApp, &generatedApp)
require.NoError(t, err)
yamlFound, err := yaml.Marshal(tc.foundApp)
require.NoError(t, err)

assets/swagger.json (generated)

@ -4039,6 +4039,30 @@
"description": "Whether https should be disabled for an OCI repo.",
"name": "insecureOciForceHttp",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client ID.",
"name": "azureServicePrincipalClientId",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client Secret.",
"name": "azureServicePrincipalClientSecret",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Tenant ID.",
"name": "azureServicePrincipalTenantId",
"in": "query"
},
{
"type": "string",
"description": "Azure Active Directory Endpoint.",
"name": "azureActiveDirectoryEndpoint",
"in": "query"
}
],
"responses": {

@ -4946,6 +4970,30 @@
"description": "Whether https should be disabled for an OCI repo.",
"name": "insecureOciForceHttp",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client ID.",
"name": "azureServicePrincipalClientId",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Client Secret.",
"name": "azureServicePrincipalClientSecret",
"in": "query"
},
{
"type": "string",
"description": "Azure Service Principal Tenant ID.",
"name": "azureServicePrincipalTenantId",
"in": "query"
},
{
"type": "string",
"description": "Azure Active Directory Endpoint.",
"name": "azureActiveDirectoryEndpoint",
"in": "query"
}
],
"responses": {

@ -9519,6 +9567,22 @@
"type": "object",
"title": "RepoCreds holds the definition for repository credentials",
"properties": {
"azureActiveDirectoryEndpoint": {
"type": "string",
"title": "AzureActiveDirectoryEndpoint specifies the Azure Active Directory endpoint used for Service Principal authentication. If empty will default to https://login.microsoftonline.com"
},
"azureServicePrincipalClientId": {
"type": "string",
"title": "AzureServicePrincipalClientId specifies the client ID of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalClientSecret": {
"type": "string",
"title": "AzureServicePrincipalClientSecret specifies the client secret of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalTenantId": {
"type": "string",
"title": "AzureServicePrincipalTenantId specifies the tenant ID of the Azure Service Principal used to access the repo"
},
"bearerToken": {
"type": "string",
"title": "BearerToken contains the bearer token used for Git BitBucket Data Center auth at the repo server"

@ -9618,6 +9682,22 @@
"type": "object",
"title": "Repository is a repository holding application configurations",
"properties": {
"azureActiveDirectoryEndpoint": {
"type": "string",
"title": "AzureActiveDirectoryEndpoint specifies the Azure Active Directory endpoint used for Service Principal authentication. If empty will default to https://login.microsoftonline.com"
},
"azureServicePrincipalClientId": {
"type": "string",
"title": "AzureServicePrincipalClientId specifies the client ID of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalClientSecret": {
"type": "string",
"title": "AzureServicePrincipalClientSecret specifies the client secret of the Azure Service Principal used to access the repo"
},
"azureServicePrincipalTenantId": {
"type": "string",
"title": "AzureServicePrincipalTenantId specifies the tenant ID of the Azure Service Principal used to access the repo"
},
"bearerToken": {
"type": "string",
"title": "BearerToken contains the bearer token used for Git BitBucket Data Center auth at the repo server"

@ -9727,6 +9807,10 @@
"username": {
"type": "string",
"title": "Username contains the user name used for authenticating at the remote repository"
},
"webhookManifestCacheWarmDisabled": {
"description": "WebhookManifestCacheWarmDisabled disables manifest cache warming during webhook processing for this repository.\nWhen set, webhook handlers will only trigger reconciliation for affected applications and skip Redis cache\noperations for unaffected ones. Recommended for large monorepos with plain YAML manifests.",
"type": "boolean"
}
}
},

@ -10414,6 +10498,10 @@
"description": "The Gitea URL to talk to. For example https://gitea.mydomain.com/.",
"type": "string"
},
"excludeArchivedRepos": {
"description": "Exclude repositories that are archived.",
"type": "boolean"
},
"insecure": {
"type": "boolean",
"title": "Allow self-signed TLS / Certificates; default: false"

@ -10443,6 +10531,10 @@
"description": "AppSecretName is a reference to a GitHub App repo-creds secret.",
"type": "string"
},
"excludeArchivedRepos": {
"description": "Exclude repositories that are archived.",
"type": "boolean"
},
"organization": {
"description": "GitHub org to scan. Required.",
"type": "string"

@ -10471,6 +10563,10 @@
"description": "Gitlab group to scan. Required. You can use either the project id (recommended) or the full namespaced path.",
"type": "string"
},
"includeArchivedRepos": {
"description": "Include repositories that are archived.",
"type": "boolean"
},
"includeSharedProjects": {
"type": "boolean",
"title": "When recursing through subgroups, also include shared Projects (true) or scan only the subgroups under same path (false). Defaults to \"true\""

@ -10839,6 +10935,10 @@
"type": "string",
"title": "Schedule is the time the window will begin, specified in cron format"
},
"syncOverrun": {
"type": "boolean",
"title": "SyncOverrun allows ongoing syncs to continue in two scenarios:\nFor deny windows: allows syncs that started before the deny window became active to continue running\nFor allow windows: allows syncs that started during the allow window to continue after the window ends"
},
"timeZone": {
"type": "string",
"title": "TimeZone of the sync that will be applied to the schedule"

@ -79,6 +79,7 @@ func NewCommand() *cobra.Command {
tokenRefStrictMode bool
maxResourcesStatusCount int
cacheSyncPeriod time.Duration
concurrentApplicationUpdates int
)
scheme := runtime.NewScheme()
_ = clientgoscheme.AddToScheme(scheme)

@ -239,24 +240,25 @@ func NewCommand() *cobra.Command {
})

if err = (&controllers.ApplicationSetReconciler{
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
Generators: topLevelGenerators,
Client: utils.NewCacheSyncingClient(mgr.GetClient(), mgr.GetCache()),
Scheme: mgr.GetScheme(),
Recorder: mgr.GetEventRecorderFor("applicationset-controller"),
Renderer: &utils.Render{},
Policy: policyObj,
EnablePolicyOverride: enablePolicyOverride,
KubeClientset: k8sClient,
ArgoDB: argoCDDB,
ArgoCDNamespace: namespace,
ApplicationSetNamespaces: applicationSetNamespaces,
EnableProgressiveSyncs: enableProgressiveSyncs,
SCMRootCAPath: scmRootCAPath,
GlobalPreservedAnnotations: globalPreservedAnnotations,
GlobalPreservedLabels: globalPreservedLabels,
Metrics: &metrics,
MaxResourcesStatusCount: maxResourcesStatusCount,
ClusterInformer: clusterInformer,
ConcurrentApplicationUpdates: concurrentApplicationUpdates,
}).SetupWithManager(mgr, enableProgressiveSyncs, maxConcurrentReconciliations); err != nil {
log.Error(err, "unable to create controller", "controller", "ApplicationSet")
os.Exit(1)

@@ -303,6 +305,7 @@ func NewCommand() *cobra.Command {
 	command.Flags().BoolVar(&enableGitHubAPIMetrics, "enable-github-api-metrics", env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_ENABLE_GITHUB_API_METRICS", false), "Enable GitHub API metrics for generators that use the GitHub API")
 	command.Flags().IntVar(&maxResourcesStatusCount, "max-resources-status-count", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_MAX_RESOURCES_STATUS_COUNT", 5000, 0, math.MaxInt), "Max number of resources stored in appset status.")
 	command.Flags().DurationVar(&cacheSyncPeriod, "cache-sync-period", env.ParseDurationFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CACHE_SYNC_PERIOD", time.Hour*10, 0, time.Hour*24), "Period at which the manager client cache is forcefully resynced with the Kubernetes API server. 0 disables periodic resync.")
+	command.Flags().IntVar(&concurrentApplicationUpdates, "concurrent-application-updates", env.ParseNumFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_CONCURRENT_APPLICATION_UPDATES", 1, 1, 200), "Number of concurrent Application create/update/delete operations per ApplicationSet reconcile.")

 	return &command
 }
@@ -0,0 +1,28 @@
+package command
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func TestNewCommand_ConcurrentApplicationUpdatesFlag(t *testing.T) {
+	cmd := NewCommand()
+
+	flag := cmd.Flags().Lookup("concurrent-application-updates")
+	require.NotNil(t, flag, "expected --concurrent-application-updates flag to be registered")
+	assert.Equal(t, "int", flag.Value.Type())
+	assert.Equal(t, "1", flag.DefValue, "default should be 1")
+}
+
+func TestNewCommand_ConcurrentApplicationUpdatesFlagValue(t *testing.T) {
+	cmd := NewCommand()
+
+	err := cmd.Flags().Set("concurrent-application-updates", "5")
+	require.NoError(t, err)
+
+	val, err := cmd.Flags().GetInt("concurrent-application-updates")
+	require.NoError(t, err)
+	assert.Equal(t, 5, val)
+}
@@ -34,6 +34,7 @@ import (
 	"github.com/argoproj/argo-cd/v3/util/dex"
 	"github.com/argoproj/argo-cd/v3/util/env"
 	"github.com/argoproj/argo-cd/v3/util/errors"
+	utilglob "github.com/argoproj/argo-cd/v3/util/glob"
 	"github.com/argoproj/argo-cd/v3/util/kube"
 	"github.com/argoproj/argo-cd/v3/util/templates"
 	"github.com/argoproj/argo-cd/v3/util/tls"
@@ -87,6 +88,7 @@ func NewCommand() *cobra.Command {
 		applicationNamespaces  []string
 		enableProxyExtension   bool
 		webhookParallelism     int
+		globCacheSize          int
 		hydratorEnabled        bool
 		syncWithReplaceAllowed bool

@@ -122,6 +124,7 @@ func NewCommand() *cobra.Command {
 			cli.SetLogFormat(cmdutil.LogFormat)
 			cli.SetLogLevel(cmdutil.LogLevel)
 			cli.SetGLogLevel(glogLevel)
+			utilglob.SetCacheSize(globCacheSize)

 			// Recover from panic and log the error using the configured logger instead of the default.
 			defer func() {
@@ -326,6 +329,7 @@ func NewCommand() *cobra.Command {
 	command.Flags().StringSliceVar(&applicationNamespaces, "application-namespaces", env.StringsFromEnv("ARGOCD_APPLICATION_NAMESPACES", []string{}, ","), "List of additional namespaces where application resources can be managed in")
 	command.Flags().BoolVar(&enableProxyExtension, "enable-proxy-extension", env.ParseBoolFromEnv("ARGOCD_SERVER_ENABLE_PROXY_EXTENSION", false), "Enable Proxy Extension feature")
 	command.Flags().IntVar(&webhookParallelism, "webhook-parallelism-limit", env.ParseNumFromEnv("ARGOCD_SERVER_WEBHOOK_PARALLELISM_LIMIT", 50, 1, 1000), "Number of webhook requests processed concurrently")
+	command.Flags().IntVar(&globCacheSize, "glob-cache-size", env.ParseNumFromEnv("ARGOCD_SERVER_GLOB_CACHE_SIZE", utilglob.DefaultGlobCacheSize, 1, math.MaxInt32), "Maximum number of compiled glob patterns to cache for RBAC evaluation")
 	command.Flags().StringSliceVar(&enableK8sEvent, "enable-k8s-event", env.StringsFromEnv("ARGOCD_ENABLE_K8S_EVENT", argo.DefaultEnableEventList(), ","), "Enable ArgoCD to use k8s event. For disabling all events, set the value as `none`. (e.g --enable-k8s-event=none), For enabling specific events, set the value as `event reason`. (e.g --enable-k8s-event=StatusRefreshed,ResourceCreated)")
 	command.Flags().BoolVar(&hydratorEnabled, "hydrator-enabled", env.ParseBoolFromEnv("ARGOCD_HYDRATOR_ENABLED", false), "Feature flag to enable Hydrator. Default (\"false\")")
 	command.Flags().BoolVar(&syncWithReplaceAllowed, "sync-with-replace-allowed", env.ParseBoolFromEnv("ARGOCD_SYNC_WITH_REPLACE_ALLOWED", true), "Whether to allow users to select replace for syncs from UI/CLI")
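The `--glob-cache-size` flag caps how many compiled glob patterns `util/glob` keeps around for RBAC evaluation. The actual cache implementation is not shown in this diff; as a hedged sketch of the idea, here is a size-bounded cache of compiled patterns, using `regexp` as a stand-in for a compiled glob type (the stdlib has none) and naive eviction:

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

// patternCache is a sketch of a size-bounded cache of compiled patterns,
// in the spirit of the glob-cache-size flag above. All names here are
// illustrative; Argo CD's util/glob caches glob matchers, not regexps.
type patternCache struct {
	mu    sync.Mutex
	max   int
	cache map[string]*regexp.Regexp
}

func newPatternCache(max int) *patternCache {
	return &patternCache{max: max, cache: make(map[string]*regexp.Regexp)}
}

func (c *patternCache) get(pattern string) (*regexp.Regexp, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if re, ok := c.cache[pattern]; ok {
		return re, nil // cache hit: skip recompilation
	}
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	if len(c.cache) >= c.max {
		for k := range c.cache { // evict an arbitrary entry to stay bounded
			delete(c.cache, k)
			break
		}
	}
	c.cache[pattern] = re
	return re, nil
}

func main() {
	c := newPatternCache(2)
	re, _ := c.get(`^app-.*$`)
	fmt.Println(re.MatchString("app-prod")) // true
}
```

A real implementation would use LRU eviction rather than dropping an arbitrary entry, but the bound on memory is the same.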
@@ -127,7 +127,7 @@ has appropriate RBAC permissions to change other accounts.

 	_, err := usrIf.UpdatePassword(ctx, &updatePasswordRequest)
 	errors.CheckError(err)
-	fmt.Printf("Password updated\n")
+	fmt.Print("Password updated\n")

 	if account == "" || account == userInfo.Username {
 		// Get a new JWT token after updating the password
@@ -254,7 +254,7 @@ func printAccountNames(accounts []*accountpkg.Account) {

 func printAccountsTable(items []*accountpkg.Account) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "NAME\tENABLED\tCAPABILITIES\n")
+	fmt.Fprint(w, "NAME\tENABLED\tCAPABILITIES\n")
 	for _, a := range items {
 		fmt.Fprintf(w, "%s\t%v\t%s\n", a.Name, a.Enabled, strings.Join(a.Capabilities, ", "))
 	}
@@ -356,7 +356,7 @@ func printAccountDetails(acc *accountpkg.Account) {
 		fmt.Println("NONE")
 	} else {
 		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-		fmt.Fprintf(w, "ID\tISSUED AT\tEXPIRING AT\n")
+		fmt.Fprint(w, "ID\tISSUED AT\tEXPIRING AT\n")
 		for _, t := range acc.Tokens {
 			expiresAtFormatted := "never"
 			if t.ExpiresAt > 0 {
@@ -240,7 +240,7 @@ func printStatsSummary(clusters []ClusterWithInfo) {

 	avgResourcesByShard := totalResourcesCount / int64(len(resourcesCountByShard))
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	_, _ = fmt.Fprintf(w, "SHARD\tRESOURCES COUNT\n")
+	_, _ = fmt.Fprint(w, "SHARD\tRESOURCES COUNT\n")
 	for shard := 0; shard < len(resourcesCountByShard); shard++ {
 		cnt := resourcesCountByShard[shard]
 		percent := (float64(cnt) / float64(avgResourcesByShard)) * 100.0
@@ -318,7 +318,7 @@ func NewClusterNamespacesCommand() *cobra.Command {

 	err := runClusterNamespacesCommand(ctx, clientConfig, func(_ *versioned.Clientset, _ db.ArgoDB, clusters map[string][]string) error {
 		w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-		_, _ = fmt.Fprintf(w, "CLUSTER\tNAMESPACES\n")
+		_, _ = fmt.Fprint(w, "CLUSTER\tNAMESPACES\n")

 		for cluster, namespaces := range clusters {
 			// print shortest namespace names first
@@ -495,7 +495,7 @@ argocd admin cluster stats target-cluster`,
 	errors.CheckError(err)

 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	_, _ = fmt.Fprintf(w, "SERVER\tSHARD\tCONNECTION\tNAMESPACES COUNT\tAPPS COUNT\tRESOURCES COUNT\n")
+	_, _ = fmt.Fprint(w, "SERVER\tSHARD\tCONNECTION\tNAMESPACES COUNT\tAPPS COUNT\tRESOURCES COUNT\n")
 	for _, cluster := range clusters {
 		_, _ = fmt.Fprintf(w, "%s\t%d\t%s\t%d\t%d\t%d\n", cluster.Server, cluster.Shard, cluster.Info.ConnectionState.Status, len(cluster.Namespaces), cluster.Info.ApplicationsCount, cluster.Info.CacheInfo.ResourcesCount)
 	}
@@ -149,6 +149,7 @@ func NewGenRepoSpecCommand() *cobra.Command {
 	repoOpts.Repo.EnableOCI = repoOpts.EnableOci
 	repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
 	repoOpts.Repo.InsecureOCIForceHttp = repoOpts.InsecureOCIForceHTTP
+	repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled

 	if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
 		errors.CheckError(stderrors.New("must specify --name for repos of type 'helm'"))
@@ -313,7 +313,7 @@ argocd admin settings validate --group accounts --group plugins --load-cluster-s
 			_, _ = fmt.Fprintf(os.Stdout, "%s\n", logs)
 		}
 		if i != len(groups)-1 {
-			_, _ = fmt.Fprintf(os.Stdout, "\n")
+			_, _ = fmt.Fprint(os.Stdout, "\n")
 		}
 	}
 },
@@ -429,7 +429,7 @@ argocd admin settings resource-overrides ignore-differences ./deploy.yaml --argo
 			return
 		}

-		_, _ = fmt.Printf("Following fields are ignored:\n\n")
+		_, _ = fmt.Print("Following fields are ignored:\n\n")
 		_ = cli.PrintDiff(res.GetName(), &res, normalizedRes)
 	})
 },
@@ -476,7 +476,7 @@ argocd admin settings resource-overrides ignore-resource-updates ./deploy.yaml -
 			return
 		}

-		_, _ = fmt.Printf("Following fields are ignored:\n\n")
+		_, _ = fmt.Print("Following fields are ignored:\n\n")
 		_ = cli.PrintDiff(res.GetName(), &res, normalizedRes)
 	})
 },
@@ -551,7 +551,7 @@ argocd admin settings resource-overrides action list /tmp/deploy.yaml --argocd-c
 	})

 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	_, _ = fmt.Fprintf(w, "NAME\tDISABLED\n")
+	_, _ = fmt.Fprint(w, "NAME\tDISABLED\n")
 	for _, action := range availableActions {
 		_, _ = fmt.Fprintf(w, "%s\t%s\n", action.Name, strconv.FormatBool(action.Disabled))
 	}
@@ -622,7 +622,7 @@ argocd admin settings resource-overrides action /tmp/deploy.yaml restart --argoc
 			return
 		}

-		_, _ = fmt.Printf("Following fields have been changed:\n\n")
+		_, _ = fmt.Print("Following fields have been changed:\n\n")
 		_ = cli.PrintDiff(res.GetName(), &res, result)
 	case lua.CreateOperation:
 		yamlBytes, err := yaml.Marshal(impactedResource.UnstructuredObj)
@@ -182,7 +182,7 @@ argocd admin settings rbac can someuser create application 'default/app' --defau
 	// Exactly one of --namespace or --policy-file must be given.
 	if (!nsOverride && policyFile == "") || (nsOverride && policyFile != "") {
 		c.HelpFunc()(c, args)
-		log.Fatalf("please provide exactly one of --policy-file or --namespace")
+		log.Fatal("please provide exactly one of --policy-file or --namespace")
 	}

 	restConfig, err := clientConfig.ClientConfig()
@@ -264,12 +264,12 @@ argocd admin settings rbac validate --namespace argocd

 	if len(args) > 0 {
 		c.HelpFunc()(c, args)
-		log.Fatalf("too many arguments")
+		log.Fatal("too many arguments")
 	}

 	if (namespace == "" && policyFile == "") || (namespace != "" && policyFile != "") {
 		c.HelpFunc()(c, args)
-		log.Fatalf("please provide exactly one of --policy-file or --namespace")
+		log.Fatal("please provide exactly one of --policy-file or --namespace")
 	}

 	restConfig, err := clientConfig.ClientConfig()
@@ -284,13 +284,13 @@ argocd admin settings rbac validate --namespace argocd
 	userPolicy, _, _ := getPolicy(ctx, policyFile, realClientset, namespace)
 	if userPolicy != "" {
 		if err := rbac.ValidatePolicy(userPolicy); err == nil {
-			fmt.Printf("Policy is valid.\n")
+			fmt.Print("Policy is valid.\n")
 			os.Exit(0)
 		}
 		fmt.Printf("Policy is invalid: %v\n", err)
 		os.Exit(1)
 	}
-	log.Fatalf("Policy is empty or could not be loaded.")
+	log.Fatal("Policy is empty or could not be loaded.")
 },
 }
 clientConfig = cli.AddKubectlFlagsToCmd(command)
@@ -693,7 +693,7 @@ func printAppSummaryTable(app *argoappv1.Application, appURL string, windows *ar
 	}

 	if deny || !deny && !allow && inactiveAllows {
-		s, err := windows.CanSync(true)
+		s, err := windows.CanSync(true, nil)
 		if err == nil && s {
 			status = "Manual Allowed"
 		} else {
@@ -757,7 +757,7 @@ func printAppSourceDetails(appSrc *argoappv1.ApplicationSource) {
 }

 func printAppConditions(w io.Writer, app *argoappv1.Application) {
-	_, _ = fmt.Fprintf(w, "CONDITION\tMESSAGE\tLAST TRANSITION\n")
+	_, _ = fmt.Fprint(w, "CONDITION\tMESSAGE\tLAST TRANSITION\n")
 	for _, item := range app.Status.Conditions {
 		_, _ = fmt.Fprintf(w, "%s\t%s\t%s\n", item.Type, item.Message, item.LastTransitionTime)
 	}
@@ -829,7 +829,7 @@ func printHelmParams(helm *argoappv1.ApplicationSourceHelm) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
 	if helm != nil {
 		fmt.Println()
-		_, _ = fmt.Fprintf(w, "NAME\tVALUE\n")
+		_, _ = fmt.Fprint(w, "NAME\tVALUE\n")
 		for _, p := range helm.Parameters {
 			_, _ = fmt.Fprintf(w, "%s\t%s\n", p.Name, truncateString(p.Value, paramLenLimit))
 		}
@@ -1365,7 +1365,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
 		serverSideDiff = hasServerSideDiffAnnotation
 	} else if serverSideDiff && !hasServerSideDiffAnnotation {
 		// Flag explicitly set to true, but app annotation is not set
-		fmt.Fprintf(os.Stderr, "Warning: Application does not have ServerSideDiff=true annotation.\n")
+		fmt.Fprint(os.Stderr, "Warning: Application does not have ServerSideDiff=true annotation.\n")
 	}

 	// Server side diff with local requires server side generate to be set as there will be a mismatch with client-generated manifests.
@@ -1418,7 +1418,7 @@ func NewApplicationDiffCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co

 		diffOption.serversideRes = res
 	} else {
-		fmt.Fprintf(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.")
+		fmt.Fprint(os.Stderr, "Warning: local diff without --server-side-generate is deprecated and does not work with plugins. Server-side generation will be the default in v2.7.")
 		conn, clusterIf := clientset.NewClusterClientOrDie()
 		defer utilio.Close(conn)
 		cluster, err := clusterIf.Get(ctx, &clusterpkg.ClusterQuery{Name: app.Spec.Destination.Name, Server: app.Spec.Destination.Server})
@@ -2104,7 +2104,7 @@ func NewApplicationWaitCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co

 // printAppResources prints the resources of an application in a tabwriter table
 func printAppResources(w io.Writer, app *argoappv1.Application) {
-	_, _ = fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
+	_, _ = fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tSTATUS\tHEALTH\tHOOK\tMESSAGE\n")
 	for _, res := range getResourceStates(app, nil) {
 		_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n", res.Group, res.Kind, res.Namespace, res.Name, res.Status, res.Health, res.Hook, res.Message)
 	}
@@ -2112,7 +2112,7 @@ func printAppResources(w io.Writer, app *argoappv1.Application) {

 func printTreeView(nodeMapping map[string]argoappv1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, mapNodeNameToResourceState map[string]*resourceState) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	_, _ = fmt.Fprintf(w, "KIND/NAME\tSTATUS\tHEALTH\tMESSAGE\n")
+	_, _ = fmt.Fprint(w, "KIND/NAME\tSTATUS\tHEALTH\tMESSAGE\n")
 	for uid := range parentNodes {
 		treeViewAppGet("", nodeMapping, parentChildMapping, nodeMapping[uid], mapNodeNameToResourceState, w)
 	}
@@ -2121,7 +2121,7 @@ func printTreeView(nodeMapping map[string]argoappv1.ResourceNode, parentChildMap

 func printTreeViewDetailed(nodeMapping map[string]argoappv1.ResourceNode, parentChildMapping map[string][]string, parentNodes map[string]struct{}, mapNodeNameToResourceState map[string]*resourceState) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "KIND/NAME\tSTATUS\tHEALTH\tAGE\tMESSAGE\tREASON\n")
+	fmt.Fprint(w, "KIND/NAME\tSTATUS\tHEALTH\tAGE\tMESSAGE\tREASON\n")
 	for uid := range parentNodes {
 		detailedTreeViewAppGet("", nodeMapping, parentChildMapping, nodeMapping[uid], mapNodeNameToResourceState, w)
 	}
@@ -2334,7 +2334,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co

 	if app.Spec.HasMultipleSources() {
 		if revision != "" {
-			log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-position instead.")
+			log.Fatal("argocd cli does not work on multi-source app with --revision flag. Use --revisions and --source-positions instead.")
 			return
 		}
@@ -2453,7 +2453,7 @@ func NewApplicationSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co

 	foundDiffs = findAndPrintDiff(ctx, app, proj.Project, resources, argoSettings, diffOption, ignoreNormalizerOpts, serverSideDiff, appIf, appName, appNs, serverSideDiffConcurrency, serverSideDiffMaxBatchKB)
 	if !foundDiffs {
-		fmt.Printf("====== No Differences found ======\n")
+		fmt.Print("====== No Differences found ======\n")
 		// if no differences found, then no need to sync
 		return
 	}
@@ -2973,7 +2973,7 @@ func setParameterOverrides(app *argoappv1.Application, parameters []string, sour
 			source.Helm.AddParameter(*newParam)
 		}
 	default:
-		log.Fatalf("Parameters can only be set against Helm applications")
+		log.Fatal("Parameters can only be set against Helm applications")
 	}
 }
@@ -3028,13 +3028,13 @@ func printApplicationHistoryTable(revHistory []argoappv1.RevisionHistory) {
 	}
 	for i, key := range varHistoryKeys {
 		_, _ = fmt.Fprintf(w, "SOURCE\t%s\n", key)
-		_, _ = fmt.Fprintf(w, "ID\tDATE\tREVISION\n")
+		_, _ = fmt.Fprint(w, "ID\tDATE\tREVISION\n")
 		for _, history := range varHistory[key] {
 			_, _ = fmt.Fprintf(w, "%d\t%s\t%s\n", history.id, history.date, history.revision)
 		}
 		// Add a newline if it's not the last iteration
 		if i < len(varHistoryKeys)-1 {
-			_, _ = fmt.Fprintf(w, "\n")
+			_, _ = fmt.Fprint(w, "\n")
 		}
 	}
 	_ = w.Flush()
@@ -124,7 +124,7 @@ func NewApplicationResourceActionsListCommand(clientOpts *argocdclient.ClientOpt
 	fmt.Println(string(jsonBytes))
 case "":
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "GROUP\tKIND\tNAME\tACTION\tDISABLED\n")
+	fmt.Fprint(w, "GROUP\tKIND\tNAME\tACTION\tDISABLED\n")
 	for _, action := range availableActions {
 		fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", action.Group, action.Kind, action.Name, action.Action, strconv.FormatBool(action.Disabled))
 	}
@@ -8,7 +8,7 @@ import (
 	"strings"
 	"text/tabwriter"

-	"gopkg.in/yaml.v3"
+	"go.yaml.in/yaml/v3"

 	"github.com/argoproj/argo-cd/v3/util/templates"
@@ -217,9 +217,9 @@ func reconstructObject(extracted []any, fields []string, depth int) map[string]a
 func printManifests(objs *[]unstructured.Unstructured, filteredFields bool, showName bool, output string) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
 	if showName {
-		fmt.Fprintf(w, "FIELD\tRESOURCE NAME\tVALUE\n")
+		fmt.Fprint(w, "FIELD\tRESOURCE NAME\tVALUE\n")
 	} else {
-		fmt.Fprintf(w, "FIELD\tVALUE\n")
+		fmt.Fprint(w, "FIELD\tVALUE\n")
 	}

 	for i, o := range *objs {
@@ -479,7 +479,7 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
 	switch output {
 	case "tree=detailed":
-		fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\tAGE\tHEALTH\tREASON\n")
+		fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\tAGE\tHEALTH\tREASON\n")

 		if !orphaned || listAll {
 			mapUIDToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)
@@ -491,7 +491,7 @@ func printResources(listAll bool, orphaned bool, appResourceTree *v1alpha1.Appli
 			printDetailedTreeViewAppResourcesOrphaned(mapUIDToNode, mapParentToChild, parentNode, w)
 		}
 	case "tree":
-		fmt.Fprintf(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\n")
+		fmt.Fprint(w, "GROUP\tKIND\tNAMESPACE\tNAME\tORPHANED\n")

 		if !orphaned || listAll {
 			mapUIDToNode, mapParentToChild, parentNode := parentChildInfo(appResourceTree.Nodes)
@@ -40,6 +40,10 @@ var appSetExample = templates.Examples(`
 	# Delete an ApplicationSet
 	argocd appset delete APPSETNAME (APPSETNAME...)
+
+	# Namespace precedence for --appset-namespace (-N):
+	# - get/delete: if the argument is namespace/name, that namespace wins; -N is ignored.
+	# - create/generate: metadata.namespace in the YAML wins when set; -N applies only when the manifest omits namespace.
 	`)

 // NewAppSetCommand returns a new instance of an `argocd appset` command
@@ -64,8 +68,9 @@ func NewAppSetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 // NewApplicationSetGetCommand returns a new instance of an `argocd appset get` command
 func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	var (
-		output     string
-		showParams bool
+		output          string
+		showParams      bool
+		appSetNamespace string
 	)
 	command := &cobra.Command{
 		Use: "get APPSETNAME",
@@ -73,6 +78,13 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
 	Example: templates.Examples(`
 	# Get ApplicationSets
 	argocd appset get APPSETNAME
+
+	# Get ApplicationSet in a specific namespace using qualified name (namespace/name)
+	argocd appset get APPSET_NAMESPACE/APPSETNAME
+
+	# Get ApplicationSet in a specific namespace using --appset-namespace flag
+	argocd appset get --appset-namespace=APPSET_NAMESPACE APPSETNAME
+
 	`),
 	Run: func(c *cobra.Command, args []string) {
 		ctx := c.Context()
@@ -85,7 +97,7 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
 	conn, appIf := acdClient.NewApplicationSetClientOrDie()
 	defer utilio.Close(conn)

-	appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], "")
+	appSetName, appSetNs := argo.ParseFromQualifiedName(args[0], appSetNamespace)

 	appSet, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appSetName, AppsetNamespace: appSetNs})
 	errors.CheckError(err)
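The change above threads the `-N`/`--appset-namespace` value through as the fallback namespace for `argo.ParseFromQualifiedName`, so a qualified `namespace/name` argument still takes precedence. A hedged sketch of that precedence (the real helper lives in Argo CD's `util/argo` package and may differ in detail):

```go
package main

import (
	"fmt"
	"strings"
)

// parseFromQualifiedName sketches the precedence used by the call above:
// a "namespace/name" argument supplies the namespace; otherwise the
// fallback (e.g. the -N flag value, possibly empty) is used.
func parseFromQualifiedName(arg, fallbackNs string) (name, namespace string) {
	if i := strings.IndexByte(arg, '/'); i >= 0 {
		return arg[i+1:], arg[:i] // qualified name wins over the flag
	}
	return arg, fallbackNs
}

func main() {
	name, ns := parseFromQualifiedName("team-a/my-set", "argocd")
	fmt.Println(name, ns) // my-set team-a
}
```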
@@ -113,6 +125,7 @@ func NewApplicationSetGetCommand(clientOpts *argocdclient.ClientOptions) *cobra.
 	}
 	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
 	command.Flags().BoolVar(&showParams, "show-params", false, "Show ApplicationSet parameters and overrides")
+	command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Only get ApplicationSet from a namespace (ignored when qualified name is provided)")
 	return command
 }
@@ -121,6 +134,7 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
 	var (
 		output               string
 		upsert, dryRun, wait bool
+		appSetNamespace      string
 	)
 	command := &cobra.Command{
 		Use: "create",
@@ -129,6 +143,9 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
 	# Create ApplicationSets
 	argocd appset create <filename or URL> (<filename or URL>...)
+
+	# Create ApplicationSet in a specific namespace using the --appset-namespace flag
+	argocd appset create --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)

 	# Dry-run AppSet creation to see what applications would be managed
 	argocd appset create --dry-run <filename or URL> -o json | jq -r '.status.resources[].name'
 	`),
@@ -145,7 +162,7 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
 	errors.CheckError(err)

 	if len(appsets) == 0 {
-		fmt.Printf("No ApplicationSets found while parsing the input file")
+		fmt.Print("No ApplicationSets found while parsing the input file")
 		os.Exit(1)
 	}
@@ -157,6 +174,11 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
 	conn, appIf := argocdClient.NewApplicationSetClientOrDie()
 	defer utilio.Close(conn)

+	if appset.Namespace == "" && appSetNamespace != "" {
+		fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
+		appset.Namespace = appSetNamespace
+	}
+
 	// Get app before creating to see if it is being updated or no change
 	existing, err := appIf.Get(ctx, &applicationset.ApplicationSetGetQuery{Name: appset.Name, AppsetNamespace: appset.Namespace})
 	if grpc.UnwrapGRPCStatus(err).Code() != codes.NotFound {
@@ -218,18 +240,23 @@ func NewApplicationSetCreateCommand(clientOpts *argocdclient.ClientOptions) *cob
 	command.Flags().BoolVar(&dryRun, "dry-run", false, "Allows to evaluate the ApplicationSet template on the server to get a preview of the applications that would be created")
 	command.Flags().BoolVar(&wait, "wait", false, "Wait until the ApplicationSet's resources are up to date. Will block indefinitely if the ApplicationSet has errors")
 	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
+	command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be created in (ignored when provided YAML file has namespace set in metadata)")
 	return command
 }

 // NewApplicationSetGenerateCommand returns a new instance of an `argocd appset generate` command
 func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	var output string
+	var appSetNamespace string
 	command := &cobra.Command{
 		Use:   "generate",
 		Short: "Generate apps of ApplicationSet rendered templates",
 		Example: templates.Examples(`
 	# Generate apps of ApplicationSet rendered templates
 	argocd appset generate <filename or URL> (<filename or URL>...)
+
+	# Generate apps of ApplicationSet rendered templates in a specific namespace
+	argocd appset generate --appset-namespace=APPSET_NAMESPACE <filename or URL> (<filename or URL>...)
 	`),
 		Run: func(c *cobra.Command, args []string) {
 			ctx := c.Context()
@@ -244,7 +271,7 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
 	errors.CheckError(err)

 	if len(appsets) != 1 {
-		fmt.Printf("Input file must contain one ApplicationSet")
+		fmt.Print("Input file must contain one ApplicationSet")
 		os.Exit(1)
 	}
 	appset := appsets[0]
@@ -252,6 +279,11 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
 		errors.Fatal(errors.ErrorGeneric, fmt.Sprintf("Error generating apps for ApplicationSet %s. ApplicationSet does not have Name field set", appset))
 	}

+	if appset.Namespace == "" && appSetNamespace != "" {
+		fmt.Printf("ApplicationSet YAML file does not have namespace; using --appset-namespace=%q.\n", appSetNamespace)
+		appset.Namespace = appSetNamespace
+	}
+
 	conn, appIf := argocdClient.NewApplicationSetClientOrDie()
 	defer utilio.Close(conn)
@@ -286,6 +318,7 @@ func NewApplicationSetGenerateCommand(clientOpts *argocdclient.ClientOptions) *c
 		},
 	}
 	command.Flags().StringVarP(&output, "output", "o", "wide", "Output format. One of: json|yaml|wide")
+	command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace used for generating Applications (ignored when provided YAML file has namespace set in metadata)")
 	return command
 }
@@ -338,8 +371,9 @@ func NewApplicationSetListCommand(clientOpts *argocdclient.ClientOptions) *cobra
 // NewApplicationSetDeleteCommand returns a new instance of an `argocd appset delete` command
 func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 	var (
-		noPrompt bool
-		wait     bool
+		noPrompt        bool
+		wait            bool
+		appSetNamespace string
 	)
 	command := &cobra.Command{
 		Use: "delete",
@@ -347,6 +381,12 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
 	Example: templates.Examples(`
 	# Delete an applicationset
 	argocd appset delete APPSETNAME (APPSETNAME...)
+
+	# Delete ApplicationSet in a specific namespace using qualified name (namespace/name)
+	argocd appset delete APPSET_NAMESPACE/APPSETNAME
+
+	# Delete ApplicationSet in a specific namespace using --appset-namespace flag
+	argocd appset delete --appset-namespace=APPSET_NAMESPACE APPSETNAME
 	`),
 	Run: func(c *cobra.Command, args []string) {
 		ctx := c.Context()
@@ -375,7 +415,7 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
 	promptUtil := utils.NewPrompt(isTerminal && !noPrompt)

 	for _, appSetQualifiedName := range args {
-		appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, "")
+		appSetName, appSetNs := argo.ParseFromQualifiedName(appSetQualifiedName, appSetNamespace)

 		appsetDeleteReq := applicationset.ApplicationSetDeleteRequest{
 			Name: appSetName,
@@ -412,6 +452,7 @@ func NewApplicationSetDeleteCommand(clientOpts *argocdclient.ClientOptions) *cob
 	}
 	command.Flags().BoolVarP(&noPrompt, "yes", "y", false, "Turn off prompting to confirm cascaded deletion of Application resources")
 	command.Flags().BoolVar(&wait, "wait", false, "Wait until deletion of the applicationset(s) completes")
+	command.Flags().StringVarP(&appSetNamespace, "appset-namespace", "N", "", "Namespace where the ApplicationSet will be deleted from (ignored when qualified name is provided)")
 	return command
 }
@ -503,7 +544,7 @@ func printAppSetSummaryTable(appSet *arogappsetv1.ApplicationSet) {
|
|||
}
|
||||
|
||||
func printAppSetConditions(w io.Writer, appSet *arogappsetv1.ApplicationSet) {
|
||||
_, _ = fmt.Fprintf(w, "CONDITION\tSTATUS\tMESSAGE\tLAST TRANSITION\n")
|
||||
_, _ = fmt.Fprint(w, "CONDITION\tSTATUS\tMESSAGE\tLAST TRANSITION\n")
|
||||
for _, item := range appSet.Status.Conditions {
|
||||
_, _ = fmt.Fprintf(w, "%s\t%s\t%s\t%s\n", item.Type, item.Status, item.Message, item.LastTransitionTime)
|
||||
}
|
||||
|
|
|
|||
|
|
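The hunks above route deletion through `argo.ParseFromQualifiedName`, now passing the `--appset-namespace` flag value as the fallback instead of `""`. A minimal sketch of the resolution rule the command relies on, using a hypothetical `parseFromQualifiedName` helper (an illustrative re-implementation, not the real function from Argo CD's `util/argo` package): an explicit `namespace/name` argument always wins, and a bare name falls back to the flag-supplied default.

```go
package main

import (
	"fmt"
	"strings"
)

// parseFromQualifiedName mimics the CLI's namespace/name resolution:
// "ns/name" yields ("name", "ns"); a bare "name" falls back to the supplied
// default namespace. Hypothetical stand-in for argo.ParseFromQualifiedName.
func parseFromQualifiedName(appName, defaultNS string) (name string, namespace string) {
	t := strings.SplitN(appName, "/", 2)
	if len(t) == 2 {
		return t[1], t[0]
	}
	return t[0], defaultNS
}

func main() {
	name, ns := parseFromQualifiedName("argocd/my-appset", "fallback")
	fmt.Println(name, ns) // my-appset argocd

	name, ns = parseFromQualifiedName("my-appset", "fallback")
	fmt.Println(name, ns) // my-appset fallback
}
```

This is why the `--appset-namespace` flag is documented as ignored when a qualified name is provided: the qualified form never reaches the fallback branch.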
@@ -352,7 +352,7 @@ func NewCertListCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 // Print table of certificate info
 func printCertTable(certs []appsv1.RepositoryCertificate, sortOrder string) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
+	fmt.Fprint(w, "HOSTNAME\tTYPE\tSUBTYPE\tINFO\n")
 
 	switch sortOrder {
 	case "hostname", "":
@@ -377,15 +377,15 @@ func formatNamespaces(cluster argoappv1.Cluster) string {
 
 func printClusterDetails(clusters []argoappv1.Cluster) {
 	for _, cluster := range clusters {
-		fmt.Printf("Cluster information\n\n")
+		fmt.Print("Cluster information\n\n")
 		fmt.Printf("  Server URL: %s\n", cluster.Server)
 		fmt.Printf("  Server Name: %s\n", strWithDefault(cluster.Name, "-"))
 		fmt.Printf("  Server Version: %s\n", cluster.Info.ServerVersion)
 		fmt.Printf("  Namespaces: %s\n", formatNamespaces(cluster))
-		fmt.Printf("\nTLS configuration\n\n")
+		fmt.Print("\nTLS configuration\n\n")
 		fmt.Printf("  Client cert: %v\n", len(cluster.Config.CertData) != 0)
 		fmt.Printf("  Cert validation: %v\n", !cluster.Config.Insecure)
-		fmt.Printf("\nAuthentication\n\n")
+		fmt.Print("\nAuthentication\n\n")
 		fmt.Printf("  Basic authentication: %v\n", cluster.Config.Username != "")
 		fmt.Printf("  oAuth authentication: %v\n", cluster.Config.BearerToken != "")
 		fmt.Printf("  AWS authentication: %v\n", cluster.Config.AWSAuthConfig != nil)
 
@@ -468,7 +468,7 @@ argocd cluster rm cluster-name`,
 // Print table of cluster information
 func printClusterTable(clusters []argoappv1.Cluster) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	_, _ = fmt.Fprintf(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
+	_, _ = fmt.Fprint(w, "SERVER\tNAME\tVERSION\tSTATUS\tMESSAGE\tPROJECT\n")
 	for _, c := range clusters {
 		server := c.Server
 		if len(c.Namespaces) > 0 {
@@ -151,7 +151,7 @@ func NewGPGAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 			if len(resp.Skipped) > 0 {
 				fmt.Printf(", and %d key(s) were skipped because they exist already", len(resp.Skipped))
 			}
-			fmt.Printf(".\n")
+			fmt.Print(".\n")
 		},
 	}
 	command.Flags().StringVarP(&fromFile, "from", "f", "", "Path to the file that contains the GPG public key to import")
 
@@ -192,7 +192,7 @@ func NewGPGDeleteCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
 // Print table of certificate info
 func printKeyTable(keys []appsv1.GnuPGPublicKey) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "KEYID\tTYPE\tIDENTITY\n")
+	fmt.Fprint(w, "KEYID\tTYPE\tIDENTITY\n")
 
 	for _, k := range keys {
 		fmt.Fprintf(w, "%s\t%s\t%s\n", k.KeyID, strings.ToUpper(k.SubType), k.Owner)
@@ -274,7 +274,7 @@ func oauth2Login(
 		// flow where the id_token is contained in a URL fragment, making it inaccessible to be
 		// read from the request. This javascript will redirect the browser to send the
 		// fragments as query parameters so our callback handler can read and return token.
-		fmt.Fprintf(w, `<script>window.location.search = window.location.hash.substring(1)</script>`)
+		fmt.Fprint(w, `<script>window.location.search = window.location.hash.substring(1)</script>`)
 		return
 	}
 
@@ -351,7 +351,7 @@ func oauth2Login(
 	if errMsg != "" {
 		log.Fatal(errMsg)
 	}
-	fmt.Printf("Authentication successful\n")
+	fmt.Print("Authentication successful\n")
 	ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
 	defer cancel()
 	_ = srv.Shutdown(ctx)
 
@@ -375,7 +375,7 @@ func passwordLogin(ctx context.Context, acdClient argocdclient.Client, username,
 
 func ssoAuthFlow(url string, ssoLaunchBrowser bool) {
 	if ssoLaunchBrowser {
-		fmt.Printf("Opening system default browser for authentication\n")
+		fmt.Print("Opening system default browser for authentication\n")
 		err := open.Start(url)
 		errors.CheckError(err)
 	} else {
@@ -44,7 +44,7 @@ argocd logout cd.argoproj.io
 			localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
 			errutil.CheckError(err)
 			if localCfg == nil {
-				log.Fatalf("Nothing to logout from")
+				log.Fatal("Nothing to logout from")
 			}
 
 			promptUtil := utils.NewPrompt(clientOpts.PromptsEnabled)
@@ -493,7 +493,7 @@ func NewProjectAddSourceCommand(clientOpts *argocdclient.ClientOptions) *cobra.C
 
 			for _, item := range proj.Spec.SourceRepos {
 				if item == "*" {
-					fmt.Printf("Source repository '*' already allowed in project\n")
+					fmt.Print("Source repository '*' already allowed in project\n")
 					return
 				}
 				if git.SameURL(item, url) {
 
@@ -535,7 +535,7 @@ func NewProjectAddSourceNamespace(clientOpts *argocdclient.ClientOptions) *cobra
 
 			for _, item := range proj.Spec.SourceNamespaces {
 				if item == "*" || item == srcNamespace {
-					fmt.Printf("Source namespace '*' already allowed in project\n")
+					fmt.Print("Source namespace '*' already allowed in project\n")
 					return
 				}
 			}
 
@@ -868,7 +868,7 @@ func printProjectNames(projects []v1alpha1.AppProject) {
 // Print table of project info
 func printProjectTable(projects []v1alpha1.AppProject) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
+	fmt.Fprint(w, "NAME\tDESCRIPTION\tDESTINATIONS\tSOURCES\tCLUSTER-RESOURCE-WHITELIST\tNAMESPACE-RESOURCE-BLACKLIST\tSIGNATURE-KEYS\tORPHANED-RESOURCES\tDESTINATION-SERVICE-ACCOUNTS\n")
 	for _, p := range projects {
 		printProjectLine(w, &p)
 	}
@@ -421,7 +421,7 @@ fa9d3517-c52d-434c-9bff-215b38508842 2023-10-08T11:08:18+01:00 Never
 	}
 
 	writer := tabwriter.NewWriter(os.Stdout, 0, 0, 4, ' ', 0)
-	_, err = fmt.Fprintf(writer, "ID\tISSUED AT\tEXPIRES AT\n")
+	_, err = fmt.Fprint(writer, "ID\tISSUED AT\tEXPIRES AT\n")
 	errors.CheckError(err)
 
 	tokenRowFormat := "%s\t%v\t%v\n"
 
@@ -515,7 +515,7 @@ func printProjectRoleListName(roles []v1alpha1.ProjectRole) {
 // Print table of project roles
 func printProjectRoleListTable(roles []v1alpha1.ProjectRole) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-	fmt.Fprintf(w, "ROLE-NAME\tDESCRIPTION\n")
+	fmt.Fprint(w, "ROLE-NAME\tDESCRIPTION\n")
 	for _, role := range roles {
 		fmt.Fprintf(w, "%s\t%s\n", role.Name, role.Description)
 	}
@@ -603,9 +603,9 @@ ID ISSUED-AT EXPIRES-AT
 			printRoleFmtStr := "%-15s%s\n"
 			fmt.Printf(printRoleFmtStr, "Role Name:", roleName)
 			fmt.Printf(printRoleFmtStr, "Description:", role.Description)
-			fmt.Printf("Policies:\n")
+			fmt.Print("Policies:\n")
 			fmt.Printf("%s\n", proj.ProjectPoliciesString())
-			fmt.Printf("Groups:\n")
+			fmt.Print("Groups:\n")
 			// if the group exists in the role
 			// range over each group and print it
 			if v1alpha1.RoleGroupExists(role) {
 
@@ -615,9 +615,9 @@ ID ISSUED-AT EXPIRES-AT
 			} else {
 				fmt.Println("<none>")
 			}
-			fmt.Printf("JWT Tokens:\n")
+			fmt.Print("JWT Tokens:\n")
 			w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
-			fmt.Fprintf(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
+			fmt.Fprint(w, "ID\tISSUED-AT\tEXPIRES-AT\n")
 			for _, token := range proj.Status.JWTTokensByRole[roleName].Items {
 				expiresAt := "<none>"
 				if token.ExpiresAt > 0 {
@@ -42,6 +42,8 @@ argocd proj windows list <project-name>`,
 	}
 	roleCommand.AddCommand(NewProjectWindowsDisableManualSyncCommand(clientOpts))
 	roleCommand.AddCommand(NewProjectWindowsEnableManualSyncCommand(clientOpts))
+	roleCommand.AddCommand(NewProjectWindowsDisableSyncOverrunCommand(clientOpts))
+	roleCommand.AddCommand(NewProjectWindowsEnableSyncOverrunCommand(clientOpts))
 	roleCommand.AddCommand(NewProjectWindowsAddWindowCommand(clientOpts))
 	roleCommand.AddCommand(NewProjectWindowsDeleteCommand(clientOpts))
 	roleCommand.AddCommand(NewProjectWindowsListCommand(clientOpts))
 
@@ -49,18 +51,13 @@ argocd proj windows list <project-name>`,
 	return roleCommand
 }
 
-// NewProjectWindowsDisableManualSyncCommand returns a new instance of an `argocd proj windows disable-manual-sync` command
-func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
-	command := &cobra.Command{
-		Use:   "disable-manual-sync PROJECT ID",
-		Short: "Disable manual sync for a sync window",
-		Long:  "Disable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
-		Example: `
-#Disable manual sync for a sync window for the Project
-argocd proj windows disable-manual-sync PROJECT ID
-
-#Disabling manual sync for a windows set on the default project with Id 0
-argocd proj windows disable-manual-sync default 0`,
+// newProjectWindowsToggleCommand creates a command for toggling a boolean field on a sync window
+func newProjectWindowsToggleCommand(clientOpts *argocdclient.ClientOptions, use, short, long, example string, updateFn func(*v1alpha1.SyncWindow)) *cobra.Command {
+	return &cobra.Command{
+		Use:     use,
+		Short:   short,
+		Long:    long,
+		Example: example,
 		Run: func(c *cobra.Command, args []string) {
 			ctx := c.Context()
 
@@ -79,26 +76,51 @@ argocd proj windows disable-manual-sync default 0`,
 			proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
 			errors.CheckError(err)
 
+			found := false
 			for i, window := range proj.Spec.SyncWindows {
 				if id == i {
-					window.ManualSync = false
+					updateFn(window)
+					found = true
+					break
 				}
 			}
+			if !found {
+				errors.CheckError(fmt.Errorf("window with id '%d' not found", id))
+			}
 
 			_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
 			errors.CheckError(err)
 		},
 	}
-	return command
 }
 
+// NewProjectWindowsDisableManualSyncCommand returns a new instance of an `argocd proj windows disable-manual-sync` command
+func NewProjectWindowsDisableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
+	return newProjectWindowsToggleCommand(
+		clientOpts,
+		"disable-manual-sync PROJECT ID",
+		"Disable manual sync for a sync window",
+		"Disable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
+		`
+#Disable manual sync for a sync window for the Project
+argocd proj windows disable-manual-sync PROJECT ID
+
+#Disabling manual sync for a windows set on the default project with Id 0
+argocd proj windows disable-manual-sync default 0`,
+		func(window *v1alpha1.SyncWindow) {
+			window.ManualSync = false
+		},
+	)
+}
 
 // NewProjectWindowsEnableManualSyncCommand returns a new instance of an `argocd proj windows enable-manual-sync` command
 func NewProjectWindowsEnableManualSyncCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
-	command := &cobra.Command{
-		Use:   "enable-manual-sync PROJECT ID",
-		Short: "Enable manual sync for a sync window",
-		Long:  "Enable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
-		Example: `
+	return newProjectWindowsToggleCommand(
+		clientOpts,
+		"enable-manual-sync PROJECT ID",
+		"Enable manual sync for a sync window",
+		"Enable manual sync for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
+		`
 #Enabling manual sync for a general case
 argocd proj windows enable-manual-sync PROJECT ID
 
@@ -107,35 +129,48 @@ argocd proj windows enable-manual-sync default 2
 
 #Enabling manual sync with a custom message
 argocd proj windows enable-manual-sync my-app-project --message "Manual sync initiated by admin`,
-		Run: func(c *cobra.Command, args []string) {
-			ctx := c.Context()
-
-			if len(args) != 2 {
-				c.HelpFunc()(c, args)
-				os.Exit(1)
-			}
-
-			projName := args[0]
-			id, err := strconv.Atoi(args[1])
-			errors.CheckError(err)
-
-			conn, projIf := headless.NewClientOrDie(clientOpts, c).NewProjectClientOrDie()
-			defer utilio.Close(conn)
-
-			proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
-			errors.CheckError(err)
-
-			for i, window := range proj.Spec.SyncWindows {
-				if id == i {
-					window.ManualSync = true
-				}
-			}
-
-			_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
-			errors.CheckError(err)
+		func(window *v1alpha1.SyncWindow) {
+			window.ManualSync = true
 		},
-	}
-	return command
+	)
 }
 
+// NewProjectWindowsDisableSyncOverrunCommand returns a new instance of an `argocd proj windows disable-sync-overrun` command
+func NewProjectWindowsDisableSyncOverrunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
+	return newProjectWindowsToggleCommand(
+		clientOpts,
+		"disable-sync-overrun PROJECT ID",
+		"Disable sync overrun for a sync window",
+		"Disable sync overrun for a sync window. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
+		`
+#Disable sync overrun for a sync window for the Project
+argocd proj windows disable-sync-overrun PROJECT ID
+
+#Disabling sync overrun for a window set on the default project with Id 0
+argocd proj windows disable-sync-overrun default 0`,
+		func(window *v1alpha1.SyncWindow) {
+			window.SyncOverrun = false
+		},
+	)
+}
+
+// NewProjectWindowsEnableSyncOverrunCommand returns a new instance of an `argocd proj windows enable-sync-overrun` command
+func NewProjectWindowsEnableSyncOverrunCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
+	return newProjectWindowsToggleCommand(
+		clientOpts,
+		"enable-sync-overrun PROJECT ID",
+		"Enable sync overrun for a sync window",
+		"Enable sync overrun for a sync window. When enabled on a deny window, syncs that started before the deny window will be allowed to continue. When enabled on an allow window, syncs that started during the allow window can continue after the window ends. Requires ID which can be found by running \"argocd proj windows list PROJECT\"",
+		`
+#Enable sync overrun for a sync window
+argocd proj windows enable-sync-overrun PROJECT ID
+
+#Enabling sync overrun for a window set on the default project with Id 2
+argocd proj windows enable-sync-overrun default 2`,
+		func(window *v1alpha1.SyncWindow) {
+			window.SyncOverrun = true
+		},
+	)
+}
 
 // NewProjectWindowsAddWindowCommand returns a new instance of an `argocd proj windows add` command
 
@@ -148,6 +183,7 @@ func NewProjectWindowsAddWindowCommand(clientOpts *argocdclient.ClientOptions) *
 		namespaces  []string
 		clusters    []string
 		manualSync  bool
+		syncOverrun bool
 		timeZone    string
 		andOperator bool
 		description string
 
@@ -164,7 +200,7 @@ argocd proj windows add PROJECT \
 --applications "*" \
 --description "Ticket 123"
 
-#Add a deny sync window with the ability to manually sync.
+#Add a deny sync window with the ability to manually sync and sync overrun.
 argocd proj windows add PROJECT \
 --kind deny \
 --schedule "30 10 * * *" \
 
@@ -173,8 +209,8 @@ argocd proj windows add PROJECT \
 --namespaces "default,\\*-prod" \
 --clusters "prod,staging" \
 --manual-sync \
---description "Ticket 123"
-`,
+--sync-overrun \
+--description "Ticket 123"`,
 		Run: func(c *cobra.Command, args []string) {
 			ctx := c.Context()
 
@@ -189,7 +225,7 @@ argocd proj windows add PROJECT \
 			proj, err := projIf.Get(ctx, &projectpkg.ProjectQuery{Name: projName})
 			errors.CheckError(err)
 
-			err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone, andOperator, description)
+			err = proj.Spec.AddWindow(kind, schedule, duration, applications, namespaces, clusters, manualSync, timeZone, andOperator, description, syncOverrun)
 			errors.CheckError(err)
 
 			_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
 
@@ -203,6 +239,7 @@ argocd proj windows add PROJECT \
 	command.Flags().StringSliceVar(&namespaces, "namespaces", []string{}, "Namespaces that the schedule will be applied to. Comma separated, wildcards supported (e.g. --namespaces default,\\*-prod)")
 	command.Flags().StringSliceVar(&clusters, "clusters", []string{}, "Clusters that the schedule will be applied to. Comma separated, wildcards supported (e.g. --clusters prod,staging)")
 	command.Flags().BoolVar(&manualSync, "manual-sync", false, "Allow manual syncs for both deny and allow windows")
+	command.Flags().BoolVar(&syncOverrun, "sync-overrun", false, "Allow syncs to continue: for deny windows, syncs that started before the window; for allow windows, syncs that started during the window")
 	command.Flags().StringVar(&timeZone, "time-zone", "UTC", "Time zone of the sync window")
 	command.Flags().BoolVar(&andOperator, "use-and-operator", false, "Use AND operator for matching applications, namespaces and clusters instead of the default OR operator")
 	command.Flags().StringVar(&description, "description", "", `Sync window description`)
 
@@ -248,7 +285,7 @@ argocd proj windows delete new-project 1`,
 				_, err = projIf.Update(ctx, &projectpkg.ProjectUpdateRequest{Project: proj})
 				errors.CheckError(err)
 			} else {
-				fmt.Printf("The command to delete the sync window was cancelled\n")
+				fmt.Print("The command to delete the sync window was cancelled\n")
 			}
 		},
 	}
 
@@ -362,7 +399,7 @@ argocd proj windows list test-project`,
 func printSyncWindows(proj *v1alpha1.AppProject) {
 	w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
 	var fmtStr string
-	headers := []any{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "TIMEZONE", "USEANDOPERATOR"}
+	headers := []any{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"}
 	fmtStr = strings.Repeat("%s\t", len(headers)) + "\n"
 	fmt.Fprintf(w, fmtStr, headers...)
 	if proj.Spec.SyncWindows.HasWindows() {
 
@@ -378,6 +415,7 @@ func printSyncWindows(proj *v1alpha1.AppProject) {
 			formatListOutput(window.Namespaces),
 			formatListOutput(window.Clusters),
 			formatBoolEnabledOutput(window.ManualSync),
+			formatBoolEnabledOutput(window.SyncOverrun),
 			window.TimeZone,
 			formatBoolEnabledOutput(window.UseAndOperator),
 		}
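The `projectwindows.go` refactor above collapses four nearly identical cobra commands into one `newProjectWindowsToggleCommand` factory parameterized by an `updateFn` closure, and adds a `found` guard so an out-of-range window ID now fails instead of silently updating nothing. The shape of that refactor, reduced to plain Go with hypothetical `SyncWindow`/`toggleWindow` names (no cobra or gRPC plumbing):

```go
package main

import "fmt"

// SyncWindow is a minimal stand-in for v1alpha1.SyncWindow.
type SyncWindow struct {
	ManualSync  bool
	SyncOverrun bool
}

// toggleWindow applies updateFn to the window at index id, returning an
// error when the id does not exist -- the same "found" guard the refactor
// introduces instead of a silent no-op.
func toggleWindow(windows []*SyncWindow, id int, updateFn func(*SyncWindow)) error {
	for i, w := range windows {
		if i == id {
			updateFn(w)
			return nil
		}
	}
	return fmt.Errorf("window with id '%d' not found", id)
}

func main() {
	windows := []*SyncWindow{{}, {}}

	// One closure per command flavor, as in the factored cobra commands.
	enableManualSync := func(w *SyncWindow) { w.ManualSync = true }
	enableSyncOverrun := func(w *SyncWindow) { w.SyncOverrun = true }

	if err := toggleWindow(windows, 1, enableManualSync); err != nil {
		panic(err)
	}
	_ = toggleWindow(windows, 0, enableSyncOverrun)

	err := toggleWindow(windows, 5, enableManualSync)
	fmt.Println(windows[1].ManualSync, windows[0].SyncOverrun, err != nil) // true true true
}
```

Each exported constructor then shrinks to one factory call carrying only the strings and the closure that differ, which is why the diff removes the duplicated `Run` bodies wholesale.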
@@ -1,6 +1,11 @@
 package commands
 
 import (
+	"bytes"
+	"io"
+	"os"
+	"regexp"
+	"strings"
 	"testing"
 
 	"github.com/stretchr/testify/assert"
 
@@ -11,30 +16,229 @@ import (
 )
 
 func TestPrintSyncWindows(t *testing.T) {
-	proj := &v1alpha1.AppProject{
-		ObjectMeta: metav1.ObjectMeta{Name: "test-project"},
-		Spec: v1alpha1.AppProjectSpec{
-			SyncWindows: v1alpha1.SyncWindows{
-				{
-					Kind:           "allow",
-					Schedule:       "* * * * *",
-					Duration:       "1h",
-					Applications:   []string{"app1"},
-					Namespaces:     []string{"ns1"},
-					Clusters:       []string{"cluster1"},
-					ManualSync:     true,
-					UseAndOperator: true,
+	tests := []struct {
+		name           string
+		project        *v1alpha1.AppProject
+		expectedHeader []string
+		expectedRows   [][]string
+	}{
+		{
+			name: "Project with multiple sync windows including syncOverrun",
+			project: &v1alpha1.AppProject{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "test-project",
+				},
+				Spec: v1alpha1.AppProjectSpec{
+					SyncWindows: v1alpha1.SyncWindows{
+						{
+							Kind:           "allow",
+							Schedule:       "0 0 * * *",
+							Duration:       "1h",
+							Applications:   []string{"app1", "app2"},
+							Namespaces:     []string{"default"},
+							Clusters:       []string{"cluster1"},
+							ManualSync:     false,
+							SyncOverrun:    false,
+							TimeZone:       "UTC",
+							UseAndOperator: false,
+						},
+						{
+							Kind:           "deny",
+							Schedule:       "0 12 * * *",
+							Duration:       "2h",
+							Applications:   []string{"*"},
+							Namespaces:     []string{"production"},
+							Clusters:       []string{"*"},
+							ManualSync:     true,
+							SyncOverrun:    true,
+							TimeZone:       "America/New_York",
+							UseAndOperator: true,
+						},
+					},
+				},
+			},
+			expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
+			expectedRows: [][]string{
+				{"0", "Inactive", "allow", "0 0 * * *", "1h", "app1,app2", "default", "cluster1", "Disabled", "Disabled", "UTC", "Disabled"},
+				{"1", "Inactive", "deny", "0 12 * * *", "2h", "*", "production", "*", "Enabled", "Enabled", "America/New_York", "Enabled"},
+			},
+		},
+		{
+			name: "Project with empty sync window lists",
+			project: &v1alpha1.AppProject{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "test-project",
+				},
+				Spec: v1alpha1.AppProjectSpec{
+					SyncWindows: v1alpha1.SyncWindows{
+						{
+							Kind:           "allow",
+							Schedule:       "0 1 * * *",
+							Duration:       "30m",
+							Applications:   []string{},
+							Namespaces:     []string{},
+							Clusters:       []string{},
+							ManualSync:     false,
+							SyncOverrun:    false,
+							TimeZone:       "UTC",
+							UseAndOperator: false,
+						},
+					},
+				},
+			},
+			expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
+			expectedRows: [][]string{
+				{"0", "Inactive", "allow", "0 1 * * *", "30m", "-", "-", "-", "Disabled", "Disabled", "UTC", "Disabled"},
+			},
+		},
+		{
+			name: "Project with no sync windows",
+			project: &v1alpha1.AppProject{
+				ObjectMeta: metav1.ObjectMeta{
+					Name: "test-project",
+				},
+				Spec: v1alpha1.AppProjectSpec{
+					SyncWindows: v1alpha1.SyncWindows{},
+				},
+			},
+			expectedHeader: []string{"ID", "STATUS", "KIND", "SCHEDULE", "DURATION", "APPLICATIONS", "NAMESPACES", "CLUSTERS", "MANUALSYNC", "SYNCOVERRUN", "TIMEZONE", "USEANDOPERATOR"},
+			expectedRows:   [][]string{},
+		},
+	}
 
-	output, err := captureOutput(func() error {
-		printSyncWindows(proj)
-		return nil
-	})
-	require.NoError(t, err)
-	t.Log(output)
-	assert.Contains(t, output, "ID STATUS KIND SCHEDULE DURATION APPLICATIONS NAMESPACES CLUSTERS MANUALSYNC TIMEZONE USEANDOPERATOR")
-	assert.Contains(t, output, "0 Active allow * * * * * 1h app1 ns1 cluster1 Enabled Enabled")
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			// Capture stdout
+			oldStdout := os.Stdout
+			r, w, _ := os.Pipe()
+			os.Stdout = w
+
+			// Call the function
+			printSyncWindows(tt.project)
+
+			// Restore stdout
+			w.Close()
+			os.Stdout = oldStdout
+
+			// Read captured output
+			var buf bytes.Buffer
+			_, err := io.Copy(&buf, r)
+			require.NoError(t, err)
+			output := buf.String()
+
+			// Parse the table output
+			lines := strings.Split(strings.TrimSpace(output), "\n")
+			assert.GreaterOrEqual(t, len(lines), 1, "Should have at least a header line")
+
+			// Parse header line (split by whitespace for headers since they don't contain spaces)
+			headerLine := lines[0]
+			headerFields := strings.Fields(headerLine)
+			assert.Len(t, headerFields, len(tt.expectedHeader), "Header should have correct number of columns")
+			assert.Equal(t, tt.expectedHeader, headerFields, "Header columns should match expected")
+
+			// Parse data rows
+			dataLines := lines[1:]
+			assert.Len(t, dataLines, len(tt.expectedRows), "Should have expected number of data rows")
+
+			for i, dataLine := range dataLines {
+				// Split by 2 or more spaces (tabwriter output uses multiple spaces as separators)
+				re := regexp.MustCompile(`\s{2,}`)
+				fields := re.Split(strings.TrimSpace(dataLine), -1)
+
+				assert.Len(t, fields, len(tt.expectedRows[i]), "Row %d should have correct number of columns", i)
+
+				for j, expectedValue := range tt.expectedRows[i] {
+					assert.Equal(t, expectedValue, fields[j], "Row %d, column %d should match expected value", i, j)
+				}
+			}
+		})
+	}
 }
 
+func TestFormatListOutput(t *testing.T) {
+	tests := []struct {
+		name     string
+		input    []string
+		expected string
+	}{
+		{
+			name:     "Empty list",
+			input:    []string{},
+			expected: "-",
+		},
+		{
+			name:     "Single item",
+			input:    []string{"app1"},
+			expected: "app1",
+		},
+		{
+			name:     "Multiple items",
+			input:    []string{"app1", "app2", "app3"},
+			expected: "app1,app2,app3",
+		},
+		{
+			name:     "Wildcard",
+			input:    []string{"*"},
+			expected: "*",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := formatListOutput(tt.input)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
+}
+
+func TestFormatBoolOutput(t *testing.T) {
+	tests := []struct {
+		name     string
+		input    bool
+		expected string
+	}{
+		{
+			name:     "Active",
+			input:    true,
+			expected: "Active",
+		},
+		{
+			name:     "Inactive",
+			input:    false,
+			expected: "Inactive",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := formatBoolOutput(tt.input)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
+}
+
+func TestFormatBoolEnabledOutput(t *testing.T) {
+	tests := []struct {
+		name     string
+		input    bool
+		expected string
+	}{
+		{
+			name:     "Enabled",
+			input:    true,
+			expected: "Enabled",
+		},
+		{
+			name:     "Disabled",
+			input:    false,
+			expected: "Disabled",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := formatBoolEnabledOutput(tt.input)
+			assert.Equal(t, tt.expected, result)
		})
+	}
+}
@@ -40,7 +40,7 @@ func NewReloginCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
 			localCfg, err := localconfig.ReadLocalConfig(clientOpts.ConfigPath)
 			errors.CheckError(err)
 			if localCfg == nil {
-				log.Fatalf("No context found. Login using `argocd login`")
+				log.Fatal("No context found. Login using `argocd login`")
 			}
 			configCtx, err := localCfg.ResolveContext(localCfg.CurrentContext)
 			errors.CheckError(err)
@ -102,6 +102,12 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
|||
|
||||
# Add a private Git repository on Google Cloud Sources via GCP service account credentials
|
||||
argocd repo add https://source.developers.google.com/p/my-google-cloud-project/r/my-repo --gcp-service-account-key-path service-account-key.json
|
||||
|
||||
# Add a private Git repository on Azure Devops via Azure Service Principal credentials
|
||||
argocd repo add https://dev.azure.com/my-devops-organization/my-devops-project/_git/my-devops-repo --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012
|
||||
|
||||
# Add a private Git repository on Azure Devops via Azure Service Principal credentials when not using default Azure public cloud
|
||||
argocd repo add https://dev.azure.com/my-devops-organization/my-devops-project/_git/my-devops-repo --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012 --azure-active-directory-endpoint https://login.microsoftonline.de
|
||||
`
|
||||
|
||||
command := &cobra.Command{
|
||||
|
|
@ -191,7 +197,12 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
|
|||
repoOpts.Repo.NoProxy = repoOpts.NoProxy
|
||||
repoOpts.Repo.ForceHttpBasicAuth = repoOpts.ForceHttpBasicAuth
|
||||
repoOpts.Repo.UseAzureWorkloadIdentity = repoOpts.UseAzureWorkloadIdentity
|
||||
repoOpts.Repo.AzureServicePrincipalTenantId = repoOpts.AzureServicePrincipalTenantId
|
||||
repoOpts.Repo.AzureServicePrincipalClientId = repoOpts.AzureServicePrincipalClientId
|
||||
repoOpts.Repo.AzureServicePrincipalClientSecret = repoOpts.AzureServicePrincipalClientSecret
|
||||
repoOpts.Repo.AzureActiveDirectoryEndpoint = repoOpts.AzureActiveDirectoryEndpoint
|
||||
repoOpts.Repo.Depth = repoOpts.Depth
|
||||
repoOpts.Repo.WebhookManifestCacheWarmDisabled = repoOpts.WebhookManifestCacheWarmDisabled
|
||||
|
||||
if repoOpts.Repo.Type == "helm" && repoOpts.Repo.Name == "" {
|
||||
errors.Fatal(errors.ErrorGeneric, "Must specify --name for repos of type 'helm'")
|
||||
|
|
@@ -225,27 +236,31 @@ func NewRepoAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command {
// are high that we do not have the given URL pointing to a valid Git
// repo anyway.
repoAccessReq := repositorypkg.RepoAccessQuery{
Repo: repoOpts.Repo.Repo,
Type: repoOpts.Repo.Type,
Name: repoOpts.Repo.Name,
Username: repoOpts.Repo.Username,
Password: repoOpts.Repo.Password,
BearerToken: repoOpts.Repo.BearerToken,
SshPrivateKey: repoOpts.Repo.SSHPrivateKey,
TlsClientCertData: repoOpts.Repo.TLSClientCertData,
TlsClientCertKey: repoOpts.Repo.TLSClientCertKey,
Insecure: repoOpts.Repo.IsInsecure(),
EnableOci: repoOpts.Repo.EnableOCI,
GithubAppPrivateKey: repoOpts.Repo.GithubAppPrivateKey,
GithubAppID: repoOpts.Repo.GithubAppId,
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
Proxy: repoOpts.Proxy,
Project: repoOpts.Repo.Project,
GcpServiceAccountKey: repoOpts.Repo.GCPServiceAccountKey,
ForceHttpBasicAuth: repoOpts.Repo.ForceHttpBasicAuth,
UseAzureWorkloadIdentity: repoOpts.Repo.UseAzureWorkloadIdentity,
InsecureOciForceHttp: repoOpts.Repo.InsecureOCIForceHttp,
Repo: repoOpts.Repo.Repo,
Type: repoOpts.Repo.Type,
Name: repoOpts.Repo.Name,
Username: repoOpts.Repo.Username,
Password: repoOpts.Repo.Password,
BearerToken: repoOpts.Repo.BearerToken,
SshPrivateKey: repoOpts.Repo.SSHPrivateKey,
TlsClientCertData: repoOpts.Repo.TLSClientCertData,
TlsClientCertKey: repoOpts.Repo.TLSClientCertKey,
Insecure: repoOpts.Repo.IsInsecure(),
EnableOci: repoOpts.Repo.EnableOCI,
GithubAppPrivateKey: repoOpts.Repo.GithubAppPrivateKey,
GithubAppID: repoOpts.Repo.GithubAppId,
GithubAppInstallationID: repoOpts.Repo.GithubAppInstallationId,
GithubAppEnterpriseBaseUrl: repoOpts.Repo.GitHubAppEnterpriseBaseURL,
Proxy: repoOpts.Proxy,
Project: repoOpts.Repo.Project,
GcpServiceAccountKey: repoOpts.Repo.GCPServiceAccountKey,
ForceHttpBasicAuth: repoOpts.Repo.ForceHttpBasicAuth,
UseAzureWorkloadIdentity: repoOpts.Repo.UseAzureWorkloadIdentity,
InsecureOciForceHttp: repoOpts.Repo.InsecureOCIForceHttp,
AzureServicePrincipalTenantId: repoOpts.Repo.AzureServicePrincipalTenantId,
AzureServicePrincipalClientId: repoOpts.Repo.AzureServicePrincipalClientId,
AzureServicePrincipalClientSecret: repoOpts.Repo.AzureServicePrincipalClientSecret,
AzureActiveDirectoryEndpoint: repoOpts.Repo.AzureActiveDirectoryEndpoint,
}
_, err = repoIf.ValidateAccess(ctx, &repoAccessReq)
errors.CheckError(err)
@@ -314,7 +329,7 @@ func NewRepoRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Command
// Print table of repo info
func printRepoTable(repos appsv1.Repositories) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
_, _ = fmt.Fprintf(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
_, _ = fmt.Fprint(w, "TYPE\tNAME\tREPO\tINSECURE\tOCI\tLFS\tCREDS\tSTATUS\tMESSAGE\tPROJECT\n")
for _, r := range repos {
var hasCreds string
if r.InheritedCreds {
@@ -83,6 +83,12 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma

# Add credentials with GCP credentials for all repositories under https://source.developers.google.com/p/my-google-cloud-project/r/
argocd repocreds add https://source.developers.google.com/p/my-google-cloud-project/r/ --gcp-service-account-key-path service-account-key.json

# Add credentials with Azure Service Principal to use for all repositories under https://dev.azure.com/my-devops-organization
argocd repocreds add https://dev.azure.com/my-devops-organization --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012

# Add credentials with Azure Service Principal to use for all repositories under https://dev.azure.com/my-devops-organization when not using default Azure public cloud
argocd repocreds add https://dev.azure.com/my-devops-organization --azure-service-principal-client-id 12345678-1234-1234-1234-123456789012 --azure-service-principal-client-secret test --azure-service-principal-tenant-id 12345678-1234-1234-1234-123456789012 --azure-active-directory-endpoint https://login.microsoftonline.de
`

command := &cobra.Command{
@@ -201,6 +207,10 @@ func NewRepoCredsAddCommand(clientOpts *argocdclient.ClientOptions) *cobra.Comma
command.Flags().BoolVar(&repo.ForceHttpBasicAuth, "force-http-basic-auth", false, "whether to force basic auth when connecting via HTTP")
command.Flags().BoolVar(&repo.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().StringVar(&repo.Proxy, "proxy-url", "", "If provided, this URL will be used to connect via proxy")
command.Flags().StringVar(&repo.AzureServicePrincipalClientId, "azure-service-principal-client-id", "", "client id of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureServicePrincipalClientSecret, "azure-service-principal-client-secret", "", "client secret of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureServicePrincipalTenantId, "azure-service-principal-tenant-id", "", "tenant id of the Azure Service Principal")
command.Flags().StringVar(&repo.AzureActiveDirectoryEndpoint, "azure-active-directory-endpoint", "", "Active Directory endpoint when not using default Azure public cloud (e.g. https://login.microsoftonline.de)")
return command
}

@@ -243,7 +253,7 @@ func NewRepoCredsRemoveCommand(clientOpts *argocdclient.ClientOptions) *cobra.Co
// Print the repository credentials as table
func printRepoCredsTable(repos []appsv1.RepoCreds) {
w := tabwriter.NewWriter(os.Stdout, 0, 0, 2, ' ', 0)
fmt.Fprintf(w, "URL PATTERN\tUSERNAME\tSSH_CREDS\tTLS_CREDS\n")
fmt.Fprint(w, "URL PATTERN\tUSERNAME\tSSH_CREDS\tTLS_CREDS\n")
for _, r := range repos {
if r.Username == "" {
r.Username = "-"
@@ -541,7 +541,7 @@ func SetParameterOverrides(app *argoappv1.Application, parameters []string, inde
source.Helm.AddParameter(*newParam)
}
default:
log.Fatalf("Parameters can only be set against Helm applications")
log.Fatal("Parameters can only be set against Helm applications")
}
}

@@ -35,7 +35,7 @@ func TestReadAppSet(t *testing.T) {
var appSets []*argoprojiov1alpha1.ApplicationSet
err := readAppset([]byte(appSet), &appSets)
if err != nil {
t.Logf("Failed reading appset file")
t.Log("Failed reading appset file")
}
assert.Len(t, appSets, 1)
}
@@ -8,26 +8,31 @@ import (
)

type RepoOptions struct {
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
Repo appsv1.Repository
Upsert bool
SshPrivateKeyPath string //nolint:revive //FIXME(var-naming)
InsecureOCIForceHTTP bool
InsecureIgnoreHostKey bool
InsecureSkipServerVerification bool
TlsClientCertPath string //nolint:revive //FIXME(var-naming)
TlsClientCertKeyPath string //nolint:revive //FIXME(var-naming)
EnableLfs bool
EnableOci bool
GithubAppId int64
GithubAppInstallationId int64
GithubAppPrivateKeyPath string
GitHubAppEnterpriseBaseURL string
Proxy string
NoProxy string
GCPServiceAccountKeyPath string
ForceHttpBasicAuth bool //nolint:revive //FIXME(var-naming)
UseAzureWorkloadIdentity bool
Depth int64
WebhookManifestCacheWarmDisabled bool
AzureServicePrincipalTenantId string
AzureServicePrincipalClientId string
AzureServicePrincipalClientSecret string
AzureActiveDirectoryEndpoint string
}

func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
@@ -55,4 +60,9 @@ func AddRepoFlags(command *cobra.Command, opts *RepoOptions) {
command.Flags().BoolVar(&opts.UseAzureWorkloadIdentity, "use-azure-workload-identity", false, "whether to use azure workload identity for authentication")
command.Flags().BoolVar(&opts.InsecureOCIForceHTTP, "insecure-oci-force-http", false, "Use http when accessing an OCI repository")
command.Flags().Int64Var(&opts.Depth, "depth", 0, "Specify a custom depth for git clone operations. Unless specified, a full clone is performed using the depth of 0")
command.Flags().BoolVar(&opts.WebhookManifestCacheWarmDisabled, "webhook-manifest-cache-warm-disabled", false, "disable manifest cache warming during webhook processing for this repository (recommended for large monorepos with plain YAML manifests)")
command.Flags().StringVar(&opts.AzureServicePrincipalTenantId, "azure-service-principal-tenant-id", "", "tenant id of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureServicePrincipalClientId, "azure-service-principal-client-id", "", "client id of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureServicePrincipalClientSecret, "azure-service-principal-client-secret", "", "client secret of the Azure Service Principal")
command.Flags().StringVar(&opts.AzureActiveDirectoryEndpoint, "azure-active-directory-endpoint", "", "Active Directory endpoint when not using default Azure public cloud (e.g. https://login.microsoftonline.de)")
}

@@ -10,7 +10,7 @@ import (

"github.com/Masterminds/sprig/v3"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"go.yaml.in/yaml/v3"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"

"github.com/argoproj/argo-cd/v3/commitserver/apiclient"
@@ -102,9 +102,6 @@ func WriteForPaths(root *os.Root, repoUrl, drySha string, dryCommitMetadata *app
}
}
// if no manifest changes then skip commit
if !atleastOneManifestChanged {
return false, nil
}
return atleastOneManifestChanged, nil
}

@@ -140,11 +137,13 @@ func writeReadme(root *os.Root, dirPath string, metadata hydrator.HydratorCommit
if err != nil && !os.IsExist(err) {
return fmt.Errorf("failed to create README file: %w", err)
}
defer func() {
err := readmeFile.Close()
if err != nil {
log.WithError(err).Error("failed to close README file")
}
}()
err = readmeTemplate.Execute(readmeFile, metadata)
closeErr := readmeFile.Close()
if closeErr != nil {
log.WithError(closeErr).Error("failed to close README file")
}
if err != nil {
return fmt.Errorf("failed to execute readme template: %w", err)
}
@@ -137,6 +137,9 @@ const (
ChangePasswordSSOTokenMaxAge = time.Minute * 5
// GithubAppCredsExpirationDuration is the default time used to cache the GitHub app credentials
GithubAppCredsExpirationDuration = time.Minute * 60
// AzureServicePrincipalCredsExpirationDuration is the default time used to cache the Azure service principal credentials
// SP tokens are valid for 60 minutes, so cache for 59 minutes to avoid issues with token expiration when taking the cleanup interval of 1 minute into account
AzureServicePrincipalCredsExpirationDuration = time.Minute * 59

// PasswordPatten is the default password patten
PasswordPatten = `^.{8,32}$`
@@ -297,6 +300,8 @@ const (
EnvEnableGRPCTimeHistogramEnv = "ARGOCD_ENABLE_GRPC_TIME_HISTOGRAM"
// EnvGithubAppCredsExpirationDuration controls the caching of Github app credentials. This value is in minutes (default: 60)
EnvGithubAppCredsExpirationDuration = "ARGOCD_GITHUB_APP_CREDS_EXPIRATION_DURATION"
// EnvAzureServicePrincipalCredsExpirationDuration controls the caching of Azure service principal credentials. This value is in minutes (default: 59). Any value greater than 59 will be set to 59 minutes
EnvAzureServicePrincipalCredsExpirationDuration = "ARGOCD_AZURE_SERVICE_PRINCIPAL_CREDS_EXPIRATION_DURATION"
// EnvHelmIndexCacheDuration controls how the helm repository index file is cached for (default: 0)
EnvHelmIndexCacheDuration = "ARGOCD_HELM_INDEX_CACHE_DURATION"
// EnvAppConfigPath allows to override the configuration path for repo server
@@ -7,7 +7,6 @@ import (
"fmt"
"maps"
"math"
"math/rand"
"net/http"
"reflect"
"runtime/debug"
@@ -27,6 +26,7 @@ import (
log "github.com/sirupsen/logrus"
"golang.org/x/sync/semaphore"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
@@ -125,7 +125,6 @@ type ApplicationController struct {
stateCache statecache.LiveStateCache
statusRefreshTimeout time.Duration
statusHardRefreshTimeout time.Duration
statusRefreshJitter time.Duration
selfHealTimeout time.Duration
selfHealBackoff *wait.Backoff
syncTimeout time.Duration
@@ -202,7 +201,6 @@ func NewApplicationController(
db: db,
statusRefreshTimeout: appResyncPeriod,
statusHardRefreshTimeout: appHardResyncPeriod,
statusRefreshJitter: appResyncJitter,
refreshRequestedApps: make(map[string]CompareWith),
refreshRequestedAppsMutex: &sync.Mutex{},
auditLogger: argo.NewAuditLogger(kubeClientset, namespace, common.CommandApplicationController, enableK8sEvent),
@@ -1016,17 +1014,54 @@ func (ctrl *ApplicationController) processAppOperationQueueItem() (processNext b
log.WithField("appkey", appKey).WithError(err).Error("Failed to get application from informer index")
return processNext
}

var app *appv1.Application
var logCtx *log.Entry

if !exists {
// This happens after app was deleted, but the work queue still had an entry for it.
return processNext
parts := strings.Split(appKey, "/")
if len(parts) != 2 {
log.WithField("appkey", appKey).Warn("Unexpected appKey format, expected namespace/name")
return processNext
}
appNamespace, appName := parts[0], parts[1]
freshApp, apiErr := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(appNamespace).Get(context.Background(), appName, metav1.GetOptions{})
if apiErr != nil {
if apierrors.IsNotFound(apiErr) {
return processNext
}
log.WithField("appkey", appKey).WithError(apiErr).Error("Failed to retrieve application from API server")
return processNext
}
if freshApp.Operation == nil {
return processNext
}
app = freshApp
logCtx = log.WithFields(applog.GetAppLogFields(app))
} else {
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
app = origApp.DeepCopy()
logCtx = log.WithFields(applog.GetAppLogFields(app))

if app.Operation != nil {
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
if err != nil {
if !apierrors.IsNotFound(err) {
logCtx.WithError(err).Error("Failed to retrieve latest application state")
}
return processNext
}
if freshApp.Operation == nil {
return processNext
}
app = freshApp
}
}
origApp, ok := obj.(*appv1.Application)
if !ok {
log.WithField("appkey", appKey).Warn("Key in index is not an application")
return processNext
}
app := origApp.DeepCopy()
logCtx := log.WithFields(applog.GetAppLogFields(app))

ts := stats.NewTimingStats()
defer func() {
for k, v := range ts.Timings() {
|
|||
logCtx = logCtx.WithField("time_ms", time.Since(ts.StartTime).Milliseconds())
|
||||
logCtx.Debug("Finished processing app operation queue item")
|
||||
}()
|
||||
|
||||
if app.Operation != nil {
|
||||
// If we get here, we are about to process an operation, but we cannot rely on informer since it might have stale data.
|
||||
// So always retrieve the latest version to ensure it is not stale to avoid unnecessary syncing.
|
||||
// We cannot rely on informer since applications might be updated by both application controller and api server.
|
||||
freshApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.ObjectMeta.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
|
||||
if err != nil {
|
||||
logCtx.WithError(err).Error("Failed to retrieve latest application state")
|
||||
return processNext
|
||||
}
|
||||
app = freshApp
|
||||
}
|
||||
ts.AddCheckpoint("get_fresh_app_ms")
|
||||
|
||||
if app.Operation != nil {
|
||||
|
|
@@ -1773,7 +1796,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}

patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
return processNext
}
logCtx.Warnf("Failed to get cached managed resources for tree reconciliation, fall back to full reconciliation")
@@ -1787,7 +1810,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
if hasErrors {
app.Status.Sync.Status = appv1.SyncStatusCodeUnknown
app.Status.Health.Status = health.HealthStatusUnknown
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)

if err := ctrl.cache.SetAppResourcesTree(app.InstanceName(ctrl.namespace), &appv1.ApplicationTree{}); err != nil {
logCtx.WithError(err).Warn("failed to set app resource tree")
@@ -1851,7 +1874,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
logCtx = logCtx.WithField(k, v.Milliseconds())
}

ctrl.normalizeApplication(origApp, app)
ctrl.normalizeApplication(app)
ts.AddCheckpoint("normalize_application_ms")

tree, err := ctrl.setAppManagedResources(destCluster, app, compareResult)
@@ -1862,7 +1885,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
app.Status.Summary = tree.GetSummary(app)
}

canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false)
canSync, _ := project.Spec.SyncWindows.Matches(app).CanSync(false, nil)
if canSync {
syncErrCond, opDuration := ctrl.autoSync(app, compareResult.syncStatus, compareResult.resources, compareResult.revisionsMayHaveChanges)
setOpDuration = opDuration
@@ -1928,7 +1951,7 @@ func (ctrl *ApplicationController) processAppRefreshQueueItem() (processNext boo
}
}
ts.AddCheckpoint("process_finalizers_ms")
patchDuration = ctrl.persistAppStatus(origApp, &app.Status)
patchDuration = ctrl.persistReconciliationStatus(origApp, &app.Status)
// This is a partly a duplicate of patch_ms, but more descriptive and allows to have measurement for the next step.
ts.AddCheckpoint("persist_app_status_ms")
return processNext
@@ -2090,7 +2113,8 @@ func (ctrl *ApplicationController) refreshAppConditions(app *appv1.Application)
}

// normalizeApplication normalizes an application.spec and additionally persists updates if it changed
func (ctrl *ApplicationController) normalizeApplication(orig, app *appv1.Application) {
func (ctrl *ApplicationController) normalizeApplication(app *appv1.Application) {
orig := app.DeepCopy()
app.Spec = *argo.NormalizeApplicationSpec(&app.Spec)
logCtx := log.WithFields(applog.GetAppLogFields(app))

@@ -2124,8 +2148,17 @@ func createMergePatch(orig, newV any) ([]byte, bool, error) {
return patch, string(patch) != "{}", nil
}

// persistAppStatus persists updates to application status. If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) (patchDuration time.Duration) {
// persistReconciliationStatus persists updates to application status and consumes the refresh annotation.
func (ctrl *ApplicationController) persistReconciliationStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus) time.Duration {
newAnnotations := make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
return ctrl.persistAppStatus(orig, newStatus, newAnnotations)
}

// persistAppStatus persists updates to application status and optionally updates annotations.
// If no changes were made, it is a no-op
func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, newStatus *appv1.ApplicationStatus, newAnnotations map[string]string) (patchDuration time.Duration) {
logCtx := log.WithFields(applog.GetAppLogFields(orig))
if orig.Status.Sync.Status != newStatus.Sync.Status {
message := fmt.Sprintf("Updated sync status: %s -> %s", orig.Status.Sync.Status, newStatus.Sync.Status)
@@ -2143,13 +2176,6 @@ func (ctrl *ApplicationController) persistAppStatus(orig *appv1.Application, new
// make sure the last transition time is the same and populated if the health is the same
newStatus.Health.LastTransitionTime = orig.Status.Health.LastTransitionTime
}
var newAnnotations map[string]string
if orig.GetAnnotations() != nil {
newAnnotations = make(map[string]string)
maps.Copy(newAnnotations, orig.GetAnnotations())
delete(newAnnotations, appv1.AnnotationKeyRefresh)
delete(newAnnotations, appv1.AnnotationKeyHydrate)
}
patch, modified, err := createMergePatch(
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: orig.GetAnnotations()}, Status: orig.Status},
&appv1.Application{ObjectMeta: metav1.ObjectMeta{Annotations: newAnnotations}, Status: *newStatus})
@@ -2319,7 +2345,7 @@ func (ctrl *ApplicationController) autoSync(app *appv1.Application, syncStatus *
ctrl.writeBackToInformer(updatedApp)
ts.AddCheckpoint("write_back_to_informer_ms")

message := fmt.Sprintf("Initiated automated sync to %s", desiredRevisions)
message := fmt.Sprintf("Initiated automated sync to '%s'", strings.Join(desiredRevisions, ", "))
ctrl.logAppEvent(context.TODO(), app, argo.EventInfo{Reason: argo.EventReasonOperationStarted, Type: corev1.EventTypeNormal}, message)
logCtx.Info(message)
return nil, setOpTime
@@ -2438,6 +2464,29 @@ func (ctrl *ApplicationController) canProcessApp(obj any) bool {
return ctrl.clusterSharding.IsManagedCluster(destCluster)
}

func operationChanged(oldApp, newApp *appv1.Application) bool {
return (oldApp.Operation == nil && newApp.Operation != nil) ||
(oldApp.Operation != nil && newApp.Operation != nil && !equality.Semantic.DeepEqual(oldApp.Operation, newApp.Operation))
}

func deletionTimestampChanged(oldApp, newApp *appv1.Application) bool {
return (oldApp.DeletionTimestamp == nil && newApp.DeletionTimestamp != nil) ||
(oldApp.DeletionTimestamp != nil && newApp.DeletionTimestamp != nil && !oldApp.DeletionTimestamp.Equal(newApp.DeletionTimestamp))
}

func isStatusOnlyUpdate(oldApp, newApp *appv1.Application) bool {
if !equality.Semantic.DeepEqual(oldApp.Spec, newApp.Spec) {
return false
}
if operationChanged(oldApp, newApp) {
return false
}
if deletionTimestampChanged(oldApp, newApp) || newApp.DeletionTimestamp != nil {
return false
}
return true
}
func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.SharedIndexInformer, applisters.ApplicationLister) {
watchNamespace := ctrl.namespace
// If we have at least one additional namespace configured, we need to
@@ -2530,34 +2579,59 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
}
},
UpdateFunc: func(old, new any) {
if !ctrl.canProcessApp(new) {
return
}

key, err := cache.MetaNamespaceKeyFunc(new)
if err != nil {
return
}

oldApp, oldOK := old.(*appv1.Application)
newApp, newOK := new.(*appv1.Application)

if !ctrl.canProcessApp(new) {
return
}

if newOK && newApp.Operation != nil {
ctrl.appOperationQueue.AddRateLimited(key)
}

var compareWith *CompareWith
var delay *time.Duration

oldApp, oldOK := old.(*appv1.Application)
newApp, newOK := new.(*appv1.Application)
if oldOK && newOK {
if oldApp.ResourceVersion == newApp.ResourceVersion {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}

if isStatusOnlyUpdate(oldApp, newApp) {
oldAnnotations := oldApp.GetAnnotations()
newAnnotations := newApp.GetAnnotations()
refreshAdded := (oldAnnotations == nil || oldAnnotations[appv1.AnnotationKeyRefresh] == "") &&
(newAnnotations != nil && newAnnotations[appv1.AnnotationKeyRefresh] != "")
hydrateAdded := (oldAnnotations == nil || oldAnnotations[appv1.AnnotationKeyHydrate] == "") &&
(newAnnotations != nil && newAnnotations[appv1.AnnotationKeyHydrate] != "")

if !refreshAdded && !hydrateAdded {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
}
if automatedSyncEnabled(oldApp, newApp) {
log.WithFields(applog.GetAppLogFields(newApp)).Info("Enabled automated sync")
compareWith = CompareWithLatest.Pointer()
}
if ctrl.statusRefreshJitter != 0 && oldApp.ResourceVersion == newApp.ResourceVersion {
// Handler is refreshing the apps, add a random jitter to spread the load and avoid spikes
jitter := time.Duration(float64(ctrl.statusRefreshJitter) * rand.Float64())
delay = &jitter
}
}

ctrl.requestAppRefresh(newApp.QualifiedName(), compareWith, delay)
if !newOK || (delay != nil && *delay != time.Duration(0)) {
if !newOK {
ctrl.appOperationQueue.AddRateLimited(key)
}
if ctrl.hydrator != nil {
@@ -2570,7 +2644,7 @@ func (ctrl *ApplicationController) newApplicationInformerAndLister() (cache.Shar
return
}
// IndexerInformer uses a delta queue, therefore for deletes we have to use this
// key function.
// Key function.
key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err == nil {
// for deletes, we immediately add to the refresh queue
@@ -2689,7 +2763,7 @@ func (ctrl *ApplicationController) applyImpersonationConfig(config *rest.Config,
if !impersonationEnabled {
return nil
}
user, err := deriveServiceAccountToImpersonate(proj, app, destCluster)
user, err := settings_util.DeriveServiceAccountToImpersonate(proj, app, destCluster)
if err != nil {
return fmt.Errorf("error deriving service account to impersonate: %w", err)
}
@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"maps"
"strconv"
"testing"
"time"
@@ -14,6 +15,7 @@ import (
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube/kubetest"
"github.com/sirupsen/logrus"
"github.com/stretchr/testify/require"
"k8s.io/apimachinery/pkg/api/equality"
"k8s.io/apimachinery/pkg/api/resource"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/util/wait"
@@ -662,8 +664,7 @@ func TestAutoSync(t *testing.T) {

func TestAutoSyncEnabledSetToTrue(t *testing.T) {
app := newFakeApp()
enable := true
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(true)}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@@ -789,8 +790,7 @@ func TestSkipAutoSync(t *testing.T) {
// Verify we skip when auto-sync is disabled
t.Run("AutoSyncEnableFieldIsSetFalse", func(t *testing.T) {
app := newFakeApp()
enable := false
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: &enable}
app.Spec.SyncPolicy.Automated = &v1alpha1.SyncPolicyAutomated{Enabled: new(false)}
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
syncStatus := v1alpha1.SyncStatus{
Status: v1alpha1.SyncStatusCodeOutOfSync,
@@ -1993,6 +1993,252 @@ func TestUnchangedManagedNamespaceMetadata(t *testing.T) {
assert.Equal(t, CompareWithLatest, compareWith)
}

func TestApplicationInformerUpdateFunc(t *testing.T) {
// Test that UpdateFunc correctly handles:
// 1. Status-only updates (no annotation) - should NOT trigger refresh
// 2. Status-only updates WITH refresh annotation - should trigger refresh
// 3. Spec changes - should trigger refresh
// 4. Informer resync (same ResourceVersion) - should NOT trigger refresh

app := newFakeApp()
app.Spec.Destination.Namespace = test.FakeArgoCDNamespace
app.Spec.Destination.Server = v1alpha1.KubernetesInternalAPIServerAddr
proj := defaultProj.DeepCopy()
proj.Spec.SourceNamespaces = []string{test.FakeArgoCDNamespace}

ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app, proj}}, nil)

simulateUpdateFunc := func(oldApp, newApp *v1alpha1.Application) {
if !ctrl.canProcessApp(newApp) {
return
}

key, err := cache.MetaNamespaceKeyFunc(newApp)
if err != nil {
return
}

var compareWith *CompareWith
var delay *time.Duration

oldOK := oldApp != nil
newOK := newApp != nil
if oldOK && newOK {
if oldApp.ResourceVersion == newApp.ResourceVersion {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}

// Check if operation was added or changed - always process operations
operationChanged := (oldApp.Operation == nil && newApp.Operation != nil) ||
(oldApp.Operation != nil && newApp.Operation != nil && !equality.Semantic.DeepEqual(oldApp.Operation, newApp.Operation))

deletionTimestampChanged := (oldApp.DeletionTimestamp == nil && newApp.DeletionTimestamp != nil) ||
(oldApp.DeletionTimestamp != nil && newApp.DeletionTimestamp != nil && !oldApp.DeletionTimestamp.Equal(newApp.DeletionTimestamp))
appBeingDeleted := newApp.DeletionTimestamp != nil

if equality.Semantic.DeepEqual(oldApp.Spec, newApp.Spec) && !operationChanged && !deletionTimestampChanged && !appBeingDeleted {
oldAnnotations := oldApp.GetAnnotations()
newAnnotations := newApp.GetAnnotations()
refreshAdded := (oldAnnotations == nil || oldAnnotations[v1alpha1.AnnotationKeyRefresh] == "") &&
(newAnnotations != nil && newAnnotations[v1alpha1.AnnotationKeyRefresh] != "")
hydrateAdded := (oldAnnotations == nil || oldAnnotations[v1alpha1.AnnotationKeyHydrate] == "") &&
(newAnnotations != nil && newAnnotations[v1alpha1.AnnotationKeyHydrate] != "")

if !refreshAdded && !hydrateAdded {
if ctrl.hydrator != nil {
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
}
ctrl.clusterSharding.UpdateApp(newApp)
return
}
}

if automatedSyncEnabled(oldApp, newApp) {
compareWith = CompareWithLatest.Pointer()
}
if compareWith == nil {
compareWith = CompareWithRecent.Pointer()
}
}

ctrl.requestAppRefresh(newApp.QualifiedName(), compareWith, delay)
if !newOK {
ctrl.appOperationQueue.AddRateLimited(key
|
||||
}
|
||||
if ctrl.hydrator != nil {
|
||||
ctrl.appHydrateQueue.AddRateLimited(newApp.QualifiedName())
|
||||
}
|
||||
ctrl.clusterSharding.UpdateApp(newApp)
|
||||
}
|
||||
|
||||
checkRefreshRequested := func(appName string, shouldBeRequested bool, msg string) {
|
||||
key := ctrl.toAppKey(appName)
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
_, isRequested := ctrl.refreshRequestedApps[key]
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
assert.Equal(t, shouldBeRequested, isRequested, "%s: Refresh request state mismatch for app %s (key: %s)", msg, appName, key)
|
||||
}
|
||||
|
||||
t.Run("Status-only update without annotation should NOT trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "1"
|
||||
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "2"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), false, "Status-only update without annotation")
|
||||
})
|
||||
|
||||
t.Run("Status-only update WITH refresh annotation SHOULD trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "3"
|
||||
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "4"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
if newApp.Annotations == nil {
|
||||
newApp.Annotations = make(map[string]string)
|
||||
}
|
||||
newApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH refresh annotation")
|
||||
})
|
||||
|
||||
t.Run("Status-only update WITH hydrate annotation SHOULD trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "5"
|
||||
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "6"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
if newApp.Annotations == nil {
|
||||
newApp.Annotations = make(map[string]string)
|
||||
}
|
||||
newApp.Annotations[v1alpha1.AnnotationKeyHydrate] = "true"
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH hydrate annotation")
|
||||
})
|
||||
|
||||
t.Run("Status-only update WITH both refresh and hydrate annotations SHOULD trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "7"
|
||||
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "8"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
if newApp.Annotations == nil {
|
||||
newApp.Annotations = make(map[string]string)
|
||||
}
|
||||
newApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
|
||||
newApp.Annotations[v1alpha1.AnnotationKeyHydrate] = "true"
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), true, "Status-only update WITH both refresh and hydrate annotations")
|
||||
})
|
||||
|
||||
t.Run("Status-only update with annotation REMOVAL should NOT trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "9"
|
||||
oldApp.Status.ReconciledAt = &metav1.Time{Time: time.Now().Add(-1 * time.Hour)}
|
||||
if oldApp.Annotations == nil {
|
||||
oldApp.Annotations = make(map[string]string)
|
||||
}
|
||||
oldApp.Annotations[v1alpha1.AnnotationKeyRefresh] = string(v1alpha1.RefreshTypeNormal)
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "10"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
delete(newApp.Annotations, v1alpha1.AnnotationKeyRefresh)
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), false, "Status-only update with annotation REMOVAL")
|
||||
})
|
||||
|
||||
t.Run("Spec change SHOULD trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "11"
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "12"
|
||||
newApp.Spec.Destination.Namespace = "different-namespace"
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), true, "Spec change")
|
||||
})
|
||||
|
||||
t.Run("Informer resync (same ResourceVersion) should NOT trigger refresh", func(_ *testing.T) {
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "13"
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "13"
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
checkRefreshRequested(app.QualifiedName(), false, "Informer resync")
|
||||
})
|
||||
|
||||
t.Run("DeletionTimestamp added SHOULD trigger refresh", func(_ *testing.T) {
|
||||
// Reset refresh state
|
||||
ctrl.refreshRequestedAppsMutex.Lock()
|
||||
ctrl.refreshRequestedApps = make(map[string]CompareWith)
|
||||
ctrl.refreshRequestedAppsMutex.Unlock()
|
||||
|
||||
oldApp := app.DeepCopy()
|
||||
oldApp.ResourceVersion = "14"
|
||||
oldApp.DeletionTimestamp = nil
|
||||
|
||||
newApp := oldApp.DeepCopy()
|
||||
newApp.ResourceVersion = "15"
|
||||
newApp.DeletionTimestamp = &metav1.Time{Time: time.Now()}
|
||||
newApp.Status.ReconciledAt = &metav1.Time{Time: time.Now()}
|
||||
|
||||
simulateUpdateFunc(oldApp, newApp)
|
||||
|
||||
checkRefreshRequested(app.QualifiedName(), true, "DeletionTimestamp added")
|
||||
})
|
||||
}
|
||||
|
||||
func TestRefreshAppConditions(t *testing.T) {
|
||||
defaultProj := v1alpha1.AppProject{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
|
|
@@ -3359,3 +3605,82 @@ func TestSelfHealRemainingBackoff(t *testing.T) {
|
|||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestPersistAppStatus_AnnotationManagement(t *testing.T) {
|
||||
t.Run("persistReconciliationStatus deletes only refresh annotation", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Annotations = map[string]string{
|
||||
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
|
||||
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
|
||||
"other-annotation": "other-value",
|
||||
}
|
||||
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
|
||||
app.Status.Health.Status = health.HealthStatusHealthy
|
||||
|
||||
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
|
||||
|
||||
origApp := app.DeepCopy()
|
||||
newStatus := app.Status.DeepCopy()
|
||||
|
||||
ctrl.persistReconciliationStatus(origApp, newStatus)
|
||||
|
||||
// Verify the patch was created correctly
|
||||
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Refresh annotation should be deleted
|
||||
_, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
|
||||
assert.False(t, hasRefresh, "refresh annotation should be deleted")
|
||||
|
||||
// Hydrate annotation should still exist
|
||||
hydrateValue, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
|
||||
assert.True(t, hasHydrate, "hydrate annotation should still exist")
|
||||
assert.Equal(t, string(v1alpha1.HydrateTypeNormal), hydrateValue)
|
||||
|
||||
// Other annotations should be preserved
|
||||
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
|
||||
assert.True(t, hasOther, "other annotations should be preserved")
|
||||
assert.Equal(t, "other-value", otherValue)
|
||||
})
|
||||
|
||||
t.Run("persistAppStatus with explicit annotations", func(t *testing.T) {
|
||||
app := newFakeApp()
|
||||
app.Annotations = map[string]string{
|
||||
v1alpha1.AnnotationKeyRefresh: string(v1alpha1.RefreshTypeNormal),
|
||||
v1alpha1.AnnotationKeyHydrate: string(v1alpha1.HydrateTypeNormal),
|
||||
"other-annotation": "other-value",
|
||||
}
|
||||
app.Status.Sync.Status = v1alpha1.SyncStatusCodeSynced
|
||||
app.Status.Health.Status = health.HealthStatusHealthy
|
||||
|
||||
ctrl := newFakeController(t.Context(), &fakeData{apps: []runtime.Object{app}}, nil)
|
||||
|
||||
origApp := app.DeepCopy()
|
||||
newStatus := app.Status.DeepCopy()
|
||||
|
||||
// Create annotations that delete hydrate but keep refresh
|
||||
newAnnotations := make(map[string]string)
|
||||
maps.Copy(newAnnotations, origApp.Annotations)
|
||||
delete(newAnnotations, v1alpha1.AnnotationKeyHydrate)
|
||||
|
||||
ctrl.persistAppStatus(origApp, newStatus, newAnnotations)
|
||||
|
||||
// Verify the patch was created correctly
|
||||
patchedApp, err := ctrl.applicationClientset.ArgoprojV1alpha1().Applications(app.Namespace).Get(context.Background(), app.Name, metav1.GetOptions{})
|
||||
require.NoError(t, err)
|
||||
|
||||
// Hydrate annotation should be deleted
|
||||
_, hasHydrate := patchedApp.Annotations[v1alpha1.AnnotationKeyHydrate]
|
||||
assert.False(t, hasHydrate, "hydrate annotation should be deleted")
|
||||
|
||||
// Refresh annotation should still exist
|
||||
refreshValue, hasRefresh := patchedApp.Annotations[v1alpha1.AnnotationKeyRefresh]
|
||||
assert.True(t, hasRefresh, "refresh annotation should still exist")
|
||||
assert.Equal(t, string(v1alpha1.RefreshTypeNormal), refreshValue)
|
||||
|
||||
// Other annotations should be preserved
|
||||
otherValue, hasOther := patchedApp.Annotations["other-annotation"]
|
||||
assert.True(t, hasOther, "other annotations should be preserved")
|
||||
assert.Equal(t, "other-value", otherValue)
|
||||
})
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -132,11 +132,11 @@ func (c *clusterInfoUpdater) getUpdatedClusterInfo(ctx context.Context, apps []*
|
|||
continue
|
||||
}
|
||||
}
|
||||
-destCluster, err := argo.GetDestinationCluster(ctx, a.Spec.Destination, c.db)
+destServer, err := argo.GetDestinationServer(ctx, a.Spec.Destination, c.db)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
-if destCluster.Server == cluster.Server {
+if destServer == cluster.Server {
|
||||
appCount++
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -101,6 +101,121 @@ func TestClusterSecretUpdater(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestGetUpdatedClusterInfo_AppCount(t *testing.T) {
|
||||
const fakeNamespace = "fake-ns"
|
||||
const clusterServer = "https://prod.example.com"
|
||||
const clusterName = "prod"
|
||||
|
||||
emptyArgoCDConfigMap := &corev1.ConfigMap{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: common.ArgoCDConfigMapName,
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
|
||||
},
|
||||
Data: map[string]string{},
|
||||
}
|
||||
argoCDSecret := &corev1.Secret{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: common.ArgoCDSecretName,
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
|
||||
},
|
||||
Data: map[string][]byte{"admin.password": nil, "server.secretkey": nil},
|
||||
}
|
||||
clusterSecret := &corev1.Secret{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: "prod-cluster",
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeCluster},
|
||||
Annotations: map[string]string{
|
||||
common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD,
|
||||
},
|
||||
},
|
||||
Data: map[string][]byte{
|
||||
"name": []byte(clusterName),
|
||||
"server": []byte(clusterServer),
|
||||
"config": []byte("{}"),
|
||||
},
|
||||
}
|
||||
|
||||
kubeclientset := fake.NewClientset(emptyArgoCDConfigMap, argoCDSecret, clusterSecret)
|
||||
settingsManager := settings.NewSettingsManager(t.Context(), kubeclientset, fakeNamespace)
|
||||
argoDB := db.NewDB(fakeNamespace, settingsManager, kubeclientset)
|
||||
|
||||
apps := []*v1alpha1.Application{
|
||||
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Name: clusterName}}},
|
||||
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Server: clusterServer}}},
|
||||
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Server: "https://other.example.com"}}},
|
||||
}
|
||||
|
||||
updater := &clusterInfoUpdater{db: argoDB, namespace: fakeNamespace}
|
||||
cluster := v1alpha1.Cluster{Server: clusterServer}
|
||||
|
||||
info := updater.getUpdatedClusterInfo(t.Context(), apps, cluster, nil, metav1.Now())
|
||||
|
||||
assert.Equal(t, int64(2), info.ApplicationsCount)
|
||||
}
|
||||
|
||||
func TestGetUpdatedClusterInfo_AmbiguousName(t *testing.T) {
|
||||
const fakeNamespace = "fake-ns"
|
||||
const clusterServer = "https://prod.example.com"
|
||||
const clusterName = "prod"
|
||||
|
||||
emptyArgoCDConfigMap := &corev1.ConfigMap{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: common.ArgoCDConfigMapName,
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
|
||||
},
|
||||
Data: map[string]string{},
|
||||
}
|
||||
argoCDSecret := &corev1.Secret{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: common.ArgoCDSecretName,
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{"app.kubernetes.io/part-of": "argocd"},
|
||||
},
|
||||
Data: map[string][]byte{"admin.password": nil, "server.secretkey": nil},
|
||||
}
|
||||
makeClusterSecret := func(secretName, server string) *corev1.Secret {
|
||||
return &corev1.Secret{
|
||||
ObjectMeta: metav1.ObjectMeta{
|
||||
Name: secretName,
|
||||
Namespace: fakeNamespace,
|
||||
Labels: map[string]string{common.LabelKeySecretType: common.LabelValueSecretTypeCluster},
|
||||
Annotations: map[string]string{
|
||||
common.AnnotationKeyManagedBy: common.AnnotationValueManagedByArgoCD,
|
||||
},
|
||||
},
|
||||
Data: map[string][]byte{
|
||||
"name": []byte(clusterName),
|
||||
"server": []byte(server),
|
||||
"config": []byte("{}"),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// Two secrets share the same cluster name
|
||||
kubeclientset := fake.NewClientset(
|
||||
emptyArgoCDConfigMap, argoCDSecret,
|
||||
makeClusterSecret("prod-cluster-1", clusterServer),
|
||||
makeClusterSecret("prod-cluster-2", "https://prod2.example.com"),
|
||||
)
|
||||
settingsManager := settings.NewSettingsManager(t.Context(), kubeclientset, fakeNamespace)
|
||||
argoDB := db.NewDB(fakeNamespace, settingsManager, kubeclientset)
|
||||
|
||||
apps := []*v1alpha1.Application{
|
||||
{Spec: v1alpha1.ApplicationSpec{Destination: v1alpha1.ApplicationDestination{Name: clusterName}}},
|
||||
}
|
||||
|
||||
updater := &clusterInfoUpdater{db: argoDB, namespace: fakeNamespace}
|
||||
cluster := v1alpha1.Cluster{Server: clusterServer}
|
||||
|
||||
info := updater.getUpdatedClusterInfo(t.Context(), apps, cluster, nil, metav1.Now())
|
||||
|
||||
assert.Equal(t, int64(0), info.ApplicationsCount, "ambiguous name should not count app")
|
||||
}
|
||||
|
||||
func TestUpdateClusterLabels(t *testing.T) {
|
||||
shouldNotBeInvoked := func(_ context.Context, _ *v1alpha1.Cluster) (*v1alpha1.Cluster, error) {
|
||||
shouldNotHappen := errors.New("if an error happens here, something's wrong")
|
||||
|
|
|
|||
|
|
@@ -11,6 +11,7 @@ import (
|
|||
"github.com/argoproj/argo-cd/gitops-engine/pkg/sync/hook"
|
||||
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
|
||||
log "github.com/sirupsen/logrus"
|
||||
apierrors "k8s.io/apimachinery/pkg/api/errors"
|
||||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/client-go/rest"
|
||||
|
|
@@ -76,6 +77,21 @@ func isPostDeleteHook(obj *unstructured.Unstructured) bool {
|
|||
return isHookOfType(obj, PostDeleteHookType)
|
||||
}
|
||||
|
||||
// hasGitOpsEngineSyncPhaseHook reports whether gitops-engine would run the resource during a sync
|
||||
// phase (PreSync, Sync, PostSync, SyncFail). PreDelete/PostDelete are not sync phases;
|
||||
// without this check, state reconciliation drops such resources
|
||||
// entirely because isPreDeleteHook/isPostDeleteHook match any comma-separated value.
|
||||
// HookTypeSkip is omitted as it is not a sync phase.
|
||||
func hasGitOpsEngineSyncPhaseHook(obj *unstructured.Unstructured) bool {
|
||||
for _, t := range hook.Types(obj) {
|
||||
switch t {
|
||||
case common.HookTypePreSync, common.HookTypeSync, common.HookTypePostSync, common.HookTypeSyncFail:
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// executeHooks is a generic function to execute hooks of a specified type
|
||||
func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Application, proj *appv1.AppProject, liveObjs map[kube.ResourceKey]*unstructured.Unstructured, config *rest.Config, logCtx *log.Entry) (bool, error) {
|
||||
appLabelKey, err := ctrl.settingsMgr.GetAppInstanceLabelKey()
|
||||
|
|
@@ -88,6 +104,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
revisions = append(revisions, src.TargetRevision)
|
||||
}
|
||||
|
||||
// Fetch target objects from Git to know which hooks should exist
|
||||
targets, _, _, err := ctrl.appStateManager.GetRepoObjs(context.Background(), app, app.Spec.GetSources(), appLabelKey, revisions, false, false, false, proj, true)
|
||||
if err != nil {
|
||||
return false, err
|
||||
|
|
@@ -110,14 +127,14 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
if !isHookOfType(obj, hookType) {
|
||||
continue
|
||||
}
|
||||
-if runningHook := runningHooks[kube.GetResourceKey(obj)]; runningHook == nil {
+if _, alreadyExists := runningHooks[kube.GetResourceKey(obj)]; !alreadyExists {
|
||||
expectedHook[kube.GetResourceKey(obj)] = obj
|
||||
}
|
||||
}
|
||||
|
||||
// Create hooks that don't exist yet
|
||||
createdCnt := 0
|
||||
-for _, obj := range expectedHook {
+for key, obj := range expectedHook {
|
||||
// Add app instance label so the hook can be tracked and cleaned up
|
||||
labels := obj.GetLabels()
|
||||
if labels == nil {
|
||||
|
|
@@ -126,8 +143,13 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
labels[appLabelKey] = app.InstanceName(ctrl.namespace)
|
||||
obj.SetLabels(labels)
|
||||
|
||||
logCtx.Infof("Creating %s hook resource: %s", hookType, key)
|
||||
_, err = ctrl.kubectl.CreateResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), obj, metav1.CreateOptions{})
|
||||
if err != nil {
|
||||
if apierrors.IsAlreadyExists(err) {
|
||||
logCtx.Warnf("Hook resource %s already exists, skipping", key)
|
||||
continue
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
createdCnt++
|
||||
|
|
@@ -148,7 +170,8 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
progressingHooksCount := 0
|
||||
var failedHooks []string
|
||||
var failedHookObjects []*unstructured.Unstructured
|
||||
-for _, obj := range runningHooks {
+for key, obj := range runningHooks {
|
||||
hookHealth, err := health.GetResourceHealth(obj, healthOverrides)
|
||||
if err != nil {
|
||||
return false, err
|
||||
|
|
@@ -165,12 +188,17 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
Status: health.HealthStatusHealthy,
|
||||
}
|
||||
}
|
||||
|
||||
switch hookHealth.Status {
|
||||
case health.HealthStatusProgressing:
|
||||
logCtx.Debugf("Hook %s is progressing", key)
|
||||
progressingHooksCount++
|
||||
case health.HealthStatusDegraded:
|
||||
logCtx.Warnf("Hook %s is degraded: %s", key, hookHealth.Message)
|
||||
failedHooks = append(failedHooks, fmt.Sprintf("%s/%s", obj.GetNamespace(), obj.GetName()))
|
||||
failedHookObjects = append(failedHookObjects, obj)
|
||||
case health.HealthStatusHealthy:
|
||||
logCtx.Debugf("Hook %s is healthy", key)
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@@ -179,7 +207,7 @@ func (ctrl *ApplicationController) executeHooks(hookType HookType, app *appv1.Ap
|
|||
logCtx.Infof("Deleting %d failed %s hook(s) to allow retry", len(failedHookObjects), hookType)
|
||||
for _, obj := range failedHookObjects {
|
||||
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
|
||||
-if err != nil {
+if err != nil && !apierrors.IsNotFound(err) {
|
||||
logCtx.WithError(err).Warnf("Failed to delete failed hook %s/%s", obj.GetNamespace(), obj.GetName())
|
||||
}
|
||||
}
|
||||
|
|
@@ -226,6 +254,10 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
|
|||
hooks = append(hooks, obj)
|
||||
}
|
||||
|
||||
if len(hooks) == 0 {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// Process hooks for deletion
|
||||
for _, obj := range hooks {
|
||||
deletePolicies := hook.DeletePolicies(obj)
|
||||
|
|
@@ -252,7 +284,7 @@ func (ctrl *ApplicationController) cleanupHooks(hookType HookType, liveObjs map[
|
|||
}
|
||||
logCtx.Infof("Deleting %s hook %s/%s", hookType, obj.GetNamespace(), obj.GetName())
|
||||
err = ctrl.kubectl.DeleteResource(context.Background(), config, obj.GroupVersionKind(), obj.GetName(), obj.GetNamespace(), metav1.DeleteOptions{})
|
||||
-if err != nil {
+if err != nil && !apierrors.IsNotFound(err) {
|
||||
return false, err
|
||||
}
|
||||
}
|
||||
|
|
|
|||
|
|
@@ -3,8 +3,10 @@ package controller
|
|||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/argoproj/argo-cd/gitops-engine/pkg/utils/kube"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
)
|
||||
|
||||
func TestIsHookOfType(t *testing.T) {
|
||||
|
|
@@ -192,6 +194,92 @@ func TestIsPostDeleteHook(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
// TestPartitionTargetObjsForSync covers partitionTargetObjsForSync in state.go.
|
||||
func TestPartitionTargetObjsForSync(t *testing.T) {
|
||||
newObj := func(name string, annot map[string]string) *unstructured.Unstructured {
|
||||
u := &unstructured.Unstructured{}
|
||||
u.SetName(name)
|
||||
u.SetAnnotations(annot)
|
||||
return u
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
in []*unstructured.Unstructured
|
||||
wantNames []string
|
||||
wantPreDelete bool
|
||||
wantPostDelete bool
|
||||
}{
|
||||
{
|
||||
name: "PostSync with PreDelete and PostDelete in same annotation stays in sync set",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("combined", map[string]string{"argocd.argoproj.io/hook": "PostSync,PreDelete,PostDelete"}),
|
||||
},
|
||||
wantNames: []string{"combined"},
|
||||
wantPreDelete: true,
|
||||
wantPostDelete: true,
|
||||
},
|
||||
{
|
||||
name: "PreDelete-only manifest excluded from sync",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("pre-del", map[string]string{"argocd.argoproj.io/hook": "PreDelete"}),
|
||||
},
|
||||
wantNames: nil,
|
||||
wantPreDelete: true,
|
||||
wantPostDelete: false,
|
||||
},
|
||||
{
|
||||
name: "PostDelete-only manifest excluded from sync",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("post-del", map[string]string{"argocd.argoproj.io/hook": "PostDelete"}),
|
||||
},
|
||||
wantNames: nil,
|
||||
wantPreDelete: false,
|
||||
wantPostDelete: true,
|
||||
},
|
||||
{
|
||||
name: "Helm pre-delete only excluded from sync",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("helm-pre-del", map[string]string{"helm.sh/hook": "pre-delete"}),
|
||||
},
|
||||
wantNames: nil,
|
||||
wantPreDelete: true,
|
||||
wantPostDelete: false,
|
||||
},
|
||||
{
|
||||
name: "Helm pre-install with pre-delete stays in sync (sync-phase hook wins)",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("helm-mixed", map[string]string{"helm.sh/hook": "pre-install,pre-delete"}),
|
||||
},
|
||||
wantNames: []string{"helm-mixed"},
|
||||
wantPreDelete: true,
|
||||
wantPostDelete: false,
|
||||
},
|
||||
{
|
||||
name: "Non-hook resource unchanged",
|
||||
in: []*unstructured.Unstructured{
|
||||
newObj("pod", map[string]string{"app": "x"}),
|
||||
},
|
||||
wantNames: []string{"pod"},
|
||||
wantPreDelete: false,
|
||||
wantPostDelete: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
got, hasPre, hasPost := partitionTargetObjsForSync(tt.in)
|
||||
var names []string
|
||||
for _, o := range got {
|
||||
names = append(names, o.GetName())
|
||||
}
|
||||
assert.Equal(t, tt.wantNames, names)
|
||||
assert.Equal(t, tt.wantPreDelete, hasPre, "hasPreDeleteHooks")
|
||||
assert.Equal(t, tt.wantPostDelete, hasPost, "hasPostDeleteHooks")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestMultiHookOfType(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
|
|
@@ -226,3 +314,174 @@ func TestMultiHookOfType(t *testing.T) {
|
|||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestExecuteHooksAlreadyExistsLogic(t *testing.T) {
|
||||
newObj := func(name string, annot map[string]string) *unstructured.Unstructured {
|
||||
obj := &unstructured.Unstructured{}
|
||||
obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "batch", Version: "v1", Kind: "Job"})
|
||||
obj.SetName(name)
|
||||
obj.SetNamespace("default")
|
||||
obj.SetAnnotations(annot)
|
||||
return obj
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
hookType []HookType
|
||||
targetAnnot map[string]string
|
||||
liveAnnot map[string]string // nil -> object doesn't exist in cluster
|
||||
expectCreated bool
|
||||
}{
|
||||
// PRE DELETE TESTS
|
||||
{
|
||||
name: "PreDelete (argocd): Not in cluster - should be created",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
liveAnnot: nil,
|
||||
expectCreated: true,
|
||||
},
|
||||
{
|
||||
name: "PreDelete (helm): Not in cluster - should be created",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
liveAnnot: nil,
|
||||
expectCreated: true,
|
||||
},
|
||||
{
|
||||
name: "PreDelete (argocd): Already exists - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PreDelete (helm): Already exists - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PreDelete (helm+argocd): Helm hook already exists - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "pre-delete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PreDelete (helm+argocd): Argo CD hook already exists - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "pre-delete", "argocd.argoproj.io/hook": "PreDelete"},
|
||||
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
// POST DELETE TESTS
|
||||
{
|
||||
name: "PostDelete (argocd): Not in cluster - should be created",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
liveAnnot: nil,
|
||||
expectCreated: true,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (helm): Not in cluster - should be created",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
liveAnnot: nil,
|
||||
expectCreated: true,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (argocd): Already exists - should be skipped",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (helm): Already exists - should be skipped",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (helm+argocd): Already exists - should be skipped",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (helm+argocd): One of two already exists - should be skipped",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "post-delete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "PostDelete (helm+argocd): One of two already exists - should be skipped",
|
||||
hookType: []HookType{PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete", "argocd.argoproj.io/hook": "PostDelete"},
|
||||
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PostDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
// MULTI HOOK TESTS - SKIP LOGIC
|
||||
{
|
||||
name: "Multi-hook (argocd): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
|
||||
liveAnnot: map[string]string{"argocd.argoproj.io/hook": "PreDelete,PostDelete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
{
|
||||
name: "Multi-hook (helm): Target is (Pre,Post), Cluster has (Pre,Post) - should be skipped",
|
||||
hookType: []HookType{PreDeleteHookType, PostDeleteHookType},
|
||||
targetAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
|
||||
liveAnnot: map[string]string{"helm.sh/hook": "post-delete,pre-delete"},
|
||||
expectCreated: false,
|
||||
},
|
||||
}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			targetObj := newObj("my-hook", tt.targetAnnot)
			targetKey := kube.GetResourceKey(targetObj)

			liveObjs := make(map[kube.ResourceKey]*unstructured.Unstructured)
			if tt.liveAnnot != nil {
				liveObjs[targetKey] = newObj("my-hook", tt.liveAnnot)
			}

			runningHooks := map[kube.ResourceKey]*unstructured.Unstructured{}
			for key, obj := range liveObjs {
				for _, hookType := range tt.hookType {
					if isHookOfType(obj, hookType) {
						runningHooks[key] = obj
					}
				}
			}

			expectedHooksToCreate := map[kube.ResourceKey]*unstructured.Unstructured{}
			targets := []*unstructured.Unstructured{targetObj}

			for _, obj := range targets {
				// Only consider objects that are hooks of at least one of the requested types.
				isRequestedHook := false
				for _, hookType := range tt.hookType {
					if isHookOfType(obj, hookType) {
						isRequestedHook = true
						break
					}
				}
				if !isRequestedHook {
					continue
				}

				objKey := kube.GetResourceKey(obj)
				if _, alreadyExists := runningHooks[objKey]; !alreadyExists {
					expectedHooksToCreate[objKey] = obj
				}
			}

			if tt.expectCreated {
				assert.NotEmpty(t, expectedHooksToCreate, "Expected hook to be marked for creation")
			} else {
				assert.Empty(t, expectedHooksToCreate, "Expected hook to be skipped (already exists)")
			}
		})
	}
}
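The table above drives a create-or-skip decision: a delete-phase hook is created only when no live object with the same resource key is already running as one of the requested hook types. A minimal, self-contained sketch of that skip rule, using hypothetical simplified types rather than the real Argo CD ones:

```go
package main

import "fmt"

// resourceKey is a stand-in for kube.ResourceKey.
type resourceKey string

// hooksToCreate returns the target hooks that are not already running in the
// cluster; hooks whose key is present in `running` are skipped, mirroring the
// "Already exists - should be skipped" cases in the table.
func hooksToCreate(targets []resourceKey, running map[resourceKey]bool) []resourceKey {
	var toCreate []resourceKey
	for _, key := range targets {
		if running[key] {
			// Already exists in the cluster - skip re-creation.
			continue
		}
		toCreate = append(toCreate, key)
	}
	return toCreate
}

func main() {
	running := map[resourceKey]bool{"my-hook": true}
	fmt.Println(hooksToCreate([]resourceKey{"my-hook", "other-hook"}, running))
}
```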
@@ -60,8 +60,8 @@ type Dependencies interface {
 	// trigger a refresh after the application has been hydrated and a new commit has been pushed.
 	RequestAppRefresh(appName string, appNamespace string) error

-	// PersistAppHydratorStatus persists the application status for the source hydrator.
-	PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)
+	// PersistHydrationStatus persists the application status for the source hydrator.
+	PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus)

 	// AddHydrationQueueItem adds a hydration queue item to the queue. This is used to trigger the hydration process for
 	// a group of applications which are hydrating to the same repo and target branch.
@@ -123,9 +123,10 @@ func (h *Hydrator) ProcessAppHydrateQueueItem(origApp *appv1.Application) {
 			Phase:          appv1.HydrateOperationPhaseHydrating,
 			SourceHydrator: *app.Spec.SourceHydrator,
 		}
-		h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
 	}

+	h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
+
 	needsRefresh := app.Status.SourceHydrator.CurrentOperation.Phase == appv1.HydrateOperationPhaseHydrating && metav1.Now().Sub(app.Status.SourceHydrator.CurrentOperation.StartedAt.Time) > h.statusRefreshTimeout
 	if needsHydration || needsRefresh {
 		logCtx.WithField("reason", reason).Info("Hydrating app")
@@ -252,7 +253,7 @@ func (h *Hydrator) ProcessHydrationQueueItem(hydrationKey types.HydrationQueueKe
 		HydratedSHA:    hydratedSHA,
 		SourceHydrator: app.Status.SourceHydrator.CurrentOperation.SourceHydrator,
 	}
-	h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
+	h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)

 	// Request a refresh since we pushed a new commit.
 	err := h.dependencies.RequestAppRefresh(app.Name, app.Namespace)
@@ -274,7 +275,7 @@ func (h *Hydrator) setAppHydratorError(app *appv1.Application, err error) {
 	failedAt := metav1.Now()
 	app.Status.SourceHydrator.CurrentOperation.FinishedAt = &failedAt
 	app.Status.SourceHydrator.CurrentOperation.Message = fmt.Sprintf("Failed to hydrate: %v", err.Error())
-	h.dependencies.PersistAppHydratorStatus(origApp, &app.Status.SourceHydrator)
+	h.dependencies.PersistHydrationStatus(origApp, &app.Status.SourceHydrator)
 }

 // getAppsForHydrationKey returns the applications matching the hydration key.
@@ -476,17 +477,9 @@ func (h *Hydrator) hydrate(logCtx *log.Entry, apps []*appv1.Application, project
 //
 // If the given target revision is empty, it uses the target revision from the app dry source spec.
 func (h *Hydrator) getManifests(ctx context.Context, app *appv1.Application, targetRevision string, project *appv1.AppProject) (revision string, pathDetails *commitclient.PathDetails, err error) {
-	drySource := appv1.ApplicationSource{
-		RepoURL:        app.Spec.SourceHydrator.DrySource.RepoURL,
-		Path:           app.Spec.SourceHydrator.DrySource.Path,
-		TargetRevision: app.Spec.SourceHydrator.DrySource.TargetRevision,
-		Helm:           app.Spec.SourceHydrator.DrySource.Helm,
-		Kustomize:      app.Spec.SourceHydrator.DrySource.Kustomize,
-		Directory:      app.Spec.SourceHydrator.DrySource.Directory,
-		Plugin:         app.Spec.SourceHydrator.DrySource.Plugin,
-	}
+	drySource := app.Spec.SourceHydrator.GetDrySource()
 	if targetRevision == "" {
-		targetRevision = app.Spec.SourceHydrator.DrySource.TargetRevision
+		targetRevision = drySource.TargetRevision
 	}

 	// TODO: enable signature verification
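The hunk above replaces an inline field-by-field copy with a `GetDrySource()` accessor. A minimal sketch of what such an accessor looks like, using hypothetical simplified stand-ins for the real Argo CD types; the point is that one method owns the DrySource-to-ApplicationSource mapping so callers no longer duplicate it:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real Argo CD types.
type DrySource struct {
	RepoURL        string
	Path           string
	TargetRevision string
}

type ApplicationSource struct {
	RepoURL        string
	Path           string
	TargetRevision string
}

type SourceHydrator struct {
	DrySource DrySource
}

// GetDrySource builds the generic ApplicationSource from the hydrator's
// dry source in one place, replacing the struct literal the diff removes.
func (h SourceHydrator) GetDrySource() ApplicationSource {
	return ApplicationSource{
		RepoURL:        h.DrySource.RepoURL,
		Path:           h.DrySource.Path,
		TargetRevision: h.DrySource.TargetRevision,
	}
}

func main() {
	h := SourceHydrator{DrySource: DrySource{RepoURL: "https://example.com/repo.git", Path: "apps", TargetRevision: "main"}}
	src := h.GetDrySource()
	fmt.Println(src.RepoURL, src.TargetRevision)
}
```

The caller-side fallback in the patched `getManifests` then reads from the returned value (`drySource.TargetRevision`) instead of reaching back into the nested spec.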
@@ -394,7 +394,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {
 	app.Status.SourceHydrator.CurrentOperation = nil

 	var persistedStatus *v1alpha1.SourceHydratorStatus
-	d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+	d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 		persistedStatus = newStatus
 	}).Return().Once()
 	d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
@@ -406,7 +406,7 @@ func TestProcessAppHydrateQueueItem_HydrationNeeded(t *testing.T) {

 	h.ProcessAppHydrateQueueItem(app)

-	d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
+	d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
 	d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)

 	require.NotNil(t, persistedStatus)
@@ -433,6 +433,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
 		},
 	}
 	d.EXPECT().AddHydrationQueueItem(mock.Anything).Return().Once()
+	d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()

 	h := &Hydrator{
 		dependencies: d,
@@ -442,7 +443,7 @@ func TestProcessAppHydrateQueueItem_HydrationPassedTimeout(t *testing.T) {
 	h.ProcessAppHydrateQueueItem(app)

 	d.AssertCalled(t, "AddHydrationQueueItem", mock.Anything)
-	d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
+	d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
 }

 func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
@@ -458,7 +459,7 @@ func TestProcessAppHydrateQueueItem_NoSourceHydrator(t *testing.T) {
 	h.ProcessAppHydrateQueueItem(app)

 	// Should not call anything
-	d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
+	d.AssertNotCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
 	d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
 }

@@ -476,14 +477,15 @@ func TestProcessAppHydrateQueueItem_HydrationNotNeeded(t *testing.T) {
 		},
 	}

+	d.EXPECT().PersistHydrationStatus(app, &app.Status.SourceHydrator).Return().Once()
+
 	h := &Hydrator{
 		dependencies:         d,
 		statusRefreshTimeout: time.Minute,
 	}
 	h.ProcessAppHydrateQueueItem(app)

 	// Should not call anything
-	d.AssertNotCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
+	d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
 	d.AssertNotCalled(t, "AddHydrationQueueItem", mock.Anything)
 }

@@ -504,7 +506,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
 	// Expect setAppHydratorError to be called
 	var persistedStatus1 *v1alpha1.SourceHydratorStatus
 	var persistedStatus2 *v1alpha1.SourceHydratorStatus
-	d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+	d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 		switch orig.Name {
 		case app1.Name:
 			persistedStatus1 = newStatus
@@ -524,7 +526,7 @@ func TestProcessHydrationQueueItem_ValidationFails(t *testing.T) {
 	assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
 	assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)

-	d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
+	d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
 	d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
 }

@@ -548,7 +550,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
 	// Expect setAppHydratorError to be called
 	var persistedStatus1 *v1alpha1.SourceHydratorStatus
 	var persistedStatus2 *v1alpha1.SourceHydratorStatus
-	d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+	d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 		switch orig.Name {
 		case app1.Name:
 			persistedStatus1 = newStatus
@@ -568,7 +570,7 @@ func TestProcessHydrationQueueItem_HydrateFails_AppSpecificError(t *testing.T) {
 	assert.Contains(t, persistedStatus2.CurrentOperation.Message, "cannot hydrate because application default/test-app has an error")
 	assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)

-	d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
+	d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
 	d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
 }

@@ -593,7 +595,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
 	// Expect setAppHydratorError to be called
 	var persistedStatus1 *v1alpha1.SourceHydratorStatus
 	var persistedStatus2 *v1alpha1.SourceHydratorStatus
-	d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+	d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 		switch orig.Name {
 		case app1.Name:
 			persistedStatus1 = newStatus
@@ -615,7 +617,7 @@ func TestProcessHydrationQueueItem_HydrateFails_CommonError(t *testing.T) {
 	assert.Equal(t, v1alpha1.HydrateOperationPhaseFailed, persistedStatus1.CurrentOperation.Phase)
 	assert.Equal(t, "abc123", persistedStatus1.CurrentOperation.DrySHA)

-	d.AssertNumberOfCalls(t, "PersistAppHydratorStatus", 2)
+	d.AssertNumberOfCalls(t, "PersistHydrationStatus", 2)
 	d.AssertNotCalled(t, "RequestAppRefresh", mock.Anything, mock.Anything)
 }

@@ -633,7 +635,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {

 	// Expect setAppHydratorError to be called
 	var persistedStatus *v1alpha1.SourceHydratorStatus
-	d.EXPECT().PersistAppHydratorStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+	d.EXPECT().PersistHydrationStatus(mock.Anything, mock.Anything).Run(func(_ *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 		persistedStatus = newStatus
 	}).Return().Once()
 	d.EXPECT().RequestAppRefresh(app.Name, app.Namespace).Return(nil).Once()
@@ -650,7 +652,7 @@ func TestProcessHydrationQueueItem_SuccessfulHydration(t *testing.T) {

 	h.ProcessHydrationQueueItem(hydrationKey)

-	d.AssertCalled(t, "PersistAppHydratorStatus", mock.Anything, mock.Anything)
+	d.AssertCalled(t, "PersistHydrationStatus", mock.Anything, mock.Anything)
 	d.AssertCalled(t, "RequestAppRefresh", app.Name, app.Namespace)
 	assert.NotNil(t, persistedStatus)
 	assert.Equal(t, app.Status.SourceHydrator.CurrentOperation.StartedAt, persistedStatus.CurrentOperation.StartedAt)
controller/hydrator/mocks/Dependencies.go (generated, 20 changed lines)
@@ -525,25 +525,25 @@ func (_c *Dependencies_GetWriteCredentials_Call) RunAndReturn(run func(ctx conte
 	return _c
 }

-// PersistAppHydratorStatus provides a mock function for the type Dependencies
-func (_mock *Dependencies) PersistAppHydratorStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
+// PersistHydrationStatus provides a mock function for the type Dependencies
+func (_mock *Dependencies) PersistHydrationStatus(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus) {
 	_mock.Called(orig, newStatus)
 	return
 }

-// Dependencies_PersistAppHydratorStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistAppHydratorStatus'
-type Dependencies_PersistAppHydratorStatus_Call struct {
+// Dependencies_PersistHydrationStatus_Call is a *mock.Call that shadows Run/Return methods with type explicit version for method 'PersistHydrationStatus'
+type Dependencies_PersistHydrationStatus_Call struct {
 	*mock.Call
 }

-// PersistAppHydratorStatus is a helper method to define mock.On call
+// PersistHydrationStatus is a helper method to define mock.On call
 //   - orig *v1alpha1.Application
 //   - newStatus *v1alpha1.SourceHydratorStatus
-func (_e *Dependencies_Expecter) PersistAppHydratorStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistAppHydratorStatus_Call {
-	return &Dependencies_PersistAppHydratorStatus_Call{Call: _e.mock.On("PersistAppHydratorStatus", orig, newStatus)}
+func (_e *Dependencies_Expecter) PersistHydrationStatus(orig interface{}, newStatus interface{}) *Dependencies_PersistHydrationStatus_Call {
+	return &Dependencies_PersistHydrationStatus_Call{Call: _e.mock.On("PersistHydrationStatus", orig, newStatus)}
 }

-func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
+func (_c *Dependencies_PersistHydrationStatus_Call) Run(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
 	_c.Call.Run(func(args mock.Arguments) {
 		var arg0 *v1alpha1.Application
 		if args[0] != nil {
@@ -561,12 +561,12 @@ func (_c *Dependencies_PersistAppHydratorStatus_Call) Run(run func(orig *v1alpha
 	return _c
 }

-func (_c *Dependencies_PersistAppHydratorStatus_Call) Return() *Dependencies_PersistAppHydratorStatus_Call {
+func (_c *Dependencies_PersistHydrationStatus_Call) Return() *Dependencies_PersistHydrationStatus_Call {
 	_c.Call.Return()
 	return _c
 }

-func (_c *Dependencies_PersistAppHydratorStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistAppHydratorStatus_Call {
+func (_c *Dependencies_PersistHydrationStatus_Call) RunAndReturn(run func(orig *v1alpha1.Application, newStatus *v1alpha1.SourceHydratorStatus)) *Dependencies_PersistHydrationStatus_Call {
 	_c.Run(run)
 	return _c
 }
@@ -3,6 +3,7 @@ package controller
 import (
 	"context"
 	"fmt"
+	"maps"

 	"github.com/argoproj/argo-cd/v3/controller/hydrator/types"
 	appv1 "github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"

@@ -88,10 +89,13 @@ func (ctrl *ApplicationController) RequestAppRefresh(appName string, appNamespac
 	return nil
 }

-func (ctrl *ApplicationController) PersistAppHydratorStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
+func (ctrl *ApplicationController) PersistHydrationStatus(orig *appv1.Application, newStatus *appv1.SourceHydratorStatus) {
+	newAnnotations := make(map[string]string)
+	maps.Copy(newAnnotations, orig.GetAnnotations())
+	delete(newAnnotations, appv1.AnnotationKeyHydrate)
 	status := orig.Status.DeepCopy()
 	status.SourceHydrator = *newStatus
-	ctrl.persistAppStatus(orig, status)
+	ctrl.persistAppStatus(orig, status, newAnnotations)
 }

 func (ctrl *ApplicationController) AddHydrationQueueItem(key types.HydrationQueueKey) {
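The new `PersistHydrationStatus` copies the annotation map before dropping the hydrate-request key, so the original object is never mutated. The copy-then-delete pattern with the standard `maps` package can be sketched like this (the annotation key below is a stand-in, not the real `AnnotationKeyHydrate` value):

```go
package main

import (
	"fmt"
	"maps"
)

// scrubAnnotations returns a copy of orig with the given key removed;
// the input map is left untouched, mirroring the controller's behavior.
func scrubAnnotations(orig map[string]string, key string) map[string]string {
	out := make(map[string]string)
	maps.Copy(out, orig) // shallow copy of all entries
	delete(out, key)     // drop the one-shot request annotation
	return out
}

func main() {
	orig := map[string]string{"example.io/hydrate": "normal", "team": "a"}
	fmt.Println(scrubAnnotations(orig, "example.io/hydrate"))
	fmt.Println(orig) // the original map still contains the key
}
```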
@@ -222,7 +222,10 @@ func createConsistentHashingWithBoundLoads(replicas int, getCluster clusterAcces
 		}
 		shardIndexedByCluster[c.ID], err = strconv.Atoi(clusterIndex)
 		if err != nil {
-			log.Errorf("Consistent Hashing was supposed to return a shard index but it returned %d", err)
+			log.Errorf("Failed to get shard index from consistent hashing, error=%v", err)
+			// No continue here: strconv.Atoi returns 0 on failure, so the cluster falls back to shard 0.
+			// This is intentional since shard 0 always exists (replicas > 0 is enforced by the caller),
+			// so the cluster remains reconciled rather than being silently dropped.
 		}
 		numApps, ok := appDistribution[c.Server]
 		if !ok {
@@ -41,18 +41,13 @@ import (
 	"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
 	appstatecache "github.com/argoproj/argo-cd/v3/util/cache/appstate"
 	"github.com/argoproj/argo-cd/v3/util/db"
-	"github.com/argoproj/argo-cd/v3/util/env"
 	"github.com/argoproj/argo-cd/v3/util/gpg"
 	utilio "github.com/argoproj/argo-cd/v3/util/io"
 	"github.com/argoproj/argo-cd/v3/util/settings"
 	"github.com/argoproj/argo-cd/v3/util/stats"
 )

-var (
-	ErrCompareStateRepo = errors.New("failed to get repo objects")
-
-	processManifestGeneratePathsEnabled = env.ParseBoolFromEnv("ARGOCD_APPLICATIONSET_CONTROLLER_PROCESS_MANIFEST_GENERATE_PATHS", true)
-)
+var ErrCompareStateRepo = errors.New("failed to get repo objects")

 type resourceInfoProviderStub struct{}

@@ -75,7 +70,7 @@ type managedResource struct {

 // AppStateManager defines methods which allow to compare application spec and actual application state.
 type AppStateManager interface {
-	CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
+	CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localObjects []string, hasMultipleSources bool) (*comparisonResult, error)
 	SyncAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, state *v1alpha1.OperationState)
 	GetRepoObjs(ctx context.Context, app *v1alpha1.Application, sources []v1alpha1.ApplicationSource, appLabelKey string, revisions []string, noCache, noRevisionCache, verifySignature bool, proj *v1alpha1.AppProject, sendRuntimeState bool) ([]*unstructured.Unstructured, []*apiclient.ManifestResponse, bool, error)
 }
@@ -247,63 +242,20 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
 			return nil, nil, false, fmt.Errorf("failed to get repo %q: %w", source.RepoURL, err)
 		}

-		syncedRevision := app.Status.Sync.Revision
-		if app.Spec.HasMultipleSources() {
-			if i < len(app.Status.Sync.Revisions) {
-				syncedRevision = app.Status.Sync.Revisions[i]
-			} else {
-				syncedRevision = ""
-			}
-		}
-
 		revision := revisions[i]

 		appNamespace := app.Spec.Destination.Namespace
 		apiVersions := argo.APIResourcesToStrings(apiResources, true)

-		updateRevisions := processManifestGeneratePathsEnabled &&
-			// updating revisions result is not required if automated sync is not enabled
-			app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.Automated != nil &&
-			// using updating revisions gains performance only if manifest generation is required.
-			// just reading pre-generated manifests is comparable to updating revisions time-wise
-			app.Status.SourceType != v1alpha1.ApplicationSourceTypeDirectory
-
-		if updateRevisions && repo.Depth == 0 && syncedRevision != "" && !source.IsRef() && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" && (syncedRevision != revision || app.Spec.HasMultipleSources()) {
-			// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
-			updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
-				Repo:               repo,
-				Revision:           revision,
-				SyncedRevision:     syncedRevision,
-				NoRevisionCache:    noRevisionCache,
-				Paths:              path.GetSourceRefreshPaths(app, source),
-				AppLabelKey:        appLabelKey,
-				AppName:            app.InstanceName(m.namespace),
-				Namespace:          appNamespace,
-				ApplicationSource:  &source,
-				KubeVersion:        serverVersion,
-				ApiVersions:        apiVersions,
-				TrackingMethod:     trackingMethod,
-				RefSources:         refSources,
-				SyncedRefSources:   syncedRefSources,
-				HasMultipleSources: app.Spec.HasMultipleSources(),
-				InstallationID:     installationID,
-			})
-			if err != nil {
-				return nil, nil, false, fmt.Errorf("failed to compare revisions for source %d of %d: %w", i+1, len(sources), err)
-			}
-
-			if updateRevisionResult.Changes {
-				revisionsMayHaveChanges = true
-			}
-
-			// Generate manifests should use same revision as updateRevisionForPaths, because HEAD revision may be different between these two calls
-			if updateRevisionResult.Revision != "" {
-				revision = updateRevisionResult.Revision
-			}
-		} else if !source.IsRef() {
-			// revisionsMayHaveChanges is set to true if at least one revision is not possible to be updated
-			revisionsMayHaveChanges = true
-		}
+		// Evaluate if the revision has changes
+		resolvedRevision, hasChanges, err := m.evaluateRevisionChanges(ctx, repoClient, app, &source, i, repo, revision, refSources, syncedRefSources, noRevisionCache, appLabelKey, serverVersion, apiVersions, trackingMethod, installationID, keyManifestGenerateAnnotationExists, keyManifestGenerateAnnotationVal)
+		if err != nil {
+			return nil, nil, false, fmt.Errorf("failed to evaluate revision changes for source %d of %d: %w", i+1, len(sources), err)
+		}
+		if hasChanges {
+			revisionsMayHaveChanges = true
+		}
+		revision = resolvedRevision

 		repos := permittedHelmRepos
 		helmRepoCreds := permittedHelmCredentials
@@ -344,7 +296,11 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
 			InstallationID:     installationID,
 		})
 		if err != nil {
-			return nil, nil, false, fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
+			genErr := fmt.Errorf("failed to generate manifest for source %d of %d: %w", i+1, len(sources), err)
+			if app.Spec.SourceHydrator != nil && app.Spec.SourceHydrator.HydrateTo != nil && strings.Contains(err.Error(), path.ErrMessageAppPathDoesNotExist) {
+				genErr = fmt.Errorf("%w - waiting for an external process to update %s from %s", genErr, app.Spec.SourceHydrator.SyncSource.TargetBranch, app.Spec.SourceHydrator.HydrateTo.TargetBranch)
+			}
+			return nil, nil, false, genErr
 		}

 		targetObj, err := unmarshalManifests(manifestInfo.Manifests)
@@ -366,37 +322,84 @@ func (m *appStateManager) GetRepoObjs(ctx context.Context, app *v1alpha1.Applica
 	return targetObjs, manifestInfos, revisionsMayHaveChanges, nil
 }

-// ResolveGitRevision will resolve the given revision to a full commit SHA. Only works for git.
-func (m *appStateManager) ResolveGitRevision(repoURL, revision string) (string, error) {
-	conn, repoClient, err := m.repoClientset.NewRepoServerClient()
-	if err != nil {
-		return "", fmt.Errorf("failed to connect to repo server: %w", err)
-	}
-	defer utilio.Close(conn)
-
-	repo, err := m.db.GetRepository(context.Background(), repoURL, "")
-	if err != nil {
-		return "", fmt.Errorf("failed to get repo %q: %w", repoURL, err)
-	}
-
-	// Mock the app. The repo-server only needs to know whether the "chart" field is populated.
-	app := &v1alpha1.Application{
-		Spec: v1alpha1.ApplicationSpec{
-			Source: &v1alpha1.ApplicationSource{
-				RepoURL:        repoURL,
-				TargetRevision: revision,
-			},
-		},
-	}
-	resp, err := repoClient.ResolveRevision(context.Background(), &apiclient.ResolveRevisionRequest{
-		Repo:              repo,
-		App:               app,
-		AmbiguousRevision: revision,
-	})
-	if err != nil {
-		return "", fmt.Errorf("failed to determine whether the dry source has changed: %w", err)
-	}
-	return resp.Revision, nil
-}
+// evaluateRevisionChanges determines if a source revision has changes compared to the synced revision.
+// Returns the resolved revision, whether changes were detected, and any error.
+func (m *appStateManager) evaluateRevisionChanges(
+	ctx context.Context,
+	repoClient apiclient.RepoServerServiceClient,
+	app *v1alpha1.Application,
+	source *v1alpha1.ApplicationSource,
+	sourceIndex int,
+	repo *v1alpha1.Repository,
+	revision string,
+	refSources map[string]*v1alpha1.RefTarget,
+	syncedRefSources v1alpha1.RefTargetRevisionMapping,
+	noRevisionCache bool,
+	appLabelKey string,
+	serverVersion string,
+	apiVersions []string,
+	trackingMethod string,
+	installationID string,
+	keyManifestGenerateAnnotationExists bool,
+	keyManifestGenerateAnnotationVal string,
+) (string, bool, error) {
+	// For ref source specifically, we always return false since their change are evaluated as part of the source
+	// referencing them.
+	if source.IsRef() {
+		return revision, false, nil
+	}
+
+	// Determine the synced revision and source type for this specific source
+	var syncedRevision string
+	if app.Spec.HasMultipleSources() {
+		if sourceIndex < len(app.Status.Sync.Revisions) {
+			syncedRevision = app.Status.Sync.Revisions[sourceIndex]
+		}
+	} else {
+		syncedRevision = app.Status.Sync.Revision
+	}
+
+	// if revisions are the same (and we are not using reference sources), we know there is no changes
+	if syncedRevision == revision && revision != "" && len(refSources) == 0 {
+		return revision, false, nil
+	}
+
+	appNamespace := app.Spec.Destination.Namespace
+
+	if repo.Depth == 0 && syncedRevision != "" && keyManifestGenerateAnnotationExists && keyManifestGenerateAnnotationVal != "" {
+		// Validate the manifest-generate-path annotation to avoid generating manifests if it has not changed.
+		updateRevisionResult, err := repoClient.UpdateRevisionForPaths(ctx, &apiclient.UpdateRevisionForPathsRequest{
+			Repo:               repo,
+			Revision:           revision,
+			SyncedRevision:     syncedRevision,
+			NoRevisionCache:    noRevisionCache,
+			Paths:              path.GetSourceRefreshPaths(app, *source),
+			AppLabelKey:        appLabelKey,
+			AppName:            app.InstanceName(m.namespace),
+			Namespace:          appNamespace,
+			ApplicationSource:  source,
+			KubeVersion:        serverVersion,
+			ApiVersions:        apiVersions,
+			TrackingMethod:     trackingMethod,
+			RefSources:         refSources,
+			SyncedRefSources:   syncedRefSources,
+			HasMultipleSources: app.Spec.HasMultipleSources(),
+			InstallationID:     installationID,
+		})
+		if err != nil {
+			return "", false, err
+		}
+
+		// Generate manifests should use same revision as updateRevisionForPaths, because HEAD revision may be different between these two calls
+		if updateRevisionResult.Revision != "" {
+			revision = updateRevisionResult.Revision
+		}
+
+		return revision, updateRevisionResult.Changes, nil
+	}
+
+	// revisionsMayHaveChanges is set to true if at least one revision is not possible to be updated
+	return revision, true, nil
+}

 func unmarshalManifests(manifests []string) ([]*unstructured.Unstructured, error) {
@@ -543,10 +546,32 @@ func isManagedNamespace(ns *unstructured.Unstructured, app *v1alpha1.Application
 	return ns != nil && ns.GetKind() == kubeutil.NamespaceKind && ns.GetName() == app.Spec.Destination.Namespace && app.Spec.SyncPolicy != nil && app.Spec.SyncPolicy.ManagedNamespaceMetadata != nil
 }

+// partitionTargetObjsForSync returns the manifest subset passed to gitops-engine sync, and whether
+// the full manifest set declared PreDelete and/or PostDelete hooks (for finalizer handling).
+// Uses isPreDeleteHook / isPostDeleteHook / hasGitOpsEngineSyncPhaseHook from hook.go.
+func partitionTargetObjsForSync(targetObjs []*unstructured.Unstructured) (syncObjs []*unstructured.Unstructured, hasPreDeleteHooks, hasPostDeleteHooks bool) {
+	for _, obj := range targetObjs {
+		if isPreDeleteHook(obj) {
+			hasPreDeleteHooks = true
+			if !hasGitOpsEngineSyncPhaseHook(obj) {
+				continue
+			}
+		}
+		if isPostDeleteHook(obj) {
+			hasPostDeleteHooks = true
+			if !hasGitOpsEngineSyncPhaseHook(obj) {
+				continue
+			}
+		}
+		syncObjs = append(syncObjs, obj)
+	}
+	return syncObjs, hasPreDeleteHooks, hasPostDeleteHooks
+}
+
 // CompareAppState compares application git state to the live app state, using the specified
 // revision and supplied source. If revision or overrides are empty, then compares against
 // revision and overrides in the app spec.
-func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
+func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1alpha1.AppProject, revisions []string, sources []v1alpha1.ApplicationSource, noCache bool, noRevisionCache bool, localManifests []string, hasMultipleSources bool) (*comparisonResult, error) {
 	ts := stats.NewTimingStats()
 	logCtx := log.WithFields(applog.GetAppLogFields(app))

@ -770,24 +795,7 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
|
|||
}
|
||||
}
|
||||
}
|
||||
hasPreDeleteHooks := false
|
||||
hasPostDeleteHooks := false
|
||||
// Filter out PreDelete and PostDelete hooks from targetObjs since they should not be synced
|
||||
// as regular resources. They are only executed during deletion.
|
||||
var targetObjsForSync []*unstructured.Unstructured
|
||||
for _, obj := range targetObjs {
|
||||
if isPreDeleteHook(obj) {
|
||||
hasPreDeleteHooks = true
|
||||
// Skip PreDelete hooks - they are not synced, only executed during deletion
|
||||
continue
|
||||
}
|
||||
if isPostDeleteHook(obj) {
|
||||
hasPostDeleteHooks = true
|
||||
// Skip PostDelete hooks - they are not synced, only executed after deletion
|
||||
continue
|
||||
}
|
||||
targetObjsForSync = append(targetObjsForSync, obj)
|
||||
}
|
||||
targetObjsForSync, hasPreDeleteHooks, hasPostDeleteHooks := partitionTargetObjsForSync(targetObjs)
|
||||
|
||||
reconciliation := sync.Reconcile(targetObjsForSync, liveObjByKey, app.Spec.Destination.Namespace, infoProvider)
|
||||
ts.AddCheckpoint("live_ms")
|
||||
|
|
@ -842,9 +850,10 @@ func (m *appStateManager) CompareAppState(app *v1alpha1.Application, project *v1
|
|||
if err != nil {
|
||||
log.Errorf("CompareAppState error getting server side diff dry run applier: %s", err)
|
||||
conditions = append(conditions, v1alpha1.ApplicationCondition{Type: v1alpha1.ApplicationConditionUnknownError, Message: err.Error(), LastTransitionTime: &now})
|
||||
} else {
|
||||
defer cleanup()
|
||||
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
|
||||
}
|
||||
defer cleanup()
|
||||
diffConfigBuilder.WithServerSideDryRunner(diff.NewK8sServerSideDryRunner(applier))
|
||||
}
|
||||
|
||||
// enable structured merge diff if application syncs with server-side apply
|
||||
|
|
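The extracted `partitionTargetObjsForSync` helper is a filter-while-flagging pass: delete-phase hooks are excluded from the sync set unless they also declare a sync-phase role, and two booleans record that such hooks exist. A self-contained sketch of the same shape, with hook detection reduced to hypothetical boolean fields (the real code reads hook annotations via helpers in hook.go):

```go
package main

import "fmt"

// obj is an illustrative stand-in for *unstructured.Unstructured.
type obj struct {
	name          string
	preDelete     bool // stands in for isPreDeleteHook(obj)
	postDelete    bool // stands in for isPostDeleteHook(obj)
	syncPhaseHook bool // stands in for hasGitOpsEngineSyncPhaseHook(obj)
}

// partition mirrors the control flow of partitionTargetObjsForSync:
// a delete-phase hook sets its flag, and is dropped from the sync set
// unless it also participates in a gitops-engine sync phase.
func partition(objs []obj) (syncObjs []obj, hasPre, hasPost bool) {
	for _, o := range objs {
		if o.preDelete {
			hasPre = true
			if !o.syncPhaseHook {
				continue
			}
		}
		if o.postDelete {
			hasPost = true
			if !o.syncPhaseHook {
				continue
			}
		}
		syncObjs = append(syncObjs, o)
	}
	return syncObjs, hasPre, hasPost
}

func main() {
	objs := []obj{
		{name: "deploy"},                                  // plain resource: synced
		{name: "cleanup-job", preDelete: true},            // delete-only hook: filtered out
		{name: "dual-job", preDelete: true, syncPhaseHook: true}, // dual-phase hook: kept
	}
	syncObjs, hasPre, hasPost := partition(objs)
	fmt.Println(len(syncObjs), hasPre, hasPost)
}
```

The flags survive even for filtered objects, which is what lets the controller keep a finalizer around for apps that declared delete hooks without ever applying those hooks as regular resources.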
@@ -1,6 +1,7 @@
 package controller
 
 import (
 	"context"
 	"encoding/json"
+	"errors"
 	"os"
@@ -31,6 +32,7 @@ import (
 	"github.com/argoproj/argo-cd/v3/controller/testdata"
 	"github.com/argoproj/argo-cd/v3/pkg/apis/application/v1alpha1"
 	"github.com/argoproj/argo-cd/v3/reposerver/apiclient"
+	"github.com/argoproj/argo-cd/v3/reposerver/apiclient/mocks"
 	"github.com/argoproj/argo-cd/v3/test"
 )
 
@@ -2040,6 +2042,61 @@ func TestCompareAppState_CallUpdateRevisionForPaths_ForMultiSource(t *testing.T)
 	require.False(t, revisionsMayHaveChanges)
 }
 
+func Test_GetRepoObjs_HydrateToAppPathNotExist(t *testing.T) {
+	t.Parallel()
+	t.Run("with hydrateTo: appends waiting message", func(t *testing.T) {
+		t.Parallel()
+
+		app := newFakeApp()
+		app.Spec.Source = nil
+		app.Spec.SourceHydrator = &v1alpha1.SourceHydrator{
+			DrySource: v1alpha1.DrySource{
+				RepoURL:        "https://github.com/example/repo",
+				TargetRevision: "main",
+				Path:           "apps/my-app",
+			},
+			SyncSource: v1alpha1.SyncSource{
+				TargetBranch: "env/prod",
+				Path:         "env/prod/my-app",
+			},
+			HydrateTo: &v1alpha1.HydrateTo{
+				TargetBranch: "env/prod-next",
+			},
+		}
+
+		ctrl := newFakeController(t.Context(), &fakeData{manifestResponse: &apiclient.ManifestResponse{}}, errors.New("env/prod/my-app: app path does not exist"))
+		source := app.Spec.GetSource()
+
+		_, _, _, err := ctrl.appStateManager.GetRepoObjs(t.Context(), app, []v1alpha1.ApplicationSource{source}, "app", []string{""}, true, false, false, &defaultProj, false)
+		require.ErrorContains(t, err, "app path does not exist")
+		require.ErrorContains(t, err, "waiting for an external process to update env/prod from env/prod-next")
+	})
+	t.Run("without hydrateTo: no waiting message appended", func(t *testing.T) {
+		t.Parallel()
+
+		app := newFakeApp()
+		app.Spec.Source = nil
+		app.Spec.SourceHydrator = &v1alpha1.SourceHydrator{
+			DrySource: v1alpha1.DrySource{
+				RepoURL:        "https://github.com/example/repo",
+				TargetRevision: "main",
+				Path:           "apps/my-app",
+			},
+			SyncSource: v1alpha1.SyncSource{
+				TargetBranch: "env/prod",
+				Path:         "env/prod/my-app",
+			},
+		}
+
+		ctrl := newFakeController(t.Context(), &fakeData{manifestResponse: &apiclient.ManifestResponse{}}, errors.New("env/prod/my-app: app path does not exist"))
+		source := app.Spec.GetSource()
+
+		_, _, _, err := ctrl.appStateManager.GetRepoObjs(t.Context(), app, []v1alpha1.ApplicationSource{source}, "app", []string{""}, true, false, false, &defaultProj, false)
+		require.ErrorContains(t, err, "app path does not exist")
+		require.NotContains(t, err.Error(), "waiting for an external process")
+	})
+}
+
 func Test_isObjRequiresDeletionConfirmation(t *testing.T) {
 	for _, tt := range []struct {
 		name string
@@ -2108,3 +2165,190 @@
 		})
 	}
 }
+
+func Test_evaluateRevisionChanges(t *testing.T) {
+	tests := []struct {
+		name                                string
+		source                              *v1alpha1.ApplicationSource
+		sourceType                          v1alpha1.ApplicationSourceType
+		syncPolicy                          *v1alpha1.SyncPolicy
+		revision                            string
+		appSyncedRevision                   string
+		refSources                          map[string]*v1alpha1.RefTarget
+		repoDepth                           int64
+		keyManifestGenerateAnnotationExists bool
+		keyManifestGenerateAnnotationVal    string
+		updateRevisionForPathsResponse      *apiclient.UpdateRevisionForPathsResponse
+		expectedRevision                    string
+		expectedHasChanges                  bool
+		expectUpdateRevisionForPathsCalled  bool
+	}{
+		{
+			name: "Ref source returns early with no changes",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Ref:     "main",
+			},
+			sourceType:         v1alpha1.ApplicationSourceTypeHelm,
+			revision:           "abc123",
+			appSyncedRevision:  "def456",
+			expectedRevision:   "abc123",
+			expectedHasChanges: false,
+		},
+		{
+			name: "Same revision with no ref sources returns early",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Path:    "manifests",
+			},
+			sourceType:         v1alpha1.ApplicationSourceTypeKustomize,
+			revision:           "abc123",
+			appSyncedRevision:  "abc123",
+			refSources:         map[string]*v1alpha1.RefTarget{},
+			expectedRevision:   "abc123",
+			expectedHasChanges: false,
+		},
+		{
+			name: "Same revision with ref sources continues to evaluation",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Path:    "manifests",
+			},
+			sourceType:        v1alpha1.ApplicationSourceTypeKustomize,
+			revision:          "abc123",
+			appSyncedRevision: "abc123",
+			refSources: map[string]*v1alpha1.RefTarget{
+				"ref1": {Repo: v1alpha1.Repository{Repo: "https://github.com/example/ref"}},
+			},
+			repoDepth:                           0,
+			keyManifestGenerateAnnotationExists: true,
+			keyManifestGenerateAnnotationVal:    ".",
+			updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{
+				Revision: "abc123",
+				Changes:  false,
+			},
+			expectedRevision:                   "abc123",
+			expectedHasChanges:                 false,
+			expectUpdateRevisionForPathsCalled: true,
+		},
+		{
+			name: "Shallow clone skips UpdateRevisionForPaths",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Path:    "manifests",
+			},
+			sourceType: v1alpha1.ApplicationSourceTypeKustomize,
+			syncPolicy: &v1alpha1.SyncPolicy{
+				Automated: &v1alpha1.SyncPolicyAutomated{},
+			},
+			revision:                            "abc123",
+			appSyncedRevision:                   "def456",
+			repoDepth:                           1,
+			keyManifestGenerateAnnotationExists: true,
+			keyManifestGenerateAnnotationVal:    ".",
+			expectedRevision:                    "abc123",
+			expectedHasChanges:                  true,
+			expectUpdateRevisionForPathsCalled:  false,
+		},
+		{
+			name: "Missing annotation skips UpdateRevisionForPaths",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Path:    "manifests",
+			},
+			sourceType: v1alpha1.ApplicationSourceTypeKustomize,
+			syncPolicy: &v1alpha1.SyncPolicy{
+				Automated: &v1alpha1.SyncPolicyAutomated{},
+			},
+			revision:                            "abc123",
+			appSyncedRevision:                   "def456",
+			repoDepth:                           0,
+			keyManifestGenerateAnnotationExists: false,
+			keyManifestGenerateAnnotationVal:    "",
+			expectedRevision:                    "abc123",
+			expectedHasChanges:                  true,
+			expectUpdateRevisionForPathsCalled:  false,
+		},
+		{
+			name: "UpdateRevisionForPaths returns updated revision",
+			source: &v1alpha1.ApplicationSource{
+				RepoURL: "https://github.com/example/repo",
+				Path:    "manifests",
+			},
+			sourceType: v1alpha1.ApplicationSourceTypeKustomize,
+			syncPolicy: &v1alpha1.SyncPolicy{
+				Automated: &v1alpha1.SyncPolicyAutomated{},
+			},
+			revision:                            "HEAD",
+			appSyncedRevision:                   "def456",
+			repoDepth:                           0,
+			keyManifestGenerateAnnotationExists: true,
+			keyManifestGenerateAnnotationVal:    ".",
+			updateRevisionForPathsResponse: &apiclient.UpdateRevisionForPathsResponse{
+				Revision: "abc123resolved",
+				Changes:  true,
+			},
+			expectedRevision:                   "abc123resolved",
+			expectedHasChanges:                 true,
+			expectUpdateRevisionForPathsCalled: true,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			app := newFakeApp()
+			app.Spec.SyncPolicy = tt.syncPolicy
+			app.Status.Sync.Revision = tt.appSyncedRevision
+			app.Status.SourceType = tt.sourceType
+			if tt.keyManifestGenerateAnnotationExists {
+				app.Annotations = map[string]string{
+					v1alpha1.AnnotationKeyManifestGeneratePaths: tt.keyManifestGenerateAnnotationVal,
+				}
+			}
+
+			repo := &v1alpha1.Repository{
+				Repo:  tt.source.RepoURL,
+				Depth: tt.repoDepth,
+			}
+
+			mockRepoClient := &mocks.RepoServerServiceClient{}
+			if tt.expectUpdateRevisionForPathsCalled {
+				mockRepoClient.On("UpdateRevisionForPaths", mock.Anything, mock.Anything).Return(tt.updateRevisionForPathsResponse, nil)
+			}
+
+			mgr := &appStateManager{
+				namespace: "test-namespace",
+			}
+
+			resolvedRevision, hasChanges, err := mgr.evaluateRevisionChanges(
+				context.Background(),
+				mockRepoClient,
+				app,
+				tt.source,
+				0, // sourceIndex
+				repo,
+				tt.revision,
+				tt.refSources,
+				nil,
+				false,
+				"app.kubernetes.io/instance",
+				"v1.28.0",
+				[]string{"v1"},
+				"label",
+				"test-installation",
+				tt.keyManifestGenerateAnnotationExists,
+				tt.keyManifestGenerateAnnotationVal,
+			)
+
+			require.NoError(t, err)
+			assert.Equal(t, tt.expectedRevision, resolvedRevision)
+			assert.Equal(t, tt.expectedHasChanges, hasChanges)
+
+			if tt.expectUpdateRevisionForPathsCalled {
+				mockRepoClient.AssertExpectations(t)
+			} else {
+				mockRepoClient.AssertNotCalled(t, "UpdateRevisionForPaths")
+			}
+		})
+	}
+}
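The mock wiring in `Test_evaluateRevisionChanges` pairs each table case with an expectation flag: the stub is only armed when the case should reach the repo server, and the assertion afterwards checks whether it was hit. A minimal sketch of that pattern with a hand-rolled fake instead of testify mocks (all names here are illustrative, not from the Argo CD codebase):

```go
package main

import "fmt"

// fakeRepoClient records whether its one method was invoked.
type fakeRepoClient struct{ called bool }

func (f *fakeRepoClient) UpdateRevisionForPaths() (string, bool) {
	f.called = true
	return "abc123resolved", true
}

// evaluate sketches the branch under test: shallow clones skip the
// repo-server call and conservatively report changes.
func evaluate(client *fakeRepoClient, shallow bool, revision string) (string, bool) {
	if shallow {
		return revision, true
	}
	return client.UpdateRevisionForPaths()
}

func main() {
	cases := []struct {
		name       string
		shallow    bool
		wantCalled bool
	}{
		{"shallow clone skips repo call", true, false},
		{"full clone resolves via repo", false, true},
	}
	for _, tc := range cases {
		client := &fakeRepoClient{}
		rev, changes := evaluate(client, tc.shallow, "HEAD")
		fmt.Println(tc.name, rev, changes, client.called == tc.wantCalled)
	}
}
```

Asserting the negative case (`AssertNotCalled` in the real test) matters as much as the positive one: it proves the shallow-clone guard actually short-circuits instead of silently calling the repo server.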
@@ -6,7 +6,6 @@ import (
 	"fmt"
 	"os"
 	"strconv"
-	"strings"
 	"time"
 
 	"k8s.io/apimachinery/pkg/util/strategicpatch"
@@ -33,20 +32,16 @@ import (
 	applog "github.com/argoproj/argo-cd/v3/util/app/log"
 	"github.com/argoproj/argo-cd/v3/util/argo"
 	"github.com/argoproj/argo-cd/v3/util/argo/diff"
-	"github.com/argoproj/argo-cd/v3/util/glob"
 	kubeutil "github.com/argoproj/argo-cd/v3/util/kube"
 	logutils "github.com/argoproj/argo-cd/v3/util/log"
 	"github.com/argoproj/argo-cd/v3/util/lua"
+	"github.com/argoproj/argo-cd/v3/util/settings"
 )
 
 const (
 	// EnvVarSyncWaveDelay is an environment variable which controls the delay in seconds between
 	// each sync-wave
 	EnvVarSyncWaveDelay = "ARGOCD_SYNC_WAVE_DELAY"
-
-	// serviceAccountDisallowedCharSet contains the characters that are not allowed to be present
-	// in a DefaultServiceAccount configured for a DestinationServiceAccount
-	serviceAccountDisallowedCharSet = "!*[]{}\\/"
 )
 
 func (m *appStateManager) getOpenAPISchema(server *v1alpha1.Cluster) (openapi.Resources, error) {
@@ -288,7 +283,7 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
 		return
 	}
 	if impersonationEnabled {
-		serviceAccountToImpersonate, err := deriveServiceAccountToImpersonate(project, app, destCluster)
+		serviceAccountToImpersonate, err := settings.DeriveServiceAccountToImpersonate(project, app, destCluster)
 		if err != nil {
 			state.Phase = common.OperationError
 			state.Message = fmt.Sprintf("failed to find a matching service account to impersonate: %v", err)
@@ -308,22 +303,9 @@ func (m *appStateManager) SyncAppState(app *v1alpha1.Application, project *v1alp
 		sync.WithLogr(logutils.NewLogrusLogger(logEntry)),
 		sync.WithHealthOverride(lua.ResourceHealthOverrides(resourceOverrides)),
 		sync.WithPermissionValidator(func(un *unstructured.Unstructured, res *metav1.APIResource) error {
-			if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
-				return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
-			}
-			if res.Namespaced {
-				permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), func(project string) ([]*v1alpha1.Cluster, error) {
-					return m.db.GetProjectClusters(context.TODO(), project)
-				})
-				if err != nil {
-					return err
-				}
-
-				if !permitted {
-					return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
-				}
-			}
-			return nil
+			return validateSyncPermissions(project, destCluster, func(proj string) ([]*v1alpha1.Cluster, error) {
+				return m.db.GetProjectClusters(context.TODO(), proj)
+			}, un, res)
 		}),
 		sync.WithOperationSettings(syncOp.DryRun, syncOp.Prune, syncOp.SyncStrategy.Force(), syncOp.IsApplyStrategy() || len(syncOp.Resources) > 0),
 		sync.WithInitialState(state.Phase, state.Message, initialResourcesRes, state.StartedAt),
@@ -560,10 +542,15 @@ func delayBetweenSyncWaves(_ common.SyncPhase, _ int, finalWave bool) error {
 func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject) (bool, error) {
 	window := proj.Spec.SyncWindows.Matches(app)
 	isManual := false
+	var operationStartTime *time.Time
 	if app.Status.OperationState != nil {
 		isManual = !app.Status.OperationState.Operation.InitiatedBy.Automated
+		if !app.Status.OperationState.StartedAt.IsZero() {
+			t := app.Status.OperationState.StartedAt.Time
+			operationStartTime = &t
+		}
 	}
-	canSync, err := window.CanSync(isManual)
+	canSync, err := window.CanSync(isManual, operationStartTime)
 	if err != nil {
 		// prevents sync because sync window has an error
 		return true, err
@@ -571,37 +558,32 @@ func syncWindowPreventsSync(app *v1alpha1.Application, proj *v1alpha1.AppProject
 	return !canSync, nil
 }
 
-// deriveServiceAccountToImpersonate determines the service account to be used for impersonation for the sync operation.
-// The returned service account will be fully qualified including namespace and the service account name in the format system:serviceaccount:<namespace>:<service_account>
-func deriveServiceAccountToImpersonate(project *v1alpha1.AppProject, application *v1alpha1.Application, destCluster *v1alpha1.Cluster) (string, error) {
-	// spec.Destination.Namespace is optional. If not specified, use the Application's
-	// namespace
-	serviceAccountNamespace := application.Spec.Destination.Namespace
-	if serviceAccountNamespace == "" {
-		serviceAccountNamespace = application.Namespace
+// validateSyncPermissions checks whether the given resource is permitted by the project's
+// allow/deny lists and destination rules. It returns an error if the API resource info is nil
+// (preventing a nil-pointer panic), if the resource's group/kind is not permitted, or if
+// the resource's namespace is not an allowed destination.
+func validateSyncPermissions(
+	project *v1alpha1.AppProject,
+	destCluster *v1alpha1.Cluster,
+	getProjectClusters func(string) ([]*v1alpha1.Cluster, error),
+	un *unstructured.Unstructured,
+	res *metav1.APIResource,
+) error {
+	if res == nil {
+		return fmt.Errorf("failed to get API resource info for %s/%s: unable to verify permissions", un.GroupVersionKind().Group, un.GroupVersionKind().Kind)
 	}
-	// Loop through the destinationServiceAccounts and see if there is any destination that is a candidate.
-	// if so, return the service account specified for that destination.
-	for _, item := range project.Spec.DestinationServiceAccounts {
-		dstServerMatched, err := glob.MatchWithError(item.Server, destCluster.Server)
+	if !project.IsGroupKindNamePermitted(un.GroupVersionKind().GroupKind(), un.GetName(), res.Namespaced) {
+		return fmt.Errorf("resource %s:%s is not permitted in project %s", un.GroupVersionKind().Group, un.GroupVersionKind().Kind, project.Name)
+	}
+	if res.Namespaced {
+		permitted, err := project.IsDestinationPermitted(destCluster, un.GetNamespace(), getProjectClusters)
 		if err != nil {
-			return "", fmt.Errorf("invalid glob pattern for destination server: %w", err)
+			return err
 		}
-		dstNamespaceMatched, err := glob.MatchWithError(item.Namespace, application.Spec.Destination.Namespace)
-		if err != nil {
-			return "", fmt.Errorf("invalid glob pattern for destination namespace: %w", err)
-		}
-		if dstServerMatched && dstNamespaceMatched {
-			if strings.Trim(item.DefaultServiceAccount, " ") == "" || strings.ContainsAny(item.DefaultServiceAccount, serviceAccountDisallowedCharSet) {
-				return "", fmt.Errorf("default service account contains invalid chars '%s'", item.DefaultServiceAccount)
-			} else if strings.Contains(item.DefaultServiceAccount, ":") {
-				// service account is specified along with its namespace.
-				return "system:serviceaccount:" + item.DefaultServiceAccount, nil
-			}
-			// service account needs to be prefixed with a namespace
-			return fmt.Sprintf("system:serviceaccount:%s:%s", serviceAccountNamespace, item.DefaultServiceAccount), nil
+
+		if !permitted {
+			return fmt.Errorf("namespace %v is not permitted in project '%s'", un.GetNamespace(), project.Name)
 		}
 	}
-	// if there is no match found in the AppProject.Spec.DestinationServiceAccounts, use the default service account of the destination namespace.
-	return "", fmt.Errorf("no matching service account found for destination server %s and namespace %s", application.Spec.Destination.Server, serviceAccountNamespace)
+	return nil
 }
|
|
|||
|
|
@ -13,6 +13,7 @@ import (
|
|||
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|
||||
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
|
||||
"k8s.io/apimachinery/pkg/runtime"
|
||||
"k8s.io/apimachinery/pkg/runtime/schema"
|
||||
|
||||
"github.com/argoproj/argo-cd/v3/common"
|
||||
"github.com/argoproj/argo-cd/v3/controller/testdata"
|
||||
|
|
@ -21,6 +22,7 @@ import (
|
|||
"github.com/argoproj/argo-cd/v3/test"
|
||||
"github.com/argoproj/argo-cd/v3/util/argo/diff"
|
||||
"github.com/argoproj/argo-cd/v3/util/argo/normalizers"
|
||||
"github.com/argoproj/argo-cd/v3/util/settings"
|
||||
)
|
||||
|
||||
func TestPersistRevisionHistory(t *testing.T) {
|
||||
|
|
@ -725,7 +727,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
assert.Equal(t, expectedSA, sa)
|
||||
|
||||
// then, there should be an error saying no valid match was found
|
||||
|
|
@ -749,7 +751,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should be no error and should use the right service account for impersonation
|
||||
require.NoError(t, err)
|
||||
|
|
@ -788,7 +790,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should be no error and should use the right service account for impersonation
|
||||
require.NoError(t, err)
|
||||
|
|
@ -827,7 +829,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should be no error and it should use the first matching service account for impersonation
|
||||
require.NoError(t, err)
|
||||
|
|
@ -861,7 +863,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and should use the first matching glob pattern service account for impersonation
|
||||
require.NoError(t, err)
|
||||
|
|
@ -896,7 +898,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should be an error saying no match was found
|
||||
require.EqualError(t, err, expectedErrMsg)
|
||||
|
|
@ -924,7 +926,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and the service account configured for with empty namespace should be used.
|
||||
require.NoError(t, err)
|
||||
|
|
@ -958,7 +960,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and the catch all service account should be returned
|
||||
require.NoError(t, err)
|
||||
|
|
@ -982,7 +984,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there must be an error as the glob pattern is invalid.
|
||||
require.ErrorContains(t, err, "invalid glob pattern for destination namespace")
|
||||
|
|
@ -1016,7 +1018,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
assert.Equal(t, expectedSA, sa)
|
||||
|
||||
// then, there should not be any error and the service account with its namespace should be returned.
|
||||
|
|
@ -1044,7 +1046,7 @@ func TestDeriveServiceAccountMatchingNamespaces(t *testing.T) {
|
|||
f.application.Spec.Destination.Name = f.cluster.Name
|
||||
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
assert.Equal(t, expectedSA, sa)
|
||||
|
||||
// then, there should not be any error and the service account with its namespace should be returned.
|
||||
|
|
@ -1127,7 +1129,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and the right service account must be returned.
|
||||
require.NoError(t, err)
|
||||
|
|
@ -1166,7 +1168,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and first matching service account should be used
|
||||
require.NoError(t, err)
|
||||
|
|
@ -1200,7 +1202,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
assert.Equal(t, expectedSA, sa)
|
||||
|
||||
// then, there should not be any error and the service account of the glob pattern, being the first match should be returned.
|
||||
|
|
@ -1235,7 +1237,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
|
||||
|
||||
// then, there an error with appropriate message must be returned
|
||||
require.EqualError(t, err, expectedErr)
|
||||
|
|
@ -1269,7 +1271,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there should not be any error and the service account of the glob pattern match must be returned.
|
||||
require.NoError(t, err)
|
||||
|
|
@ -1293,7 +1295,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
|
|||
|
||||
f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
|
||||
// when
|
||||
sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
|
||||
|
||||
// then, there must be an error as the glob pattern is invalid.
|
||||
require.ErrorContains(t, err, "invalid glob pattern for destination server")
|
||||
|
|
@@ -1327,7 +1329,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
 			f := setup(destinationServiceAccounts, destinationNamespace, destinationServerURL, applicationNamespace)
 			// when
-			sa, err := deriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})
+			sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, &v1alpha1.Cluster{Server: destinationServerURL})

 			// then, there should not be any error and the service account with the given namespace prefix must be returned.
 			require.NoError(t, err)
@@ -1355,7 +1357,7 @@ func TestDeriveServiceAccountMatchingServers(t *testing.T) {
 			f.application.Spec.Destination.Name = f.cluster.Name

 			// when
-			sa, err := deriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
+			sa, err := settings.DeriveServiceAccountToImpersonate(f.project, f.application, f.cluster)
 			assert.Equal(t, expectedSA, sa)

 			// then, there should not be any error and the service account with its namespace should be returned.
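The glob-style destination-server matching these tests exercise can be illustrated with a small sketch (Python's `fnmatch` here for brevity; Argo CD's actual matcher is the Go `DeriveServiceAccountToImpersonate` path and may differ in details):

```python
import fnmatch

# Illustration only: mimics the glob semantics the tests above exercise.
# match_server is a hypothetical helper, not part of Argo CD.
def match_server(pattern: str, server: str) -> bool:
    """Return True if the destination server URL matches the glob pattern."""
    return fnmatch.fnmatch(server, pattern)

# "*" matches any server; a prefix glob matches a family of clusters.
print(match_server("https://*.default.svc", "https://kubernetes.default.svc"))  # True
print(match_server("https://other.example.com", "https://kubernetes.default.svc"))  # False
```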
@@ -1653,3 +1655,116 @@ func dig(obj any, path ...any) any {
 	return i
 }

+func TestValidateSyncPermissions(t *testing.T) {
+	t.Parallel()
+
+	newResource := func(group, kind, name, namespace string) *unstructured.Unstructured {
+		obj := &unstructured.Unstructured{}
+		obj.SetGroupVersionKind(schema.GroupVersionKind{Group: group, Version: "v1", Kind: kind})
+		obj.SetName(name)
+		obj.SetNamespace(namespace)
+		return obj
+	}
+
+	project := &v1alpha1.AppProject{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "test-project",
+			Namespace: "argocd",
+		},
+		Spec: v1alpha1.AppProjectSpec{
+			Destinations: []v1alpha1.ApplicationDestination{
+				{Namespace: "default", Server: "*"},
+			},
+		},
+	}
+
+	destCluster := &v1alpha1.Cluster{
+		Server: "https://kubernetes.default.svc",
+	}
+
+	noopGetClusters := func(_ string) ([]*v1alpha1.Cluster, error) {
+		return nil, nil
+	}
+
+	t.Run("nil APIResource returns error", func(t *testing.T) {
+		t.Parallel()
+		un := newResource("apps", "Deployment", "my-deploy", "default")
+
+		err := validateSyncPermissions(project, destCluster, noopGetClusters, un, nil)
+
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "failed to get API resource info for apps/Deployment")
+		assert.Contains(t, err.Error(), "unable to verify permissions")
+	})
+
+	t.Run("permitted namespaced resource returns no error", func(t *testing.T) {
+		t.Parallel()
+		un := newResource("", "ConfigMap", "my-cm", "default")
+		res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
+
+		err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
+
+		assert.NoError(t, err)
+	})
+
+	t.Run("group kind not permitted returns error", func(t *testing.T) {
+		t.Parallel()
+		projectWithDenyList := &v1alpha1.AppProject{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "restricted-project",
+				Namespace: "argocd",
+			},
+			Spec: v1alpha1.AppProjectSpec{
+				Destinations: []v1alpha1.ApplicationDestination{
+					{Namespace: "*", Server: "*"},
+				},
+				ClusterResourceBlacklist: []v1alpha1.ClusterResourceRestrictionItem{
+					{Group: "rbac.authorization.k8s.io", Kind: "ClusterRole"},
+				},
+			},
+		}
+		un := newResource("rbac.authorization.k8s.io", "ClusterRole", "my-role", "")
+		res := &metav1.APIResource{Name: "clusterroles", Namespaced: false}
+
+		err := validateSyncPermissions(projectWithDenyList, destCluster, noopGetClusters, un, res)
+
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "is not permitted in project")
+	})
+
+	t.Run("namespace not permitted returns error", func(t *testing.T) {
+		t.Parallel()
+		un := newResource("", "ConfigMap", "my-cm", "kube-system")
+		res := &metav1.APIResource{Name: "configmaps", Namespaced: true}
+
+		err := validateSyncPermissions(project, destCluster, noopGetClusters, un, res)
+
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "namespace kube-system is not permitted in project")
+	})
+
+	t.Run("cluster-scoped resource skips namespace check", func(t *testing.T) {
+		t.Parallel()
+		projectWithClusterResources := &v1alpha1.AppProject{
+			ObjectMeta: metav1.ObjectMeta{
+				Name:      "test-project",
+				Namespace: "argocd",
+			},
+			Spec: v1alpha1.AppProjectSpec{
+				Destinations: []v1alpha1.ApplicationDestination{
+					{Namespace: "default", Server: "*"},
+				},
+				ClusterResourceWhitelist: []v1alpha1.ClusterResourceRestrictionItem{
+					{Group: "*", Kind: "*"},
+				},
+			},
+		}
+		un := newResource("", "Namespace", "my-ns", "")
+		res := &metav1.APIResource{Name: "namespaces", Namespaced: false}
+
+		err := validateSyncPermissions(projectWithClusterResources, destCluster, noopGetClusters, un, res)
+
+		assert.NoError(t, err)
+	})
+}
Binary file not shown.
Before: Size 3 MiB | After: Size 23 MiB

docs/assets/ghcr-package-event.png (new file)
Binary file not shown. After: Size 11 KiB

docs/assets/repo-add-azure-service-principal.png (new file)
Binary file not shown. After: Size 75 KiB

@@ -38,23 +38,23 @@ and others. Although you can make changes to these files and run them locally, i
 1. Fork and clone the [Argo UI repository](https://github.com/argoproj/argo-ui).

-2. `cd` into your `argo-ui` directory, and then run `yarn install`.
+2. `cd` into your `argo-ui` directory, and then run `pnpm install`.

 3. Make your file changes.

-4. Run `yarn start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.
+4. Run `pnpm start` to start a [storybook](https://storybook.js.org/) dev server and view the components in your browser. Make sure all your changes work as expected.

-5. Use [yarn link](https://classic.yarnpkg.com/en/docs/cli/link/) to link Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)
+5. Use [pnpm link](https://pnpm.io/cli/link) to link the Argo UI package to your Argo CD repository. (Commands below assume that `argo-ui` and `argo-cd` are both located within the same parent folder)

     * `cd argo-ui`
-    * `yarn link`
+    * `pnpm link`
     * `cd ../argo-cd/ui`
-    * `yarn link argo-ui`
+    * `pnpm link argo-ui`

     Once the `argo-ui` package has been successfully linked, test changes in your local development environment.

 6. Commit changes and open a PR to [Argo UI](https://github.com/argoproj/argo-ui).

-7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `yarn add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/yarn.lock` file to use the latest master commit for argo-ui.
+7. Once your PR has been merged in Argo UI, `cd` into your `argo-cd/ui` folder and run `pnpm add git+https://github.com/argoproj/argo-ui.git`. This will update the commit SHA in the `ui/pnpm-lock.yaml` file to use the latest master commit for argo-ui.

-8. Submit changes to `ui/yarn.lock` in a PR to Argo CD.
+8. Submit changes to `ui/pnpm-lock.yaml` in a PR to Argo CD.

@@ -23,12 +23,37 @@ All following commands in this guide assume the namespace is already set.
 kubectl config set-context --current --namespace=argocd
 ```

-### Pull in all build dependencies
+### Pull in all UI build dependencies

-As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required dependencies, issue:
+As build dependencies change over time, you have to synchronize your development environment with the current specification. In order to pull in all required UI dependencies (NPM packages), issue:

 * `make dep-ui` or `make dep-ui-local`

+These commands run the `pnpm install --frozen-lockfile` command, which only brings in package versions that are defined in the `pnpm-lock.yaml` file, without trying to resolve and download new package versions.
+
+### Updating UI build dependencies
+
+If you need to add new UI dependencies or update existing ones, you need
+to run a `pnpm` command in the ./ui directory to resolve and download the new packages.
+
+You can run it in the Docker container using the `make run-pnpm` make target.
+
+For example, to add a new dependency `newpackage`, you may run a command like:
+
+```shell
+make run-pnpm PNPM_COMMAND="add newpackage --ignore-scripts"
+```
+
+To upgrade an existing package:
+
+```shell
+make run-pnpm PNPM_COMMAND="update existingpackage@1.0.2 --ignore-scripts"
+```
+
+Please consider using best security practices when adding or upgrading
+NPM dependencies, such as this
+[guide](https://github.com/lirantal/npm-security-best-practices/blob/main/README.md).

 ### Generate API glue code and other assets

 Argo CD relies on Google's [Protocol Buffers](https://developers.google.com/protocol-buffers) for its API, and this makes heavy use of auto-generated glue code and stubs. Whenever you touched parts of the API code, you must re-generate the auto generated code.

@@ -60,7 +85,7 @@ The Linter might make some automatic changes to your code, such as indentation f
 * Finally, after the Linter reports no errors, run `git status` or `git diff` to check for any changes made automatically by Lint
 * If there were automatic changes, commit them to your local branch

-If you touched UI code, you should also run the Yarn linter on it:
+If you touched UI code, you should also run the linter on it:

 * Run `make lint-ui` or `make lint-ui-local`
 * Fix any of the errors reported by it

@@ -21,8 +21,8 @@ These are the upcoming releases dates:
 | v3.1 | Monday, Jun. 16, 2025 | Monday, Aug. 4, 2025 | [Christian Hernandez](https://github.com/christianh814) | [Alexandre Gaudreault](https://github.com/agaudreault) | [checklist](https://github.com/argoproj/argo-cd/issues/23347) |
 | v3.2 | Monday, Sep. 15, 2025 | Monday, Nov. 3, 2025 | [Nitish Kumar](https://github.com/nitishfy) | [Michael Crenshaw](https://github.com/crenshaw-dev) | [checklist](https://github.com/argoproj/argo-cd/issues/24539) |
 | v3.3 | Monday, Dec. 15, 2025 | Monday, Feb. 2, 2026 | [Peter Jiang](https://github.com/pjiang-dev) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/25211) |
-| v3.4 | Monday, Mar. 16, 2026 | Monday, May. 4, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
-| v3.5 | Monday, Jun. 15, 2026 | Monday, Aug. 3, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |
+| v3.4 | Monday, Mar. 16, 2026 | Tuesday, May. 5, 2026 | [Codey Jenkins](https://github.com/FourFifthsCode) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26527) |
+| v3.5 | Tuesday, Jun. 16, 2026 | Tuesday, Aug. 4, 2026 | [Patroklos Papapetrou](https://github.com/ppapapetrou76) | [Regina Voloshin](https://github.com/reggie-k) | [checklist](https://github.com/argoproj/argo-cd/issues/26746) |

 Actual release dates might differ from the plan by a few days.

@@ -36,10 +36,10 @@ effectively means that there is a seven-week feature freeze.
 These are the approximate release dates:

-* The first Monday of February
-* The first Monday of May
-* The first Monday of August
-* The first Monday of November
+* The first Tuesday of February
+* The first Tuesday of May
+* The first Tuesday of August
+* The first Tuesday of November

 Dates may be shifted slightly to accommodate holidays. Those shifts should be minimal.

@@ -86,6 +86,7 @@ CVEs in Argo CD code will be patched for all supported versions. Read more about
 Dependencies are evaluated before being introduced to ensure they:

 1) are actively maintained

 2) are maintained by trustworthy maintainers

 These evaluations vary from dependency to dependency.

@@ -98,11 +98,15 @@ checks to see if the release came out correctly:

 ### If something went wrong

-If something went wrong, damage should be limited. Depending on the steps that
-have been performed, you will need to manually clean up.
+A new Argo CD release results in:
+
+- A new GitHub release created
+- A stable Git tag pointing to the release (if the release is the latest release)
+- The release's Go packages published for using Argo CD code as a dependency
+- Docker images and SBOM artifacts published

-* If the container image has been pushed to Quay.io, delete it
-* Delete the release (if created) from the `Releases` page on GitHub
+Because of all the above dependencies, if a release fails, it is not safe to delete and recreate it.
+Instead, create the next patch release (for example, if `3.2.4` failed, create `3.2.5` after fixing the problem, but don't recreate `3.2.4`).
+Upon successful publishing of the fixed release (`3.2.5` in our example), copy the full release notes manually from the failed release (`3.2.4` in our example) and then update the failed release's notes to state that the release is invalid and should not be used.

 ### Manual releasing

@@ -212,7 +212,7 @@ export IMAGE_TAG=1.5.0-myrc

 > [!NOTE]
 > The image will be built for `linux/amd64` platform by default. If you are running on Mac with Apple chip (ARM),
-> you need to specify the correct buld platform by running:
+> you need to specify the correct build platform by running:
 > ```bash
 > export TARGET_ARCH=linux/arm64
 > ```

@@ -1,7 +1,8 @@
 # Submitting PRs

 ## Prerequisites
-1. [Development Environment](development-environment.md)
+
+1. [Development Environment](development-environment.md)
 2. [Toolchain Guide](toolchain-guide.md)
 3. [Development Cycle](development-cycle.md)

@@ -10,7 +11,7 @@
 > [!NOTE]
 > **Before you start**
 >
-> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the code base.
+> The Argo CD project continuously grows, both in terms of features and community size. It gets adopted by more and more organizations which entrust Argo CD to handle their critical production workloads. Thus, we need to take great care with any changes that affect compatibility, performance, scalability, stability and security of Argo CD. For this reason, every new feature or larger enhancement must be properly designed and discussed before it gets accepted into the codebase.
 >
 > We do welcome and encourage everyone to participate in the Argo CD project, but please understand that we can't accept each and every contribution from the community, for various reasons. If you want to submit code for a great new feature or enhancement, we kindly ask you to take a look at the
 > [code contribution guide](code-contributions.md#) before you start to write code or submit a PR.

@@ -21,10 +22,10 @@ If you need guidance with submitting a PR, or have any other questions regarding

 ## Before Submitting a PR

-1. Rebase your branch against upstream main:
+1. Rebase your branch against upstream master:
    ```shell
    git fetch upstream
-   git rebase upstream/main
+   git rebase upstream/master
    ```

 2. Run pre-commit checks:

@@ -39,9 +40,9 @@ When you submit a PR against Argo CD's GitHub repository, a couple of CI checks
 > [!NOTE]
 > Please make sure that you always create PRs from a branch that is up-to-date with the latest changes from Argo CD's master branch. Depending on how long it takes for the maintainers to review and merge your PR, it might be necessary to pull in latest changes into your branch again.

-Please understand that we, as an Open Source project, have limited capacities for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer as expected.
+Please understand that we, as an Open Source project, have limited capacities for reviewing and merging PRs to Argo CD. We will do our best to review your PR and give you feedback as soon as possible, but please bear with us if it takes a little longer than expected.

-The following read will help you to submit a PR that meets the standards of our CI tests:
+The following guide will help you to submit a PR that meets the standards of our CI tests:

 ## Title of the PR

@@ -56,6 +57,7 @@ We use [PR title checker](https://github.com/marketplace/actions/pr-title-checke
 * `docs` - Your PR improves the documentation
 * `chore` - Your PR improves any internals of Argo CD, such as the build process, unit tests, etc
 * `refactor` - Your PR refactors the code base, without adding new features or fixing bugs
+* `revert` - Your PR reverts a previous commit

 Please prefix the title of your PR with one of the valid categories. For example, if you chose the title `Add documentation for GitHub SSO integration` for your PR, please use `docs: Add documentation for GitHub SSO integration` instead.
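A minimal sketch of such a title check (Python for illustration; the regex and the inclusion of `fix`/`feat` are assumptions, since the full category list is truncated here and the authoritative one lives in the PR title checker configuration):

```python
import re

# Sketch only: validates that a PR title starts with a known category prefix.
# The category set below is assumed, not the action's actual configuration.
CATEGORIES = {"fix", "feat", "docs", "chore", "refactor", "revert"}

def has_valid_prefix(title: str) -> bool:
    """Return True if the title begins with '<category>: ' and a summary."""
    m = re.match(r"^([a-z]+): \S", title)
    return bool(m) and m.group(1) in CATEGORIES

print(has_valid_prefix("docs: Add documentation for GitHub SSO integration"))  # True
print(has_valid_prefix("Add documentation for GitHub SSO integration"))        # False
```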
@@ -45,7 +45,7 @@ The Makefile's `start-e2e` target starts instances of ArgoCD on your local machi
 - `ARGOCD_E2E_REPOSERVER_PORT`: Listener port for `argocd-reposerver` (default: `8081`)
 - `ARGOCD_E2E_DEX_PORT`: Listener port for `dex` (default: `5556`)
 - `ARGOCD_E2E_REDIS_PORT`: Listener port for `redis` (default: `6379`)
-- `ARGOCD_E2E_YARN_CMD`: Command to use for starting the UI via Yarn (default: `yarn`)
+- `ARGOCD_E2E_PNPM_CMD`: Command to use for starting the UI via pnpm (default: `pnpm`)
 - `ARGOCD_E2E_DIR`: Local path to the repository to use for ephemeral test data

 If you have changed the port for `argocd-server`, be sure to also set `ARGOCD_SERVER` environment variable to point to that port, e.g. `export ARGOCD_SERVER=localhost:8888` before running `make test-e2e` so that the test will communicate to the correct server component.
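As a sketch, the defaults above can be modeled as an environment lookup with fallback (the `e2e_port` helper is hypothetical; the variable names and default ports are taken from the list above):

```python
import os

# Hypothetical helper: resolve an e2e listener port from the environment,
# falling back to the documented default when the variable is unset.
DEFAULTS = {
    "ARGOCD_E2E_REPOSERVER_PORT": 8081,
    "ARGOCD_E2E_DEX_PORT": 5556,
    "ARGOCD_E2E_REDIS_PORT": 6379,
}

def e2e_port(name: str) -> int:
    return int(os.environ.get(name, DEFAULTS[name]))

print(e2e_port("ARGOCD_E2E_REDIS_PORT"))
```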
Some files were not shown because too many files have changed in this diff.