mirror of https://github.com/argoproj/argo-cd
synced 2026-04-21 17:07:16 +00:00

Update release notes for v0.11 and add more documentation (#883)

parent 974ab11b76
commit 0a7d14040d
9 changed files with 252 additions and 169 deletions
CHANGELOG.md (96 lines changed)
@@ -1,5 +1,101 @@
# Changelog

## v0.11.0

This is Argo CD's biggest release ever and introduces a completely redesigned controller architecture.

### New Features

#### New application controller architecture

The application controller has a completely redesigned architecture for better scalability and
improved performance during application reconciliation. This was achieved by maintaining an
in-memory, live state cache of lightweight Kubernetes object metadata. During reconciliation, the
controller no longer performs expensive, in-line queries of app-labeled resources against the K8s
API server, instead relying on the metadata in the local state cache. This dramatically improves
performance and responsiveness, and is less burdensome on the K8s API server. A second benefit is
that the relationships between objects, computed for the resource tree, can be displayed even for
custom resources.

#### Multi-namespaced applications

Argo CD will now honor any explicitly set namespace in a manifest. Resources without a namespace
will continue to be deployed to the namespace specified in `spec.destination.namespace`. This
enables support for a class of applications that install to multiple namespaces. For example,
Argo CD now supports the istio helm chart, which deploys some resources to an explicit
`istio-system` namespace.
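
As an illustrative sketch (the resource name here is hypothetical), a manifest which pins a
resource to its own namespace will now be honored as-is:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-sidecar-injector   # illustrative resource name
  namespace: istio-system        # explicit namespace is now honored
```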

#### Large application support

Full resource objects are no longer stored in the Application CRD object status. Instead, only
lightweight metadata is stored in the status, such as a resource's sync and health status.
This change enables Argo CD to support applications with a very large number of resources
(e.g. istio), and reduces the bandwidth requirements when listing applications in the UI.

#### Resource lifecycle hook improvements

Resource hooks are now visible from the UI. Additionally, bare Pods with a restart policy of Never
can now be used as a resource hook, as an alternative to Jobs or Workflows.
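
As a sketch (the hook annotation and image are assumptions here; consult the resource hooks
documentation for the exact contract), a bare Pod acting as a pre-sync hook might look like:

```
apiVersion: v1
kind: Pod
metadata:
  name: db-migration
  annotations:
    argocd.argoproj.io/hook: PreSync   # assumed hook annotation
spec:
  restartPolicy: Never                 # required for Pods used as hooks
  containers:
  - name: migrate
    image: mycompany/migrate:v2.0      # hypothetical image
```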

#### K8s recommended application labels

Resource labeling has been changed to use `app.kubernetes.io/instance`, as recommended in
[Kubernetes recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/)
(changed from `applications.argoproj.io/app-name`). This enables applications created by Argo CD
to interoperate with other tooling that is also converging on this labeling, such as the Kubernetes
dashboard. Additionally, Argo CD will no longer inject any tracking labels at the
`spec.template.metadata` level.
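
In practice, this means a synced resource now carries a tracking label along these lines (the
application name is illustrative):

```
metadata:
  labels:
    # injected by Argo CD to associate the resource with its application
    app.kubernetes.io/instance: guestbook   # hypothetical application name
```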

#### External OIDC provider support

Argo CD now supports delegating authentication to an existing, external OIDC provider without the
need for Dex (e.g. Okta, OneLogin, Auth0, Microsoft, etc.).

The optional [Dex IDP OIDC provider](https://github.com/dexidp/dex) is still bundled as part of the
default installation, in order to provide a seamless out-of-the-box experience, and enables Argo CD
to integrate with non-OIDC providers, or to benefit from Dex's full range of
[connectors](https://github.com/dexidp/dex/tree/master/Documentation/connectors).

#### OIDC group claims bindings to Project Roles

Group claims from the OIDC provider can now be bound to Argo CD project roles. Previously, group
claims could only be managed in the centralized ConfigMap, `argocd-rbac-cm`. This enables project
admins to self-service access to applications within a project.
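
A sketch of what such a binding might look like on an AppProject (the field names, policy syntax,
and group name shown here are assumptions, not verbatim from this release):

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
spec:
  roles:
  - name: developers
    # OIDC group claim bound to this project role (group name is hypothetical)
    groups:
    - my-org:developers
    policies:
    - p, proj:my-project:developers, applications, sync, my-project/*, allow
```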

#### Declarative Argo CD configuration

Argo CD settings can now be configured either declaratively or imperatively. The `argocd-cm`
ConfigMap now has a `repositories` field, which can reference credentials in a normal Kubernetes
secret which you can create declaratively, outside of Argo CD.
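
A minimal sketch of such a configuration (the repository URL and secret names are hypothetical;
check the declarative setup documentation for the exact schema):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  repositories: |
    - url: https://github.com/mycompany/my-private-repo.git
      # credentials referenced from a normal Kubernetes secret
      usernameSecret:
        name: my-repo-creds    # hypothetical secret name
        key: username
      passwordSecret:
        name: my-repo-creds
        key: password
```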

#### Helm repository support

Helm repositories can be configured at the system level, enabling the deployment of helm charts
which have dependencies on external helm repositories.
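
A sketch of a system-level helm repository entry (the `helm.repositories` key and the URL shown
are assumptions; see the declarative setup documentation for the exact schema):

```
data:
  helm.repositories: |
    - name: istio.io
      url: https://storage.googleapis.com/istio-release/releases/1.0.0/charts   # hypothetical URL
```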

### Breaking changes:

* As a consequence of moving to the recommended Kubernetes labels, when upgrading from v0.10 to
v0.11, all applications will immediately become OutOfSync due to the change in labeling. This will
correct itself with another sync of the application. However, since Pods will be recreated, please
take this into consideration, especially if your applications are configured with auto-sync.

* There was significant reworking of the `app.status` fields to simplify the data structure and
remove fields which were no longer used by the controller. No breaking changes were made in
`app.spec`.

* An older Argo CD CLI (v0.10 and below) will not be compatible with Argo CD v0.11. To keep
CI pipelines in sync with the API server, it is recommended to have pipelines download the CLI
directly from the API server (`https://${ARGOCD_SERVER}/download/argocd-linux-amd64`) during the
CI pipeline.

### Changes since v0.10:
+ Declarative setup and configuration of ArgoCD (#536)
+ Declaratively add helm repositories (#747)
+ Switch to k8s recommended app.kubernetes.io/instance label (#857)
+ Ability for a single application to deploy into multiple namespaces (#696)
+ Self service group access to project applications (#742)
+ Support for Pods as a sync hook (#801)
+ Support 'crd-install' helm hook (#355)
* Remove resources state from application CRD (#758)
* Refactor, consolidate and rename resource type data structures
* Improve Application state reconciliation performance (#806)
* API server & UI should serve argocd binaries instead of linking to GitHub (#716)
- Failed to deploy helm chart with local dependencies and no internet access (#786)
- Out of sync reported if Secrets with stringData are used (#763)
- Unable to delete application in K8s v1.12 (#718)

## v0.10.6 (2018-11-14)
- Fix issue preventing in-cluster app sync due to go-client changes (issue #774)

README.md (16 lines changed)
@@ -72,5 +72,17 @@ For additional details, see [architecture overview](docs/architecture.md).
* Argo CD is being used in production to deploy SaaS services at Intuit

## Roadmap
* Revamped UI, and feature parity with CLI
* Customizable application actions

### v0.11
* New application controller architecture
* Multi-namespaced applications
* Large application support
* Resource lifecycle hook improvements
* K8s recommended application labels
* External OIDC provider support
* OIDC group claims bindings to Project Roles
* Declarative Argo CD configuration
* Helm repository support

### v0.12
* UI improvements

@@ -18,6 +18,8 @@
* [RBAC](rbac.md)

## Other
* [Best Practices](best_practices.md)
* [Configuring Ingress](ingress.md)
* [Automation from CI Pipelines](ci_automation.md)
* [Custom Tooling](custom_tools.md)
* [F.A.Q.](faq.md)

docs/best_practices.md (new file, +53)
@@ -0,0 +1,53 @@
# Best Practices

## Separating config vs. source code repositories

Using a separate git repository to hold your kubernetes manifests, keeping the config separate
from your application source code, is highly recommended for the following reasons:

1. It provides a clean separation of application code vs. application config. There will be times
when you wish to change only one and not the other. For example, you likely do _not_ want to
trigger a build if you are merely updating an annotation in a spec.

2. Cleaner audit log. For auditing purposes, a repo which only holds configuration will have a much
cleaner git history of what changes were made, without the noise stemming from check-ins of
normal development activity.

3. Your application may be comprised of services built from multiple git repositories, but
deployed as a single unit. Oftentimes, microservices applications are comprised of services
with different versioning schemes and release cycles (e.g. ELK, Kafka + Zookeeper). It may not
make sense to store the manifests in the source code repository of a single component.

4. Separate repositories enable separation of access. The person who is developing the app may
not necessarily be the same person who can/should affect the production environment, whether
intentionally or unintentionally.

5. If you are automating your CI pipeline, pushing manifest changes to the same git repository will
likely trigger an infinite loop of build jobs and git commit triggers. Pushing config changes to
a separate repo prevents this from happening.

## Leaving room for imperativeness

It may be desirable to leave room for some imperativeness/automation, and not have everything
defined in your git manifests. For example, if you want the number of your deployment's replicas
to be managed by the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/),
then you would not want to track `replicas` in git.

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  # do not include replicas in the manifests if you want replicas to be controlled by HPA
  # replicas: 1
  template:
    spec:
      containers:
      - image: nginx:1.7.9
        name: nginx
        ports:
        - containerPort: 80
...
```

docs/ci_automation.md (new file, +58)
@@ -0,0 +1,58 @@
# Automation from CI Pipelines

Argo CD follows the GitOps model of deployment, where desired configuration changes are first
pushed to git, and the cluster state then syncs to the desired state in git. This is a departure
from imperative pipelines, which do not traditionally use git repositories to hold application
config.

To push new container images into a cluster managed by Argo CD, the following workflow (or
variations of it) might be used:

1. Build and publish a new container image

```
docker build -t mycompany/guestbook:v2.0 .
docker push mycompany/guestbook:v2.0
```

2. Update the local manifests using your preferred templating tool, and push the changes to git.

NOTE: the use of a different git repository to hold your kubernetes manifests (separate from
your application source code) is highly recommended. See [best practices](best_practices.md)
for further rationale.

```
git clone https://github.com/mycompany/guestbook-config.git
cd guestbook-config

# kustomize
kustomize edit set imagetag mycompany/guestbook:v2.0

# ksonnet
ks param set guestbook image mycompany/guestbook:v2.0

# plain yaml
kubectl patch --local -f config-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"guestbook","image":"mycompany/guestbook:v2.0"}]}}}}' -o yaml

git add .
git commit -m "Update guestbook to v2.0"
git push
```

3. Synchronize the app (optional)

For convenience, the argocd CLI can be downloaded directly from the API server. This is
useful so that the CLI used in the CI pipeline is always kept in sync and uses an argocd binary
that is always compatible with the Argo CD API server.

```
export ARGOCD_SERVER=argocd.mycompany.com
export ARGOCD_AUTH_TOKEN=<JWT token generated from project>
curl -sSL -o /usr/local/bin/argocd https://${ARGOCD_SERVER}/download/argocd-linux-amd64
argocd app sync guestbook
argocd app wait guestbook
```

If [automated synchronization](auto_sync.md) is configured for the application, this step is
unnecessary. The controller will automatically detect the new config (fast-tracked using a
[webhook](webhook.md), or polled every 3 minutes), and automatically sync the new manifests.

@@ -35,8 +35,7 @@ git push upstream vX.Y.Z
```bash
git clone https://github.com/argoproj/homebrew-tap
cd homebrew-tap
shasum -a 256 ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
# edit argocd.rb with version and checksum
./update.sh ~/go/src/github.com/argoproj/argo-cd/dist/argocd-darwin-amd64
git commit -a -m "Update argocd to vX.Y.Z"
git push
```

docs/sso.md (36 lines changed)
@@ -2,6 +2,17 @@

## Overview

There are two ways that SSO can be configured:

* Bundled Dex OIDC provider - use this option if your current provider does not support OIDC
(e.g. SAML, LDAP) or if you wish to leverage any of Dex's connector features (e.g. the ability
to map GitHub organizations and teams to OIDC group claims).

* Existing OIDC provider - use this if you already have an OIDC provider which you are using (e.g.
Okta, OneLogin, Auth0, Microsoft), where you manage your users, groups, and memberships.

## Dex

Argo CD embeds and bundles [Dex](https://github.com/coreos/dex) as part of its installation, for the
purpose of delegating authentication to an external identity provider. Multiple types of identity
providers are supported (OIDC, SAML, LDAP, GitHub, etc...). SSO configuration of Argo CD requires

@@ -65,15 +76,6 @@ data:
        clientSecret: $dex.acme.clientSecret
        orgs:
        - name: your-github-org

    # OIDC example (e.g. Okta)
    - type: oidc
      id: okta
      name: Okta
      config:
        issuer: https://dev-123456.oktapreview.com
        clientID: aaaabbbbccccddddeee
        clientSecret: $dex.okta.clientSecret
```

After saving, the changes should take effect automatically.

@@ -85,3 +87,19 @@ NOTES:
Argo CD will automatically use the correct `redirectURI` for any OAuth2 connectors, to match the
correct external callback URL (e.g. https://argocd.example.com/api/dex/callback)

## Existing OIDC provider

To configure Argo CD to delegate authentication to your existing OIDC provider, add the OAuth2
configuration to the `argocd-cm` ConfigMap under the `oidc.config` key:

```
data:
  url: https://argocd.example.com

  oidc.config: |
    name: Okta
    issuer: https://dev-123456.oktapreview.com
    clientID: aaaabbbbccccddddeee
    clientSecret: $oidc.okta.clientSecret
```

@@ -8,7 +8,7 @@ the target environment.
If a branch name, or a symbolic reference (like HEAD) is specified, Argo CD will continually compare
live state against the resource manifests defined at the tip of the specified branch or the
-deferenced commit of the symbolic reference.
+dereferenced commit of the symbolic reference.

To redeploy an application, a user makes changes to the manifests, and commits/pushes those
changes to the tracked branch/symbolic reference, which will then be detected by the Argo CD controller.

@@ -1,155 +0,0 @@
# This example demonstrates a "blue-green" deployment
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: k8s-bluegreen-
spec:
  entrypoint: k8s-bluegreen
  arguments:
    parameters:
    - name: deployment-name
    - name: service-name
    - name: new-deployment-manifest
  templates:
  - name: k8s-bluegreen
    steps:
    # 1. Create a parallel Kubernetes deployment with tweaks to name and app name
    - - name: create-blue-deployment
        template: clone-deployment
        arguments:
          parameters:
          - name: green-deployment-name
            value: '{{workflow.parameters.deployment-name}}'
          - name: suffix
            value: blue

    # 2. Wait for parallel deployment to become ready
    - - name: wait-for-blue-deployment
        template: wait-deployment-ready
        arguments:
          parameters:
          - name: deployment-name
            value: '{{steps.create-blue-deployment.outputs.parameters.blue-deployment-name}}'

    # 3. Patch the named service to point to the parallel deployment app
    - - name: switch-service-to-blue-deployment
        template: patch-service
        arguments:
          parameters:
          - name: service-name
            value: '{{workflow.parameters.service-name}}'
          - name: app-name
            value: '{{steps.create-blue-deployment.outputs.parameters.blue-deployment-app-name}}'

    # 4. Update the original deployment (receiving no traffic) with a new version
    - - name: create-green-deployment
        template: patch-deployment
        arguments:
          parameters:
          - name: deployment-manifest-data
            value: '{{workflow.parameters.new-deployment-manifest}}'

    # 5. Wait for the original deployment, now updated, to become ready
    - - name: wait-for-green-deployment
        template: wait-deployment-ready
        arguments:
          parameters:
          - name: deployment-name
            value: '{{steps.create-green-deployment.outputs.parameters.green-deployment-name}}'

    # 6. Patch the named service to point to the original, now updated app
    - - name: switch-service-to-green-deployment
        template: patch-service
        arguments:
          parameters:
          - name: service-name
            value: '{{workflow.parameters.service-name}}'
          - name: app-name
            value: '{{steps.create-green-deployment.outputs.parameters.green-deployment-app-name}}'

    # 7. Remove the cloned deployment (no longer receiving traffic)
    - - name: delete-cloned-deployment
        template: delete-deployment
        arguments:
          parameters:
          - name: deployment-name
            value: '{{steps.create-blue-deployment.outputs.parameters.blue-deployment-name}}'

    # end of steps

  - name: clone-deployment
    inputs:
      parameters:
      - name: green-deployment-name
      - name: suffix
    container:
      image: argoproj/argoexec:latest
      command: [sh, -c]
      args: ["
        kubectl get -o json deployments/{{inputs.parameters.green-deployment-name}} | jq -c '.metadata.name+=\"-{{inputs.parameters.suffix}}\" | (.metadata.labels.app, .spec.selector.matchLabels.app, .spec.template.metadata.labels.app) +=\"-{{inputs.parameters.suffix}}\"' | kubectl create -o json -f - > /tmp/blue-deployment;
        jq -j .metadata.name /tmp/blue-deployment > /tmp/blue-deployment-name;
        jq -j .spec.template.metadata.labels.app /tmp/blue-deployment > /tmp/blue-deployment-app-name
      "]
    outputs:
      parameters:
      - name: blue-deployment-name
        valueFrom:
          path: /tmp/blue-deployment-name
      - name: blue-deployment-app-name
        valueFrom:
          path: /tmp/blue-deployment-app-name

  - name: patch-deployment
    inputs:
      parameters:
      - name: deployment-manifest-data
    container:
      image: argoproj/argoexec:latest
      command: [sh, -c]
      args: ["
        echo '{{inputs.parameters.deployment-manifest-data}}' | kubectl apply -o json -f - > /tmp/green-deployment;
        jq -j .metadata.name /tmp/green-deployment > /tmp/green-deployment-name;
        jq -j .spec.template.metadata.labels.app /tmp/green-deployment > /tmp/green-deployment-app-name
      "]
    outputs:
      parameters:
      - name: green-deployment-name
        valueFrom:
          path: /tmp/green-deployment-name
      - name: green-deployment-app-name
        valueFrom:
          path: /tmp/green-deployment-app-name

  - name: wait-deployment-ready
    inputs:
      parameters:
      - name: deployment-name
    container:
      image: argoproj/argoexec:latest
      command: [sh, -c]
      args: ["kubectl rollout status --watch=true 'deployments/{{inputs.parameters.deployment-name}}'"]

  - name: patch-service
    inputs:
      parameters:
      - name: service-name
      - name: app-name
    container:
      image: argoproj/argoexec:latest
      command: [sh, -c]
      args: ["kubectl patch service '{{inputs.parameters.service-name}}' -p '{\"spec\": {\"selector\": {\"app\": \"{{inputs.parameters.app-name}}\"}}}'"]

  - name: delete-deployment
    inputs:
      parameters:
      - name: deployment-name
    resource:
      action: delete
      manifest: |
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: {{inputs.parameters.deployment-name}}