The Django application runs under ASGI in production; to get the same
behavior, we run the development container with uvicorn too, with
options more appropriate for a development environment.
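A minimal sketch of the idea, assuming the ASGI entry point is
`impress.asgi:application` (the module path is illustrative):

```python
# Hypothetical dev entry point: uvicorn with auto-reload enabled
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "impress.asgi:application",  # ASGI app as a dotted path
        host="0.0.0.0",
        port=8000,
        reload=True,  # restart on code changes, development only
    )
```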
We want the possibility to configure specific environment variables on
the backend and celery deployments. Most of them are common, but in the
case of the newly added DB_PSYCOPG_POOL_MIN_SIZE setting we want to
configure it only on the backend deployment, not on the celery one, or
with a different value.
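For illustration, a sketch of how such a setting could feed Django's
psycopg connection pool (available since Django 5.1); the variable
handling and defaults are assumptions:

```python
# Hypothetical settings sketch: pool size read from the environment
import os

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "docs"),
        "OPTIONS": {
            # Django 5.1+ forwards this dict to psycopg_pool.ConnectionPool
            "pool": {
                "min_size": int(os.environ.get("DB_PSYCOPG_POOL_MIN_SIZE", "1")),
            },
        },
    }
}
```

With such a setup, the backend deployment can export a larger
DB_PSYCOPG_POOL_MIN_SIZE than celery, or celery can omit it entirely.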
This is a naive first switch from sync to async.
This enables the backend to keep answering incoming requests
while streaming LLM results to the user.
For sure there is room for code cleaning and improvements, but
this provides a nice improvement out of the box.
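As a rough sketch of the pattern (the LLM client and the view are
illustrative, not the actual implementation), an async view can stream
chunks while the event loop keeps serving other requests:

```python
# Hypothetical async streaming view; llm_client is an assumed helper
from django.http import StreamingHttpResponse

async def stream_llm_answer(request):
    async def event_stream():
        # assumed async generator yielding text chunks from the LLM
        async for chunk in llm_client.stream(request.GET["prompt"]):
            yield chunk.encode()

    # Under ASGI, Django consumes the async iterator without blocking
    return StreamingHttpResponse(event_stream(), content_type="text/event-stream")
```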
## Purpose
Fix #1616 // Replaces #1708
For now, the reconciliation requests are imported through CSV in the
Django admin, which sends a confirmation email to both addresses. When
both are confirmed, the actual reconciliation is processed, and all
user-related content is updated.
## Proposal
- [x] New `UserReconciliationCsvImport` model to manage the import of
reconciliation requests through a task
(`user_reconciliation_csv_import_job`)
- [x] New `UserReconciliation` model to store the user reconciliation
requests themselves (a row = an `active_user`/`inactive_user` pair)
- [x] On save, a confirmation email is sent to the users
- [x] A `process_reconciliation` admin action processes the
reconciliation on the requested entries, if both emails have been
confirmed.
- [x] Bulk update the `DocumentAccess` items, while managing the case
where both users have access to the document (keeping the higher role;
see the sketch after this list)
- [x] Bulk update the `LinkTrace` items, while managing the case where
both users have link traces to the document
- [x] Bulk update the `DocumentFavorite` items, while managing the case
where both users have put the document in their favorites
- [x] Bulk update the comment system items (`Thread`, `Comment` and
`Reaction` items)
- [x] Bulk update the `is_active` status on both users
- [x] New `USER_RECONCILIATION_FORM_URL` env variable for the "make a
new request" URL in an email.
- [x] Write unit tests
- [x] Remove the unused `email_user()` method on `User`, replaced with
`send_email()` similar to the one on the `Document` model
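A minimal sketch of the `DocumentAccess` merge mentioned above,
assuming an ordered list of role values (the role names, import path
and helper name are assumptions):

```python
# Hypothetical sketch of the merge; role names are assumptions
from core.models import DocumentAccess  # assumed import path

ROLE_PRIORITY = ["reader", "editor", "administrator", "owner"]

def merge_document_accesses(active_user, inactive_user):
    """Reassign the inactive user's accesses, keeping the higher role."""
    active_by_doc = {
        access.document_id: access
        for access in DocumentAccess.objects.filter(user=active_user)
    }
    to_update, to_delete = [], []
    for access in DocumentAccess.objects.filter(user=inactive_user):
        existing = active_by_doc.get(access.document_id)
        if existing is None:
            access.user = active_user  # no conflict: simply reassign
            to_update.append(access)
        elif ROLE_PRIORITY.index(access.role) > ROLE_PRIORITY.index(existing.role):
            existing.role = access.role  # keep the higher of the two roles
            to_update.append(existing)
            to_delete.append(access.pk)
        else:
            to_delete.append(access.pk)
    DocumentAccess.objects.bulk_update(to_update, ["user", "role"])
    DocumentAccess.objects.filter(pk__in=to_delete).delete()
```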
## Demo: reconciliation success page
<img width="1149" height="746" alt="image"
src="https://github.com/user-attachments/assets/09ba2b38-7af3-41fa-a64f-ce3c4fd8548d"
/>
---------
Co-authored-by: Anthony LC <anthony.le-courric@mail.numerique.gouv.fr>
Not every project requires silent login.
This commit adds a new feature flag
FRONTEND_SILENT_LOGIN_ENABLED to enable or
disable silent login functionality.
Most of the Docs app is configured through environment
variables, except the URL in emails, which
came from the Django site table.
Now we can set it with the DJANGO_EMAIL_URL_APP
environment variable for better consistency.
We keep the previous way as a fallback to avoid
breaking changes.
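A minimal sketch of the fallback, assuming a helper in the email
layer (the helper name is illustrative):

```python
# Hypothetical helper: prefer the env variable, else the sites table
import os
from django.contrib.sites.models import Site

def get_email_app_url():
    url = os.environ.get("DJANGO_EMAIL_URL_APP")
    if url:
        return url
    # previous behavior, kept to avoid breaking existing deployments
    return f"https://{Site.objects.get_current().domain}"
```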
Uploaded images and documents were limited to 10MB.
For the conversion service, we allow up to 20MB.
For the dev and feature environments, we have to increase this value
accordingly.
Added Helm templates for docspec deployment and service to enable
document specification conversion in the Kubernetes environment.
Updated Tiltfile, compose.yml, and Helm values to
configure docspec integration alongside the
backend converter service for document import functionality.
The given_name and usual_name claims are not configured in the OIDC
scopes. When a user connects to Docs with the dev and feature
configuration, we don't have this information.
The certifi CA certificate is now stored under a static path
(/cert/cacert.pem) to avoid issues when Python is upgraded and the path
to the certificate changes.
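A minimal sketch of the idea, assuming the copy happens at image
build time:

```python
# Hypothetical build-time step: copy the certifi bundle to a stable path
import shutil
import certifi

# certifi.where() lives under the interpreter's site-packages, e.g.
# /usr/local/lib/python3.13/site-packages/certifi/cacert.pem
shutil.copy(certifi.where(), "/cert/cacert.pem")
```

Clients can then point at /cert/cacert.pem (for example through
SSL_CERT_FILE or REQUESTS_CA_BUNDLE) regardless of the Python version
installed.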
Our Helm chart wasn't suitable for use with Helm alone because jobs
remained after deployment. We chose to configure ttlSecondsAfterFinished
to clean up jobs after a period of time.
Fix the certificate directory reference that still pointed to the
Python 3.12 folder after upgrading to Python 3.13. Resolves certificate
verification errors in the Tilt stack caused by the incorrect
certificate location.
We want to change the cache key prefix using an environment variable.
This setting can be changed at every deployment in order to start from
a fresh new cache.
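For illustration, a sketch of a cache configuration with an env-driven
prefix (the variable name and Redis location are assumptions):

```python
# Hypothetical cache settings: a new prefix orphans all old keys
import os

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": os.environ.get("REDIS_URL", "redis://redis:6379/1"),
        # bump this value at deployment time to start from a fresh cache
        "KEY_PREFIX": os.environ.get("CACHES_KEY_PREFIX", "docs"),
    }
}
```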
In order to load a custom theme file with our Helm chart, we allow
loading the content of a file into a ConfigMap and then mounting this
ConfigMap as a volume in the backend deployment.
The Keycloak configuration used in the dev environment is too generic
and we can have a conflict with other projects that are using the same
ingress domain. Also, the namespace was missing in the Keycloak extra
ConfigMap, leading to it being created in the default namespace.
We added the `FRONTEND_URL_JSON_FOOTER` environment
variable. It makes it possible to provide
your own footer content in the frontend.
If the variable is not set, the footer will not
be displayed.
The way the collaboration server authenticates the user
has changed. We adapt the configuration to the new
way of doing it, by removing the nginx auth URL
and by adding the COLLABORATION_BACKEND_BASE_URL
setting.
Support for two API keys has been added to the YProvider microservice to
decouple responsibilities between the collaboration server and other
endpoints. This improves security by scoping keys to specific purposes and
ensures a clearer separation of concerns for easier management and debugging.
Abstracted base URL and API key under 'y-provider' for
reuse in future endpoints, aligning with microservice naming.
Please note the YProvider API here is internal to the cluster.
In fact, we don't want these endpoints to be exposed by any ingress.
We need to keep the stickiness between the
collaboration API and the WS server; to do so,
we will use "upstream-hash-by: $arg_room", meaning
that the stickiness will be based on the room query
parameter.
We need to have 2 ingresses to handle the
"collaboration_auth" subrequest: only the WS routes
have to use it.
When an access is updated or removed, the
collaboration server is notified to reset the
access's connection; once disconnected, the
affected clients automatically reconnect through
the nginx subrequest, and so get the correct
rights.
We do the same when the document link is
updated, except here we reset every
connection.
We want to be able to reset the connections of a document.
To do this, the backend needs to be able to send a
request to the collaboration server.
To do so, we added the endpoint
POST "/collaboration/api/reset-connections"
to the collaboration server, using "express".
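A minimal sketch of the backend-side call, assuming a base URL and a
shared secret in the Django settings (both setting names are
illustrative):

```python
# Hypothetical notification from the Django backend; names assumed
import requests
from django.conf import settings

def reset_document_connections(room):
    """Ask the collaboration server to drop the room's connections."""
    response = requests.post(
        f"{settings.COLLABORATION_API_URL}/reset-connections",
        params={"room": room},
        headers={"Authorization": settings.COLLABORATION_SERVER_SECRET},
        timeout=10,
    )
    response.raise_for_status()
```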
We need to improve security on the access to the collaboration server.
We can use the same pattern as for media files, leveraging the nginx
subrequest feature.
We want to use the same pattern for the websocket collaboration service
authorization as what we use for media files.
This addition comes in the next commit, but doing it efficiently
required factorizing some code with the media auth view.
Logs were not sent to the console, so it was hard to debug in k8s.
We propose a ready-made logging configuration that sends everything
to the console and allows adjusting log levels with environment
variables.
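A minimal sketch of such a configuration, with the environment
variable names as assumptions:

```python
# Hypothetical Django LOGGING: console only, env-driven levels
import os

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {"console": {"class": "logging.StreamHandler"}},
    "root": {
        "handlers": ["console"],
        "level": os.environ.get("LOGGING_LEVEL_ROOT", "INFO"),
    },
    "loggers": {
        "django": {
            "handlers": ["console"],
            "level": os.environ.get("LOGGING_LEVEL_DJANGO", "INFO"),
            "propagate": False,
        },
    },
}
```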
This is a revert of 1da5a, removing actual deployments and keeping
only the dev environment in Tilt.
The clean-up was a bit heavy-handed. We should keep the Helm
chart in the development repository and move away only the
deployment configuration.
We make use of nginx subrequests to block media file downloads while
we check for access rights. The request is then proxied to the object
storage engine and authorization is added via the "Authorization"
header. This way the media URLs are static and can be stored in the
document's JSON content without compromising on security: access
control is done on all requests based on the user's session cookie.
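A rough sketch of the auth view behind the subrequest; the header name
matches common auth_request setups, and both helpers are assumptions:

```python
# Hypothetical media auth view; helper names are illustrative
from django.core.exceptions import PermissionDenied
from django.http import HttpResponse

def media_auth(request):
    original_url = request.headers.get("X-Original-URL", "")
    document = get_document_from_media_url(original_url)  # assumed helper
    if not document.get_abilities(request.user).get("retrieve"):
        raise PermissionDenied()
    # nginx copies this header onto the request proxied to object storage
    return HttpResponse(
        headers={"Authorization": sign_storage_request(original_url)}  # assumed
    )
```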
We were using the version of the API from the
.env file, but we could have different versions
of the API in the same app, so we now use the
version from the request.
The ingress was the same for the frontend, the
backend and the websocket, but the websocket
needs to be handled differently, so we created
a new ingress specifically for the websocket.