update spelling across docs (#7752)

Jonathan Brennan 2026-02-27 17:53:55 -06:00 committed by GitHub
parent 82dd36a88b
commit 5858c85f41
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
74 changed files with 152 additions and 152 deletions


@@ -168,7 +168,7 @@ That's it!
 [playground](https://space-x-land-with-sub.herokuapp.com)
 I'm now ready to work on the demo for
-[**Svelte Codegen**](https://github.com/ticruz38/graphql-codegen-svelte-apollo)! Stay tunned ;)
+[**Svelte Codegen**](https://github.com/ticruz38/graphql-codegen-svelte-apollo)! Stay tuned ;)
 [@jycouet](https://twitter.com/jycouet)


@@ -75,7 +75,7 @@ We want to make **Azure** and **Bitbucket** a first-class citizens in GraphQL In
 the same experience as you get right now with **GitHub**.
 **Monitoring** will enable you to analyze the traffic of your GraphQL APIs and provide details
-needed to improve performance. Collecting informations about the usage will let you safely remove
+needed to improve performance. Collecting information about the usage will let you safely remove
 deprecated pieces of GraphQL Schema.
 > If you're interested, please reach out to us!


@@ -258,7 +258,7 @@ export interface UserDbObject {
 ```
 There is also the need to handle the difference between embedded types and linked types, so you can
-use `@entity(embbeded: true)` and `@embbeded` to declare link and embedded entities:
+use `@entity(embedded: true)` and `@embedded` to declare link and embedded entities:
 ```graphql
 type User @entity {


@@ -578,7 +578,7 @@ autocomplete inside `.graphql` files.
 GraphQL Code Generator plugins for frontend frameworks integrations (such as
 `typescript-react-apollo` / `typescript-apollo-angular`) are automatically creates an executable
 copy (`DocumentNode`) of your GraphQL operations in the generated code file, and it will
-automatically include it withing your wrapper call.
+automatically include it within your wrapper call.
 It will add that to the output file with `Document` suffix, and `FragmentDoc` for fragments.


@@ -293,7 +293,7 @@ LIMIT
 + | '8771195f-7a8a-4685-8e4e-ae45d017c11b' | 'Uri' | '2023-01-11T10:32:07.853915+00:00' |
 ```
-The only solution here is to also introduce another, truely unique, column to our cursor. In this
+The only solution here is to also introduce another, truly unique, column to our cursor. In this
 case we can utilize the `"id"` column.
 So lets execute the SQL query, but now with both the `id` (`'628995bf-2907-49d1-a36f-978566a053c4'`)


@@ -168,7 +168,7 @@ request-response lifecycle strongly typed.
 For instance, there are some amazing projects from [**The Guild**](/) like
 [**GraphQL Codegen**](https://the-guild.dev/graphql/codegen) which we can use for generating types
 based on our local/remote schema with great Typescript integration, and you have a lot of
-plugins/recepies you can use along with it as well.
+plugins/recipes you can use along with it as well.
 Want to generate Typescript objects based on GQL documents? You can try out
 [**Typed Document Node**](https://github.com/dotansimha/graphql-typed-document-node)


@@ -75,7 +75,7 @@ If you are in the Kubernetes world, you also have a lot of ingress controllers l
 The first thing would be the entry point of all your GraphQL requests. Since GraphQL exposes a
 single endpoint e.g. `/graphql` this becomes the single entry point for all your operations.
-But, I highly wouldn't recommend directly exposing your service to client since it can be unsecure,
+But, I highly wouldn't recommend directly exposing your service to client since it can be insecure,
 difficult to manage things like rate-limiting, load balancing and so on.
 Rather, it is always recommended to expose it via an API Gateway of your choice. Be it Ambassador,
@@ -220,7 +220,7 @@ This applies not just to the GraphQL ecosystem, but anything you choose for that
 - Does it have a clear roadmap for the future? If so, what are the milestones?
 - What are the other alternatives? How does it compare to them?
 - How is the documentation? Does it have tests? Does it have examples which I can refer to?
-- Does it follow standards and is free of Vendor Lockin?
+- Does it follow standards and is free of Vendor Lock-in?
 - Are there any security concerns which this tool or library might create?
 While not all of these questions might have been addressed by the library or tool well, what I see


@@ -24,7 +24,7 @@ use case and building the development workflow which suits you best in this blog
 Technology changes and evolves constantly as we have already seen it happening all these days. So,
 rather than worrying too much about the technology you choose, it is better to choose a tool,
-library or platform which allows for incremental changes without lockin. Using the list in the
+library or platform which allows for incremental changes without lock-in. Using the list in the
 [**previous blog post**](/blog/graphql-deep-dive-2) might actually help in your decision-making
 process.


@@ -52,7 +52,7 @@ time.
 The best way to avoid this is to put together a boilerplate with all that you would typically want
 to use and this can become a starting point of all the services you may develop in the future. This
-would also involve things like setting up your GraphQL Gateway (incase you are using something like
+would also involve things like setting up your GraphQL Gateway (in case you are using something like
 [Federation](https://the-guild.dev/graphql/hive/federation) or Stitching) since the gateway becomes
 the single point of contact for all your requests from the clients.


@@ -184,7 +184,7 @@ SSE is HTTP/1. HTTP/1 powered servers are limited to
 meaning you can only have 6 concurrently active subscriptions (compared to HTTP/2 powered servers
 that are by default limited to 100+ active connection per domain).
-The "singe connection mode" creates a single connection to the server which is used exclusively for
+The "single connection mode" creates a single connection to the server which is used exclusively for
 streaming events and messages. Then you issue separate HTTP requests to execute operations. This
 property makes it safe for HTTP/1 environments, but also for subscription heavy apps even in HTTP/2
 environments (simply because subscriptions are typically best when really granular and this can


@@ -311,7 +311,7 @@ const client = createClient({
 Check the repo out to for
 [Getting Started](https://github.com/enisdenjo/graphql-sse#getting-started) quickly with some
-[Recepies](https://github.com/enisdenjo/graphql-sse#recipes) for vanilla usage, or with
+[Recipes](https://github.com/enisdenjo/graphql-sse#recipes) for vanilla usage, or with
 [Relay](https://relay.dev) and [Apollo Client](https://apollographql.com/docs/react). Opening
 issues, contributing with code or simply improving the documentation is always welcome!


@@ -252,7 +252,7 @@ const client = createClient({
 ## Want to Find Out More?
 Check the repo out to for [Getting Started](https://github.com/enisdenjo/graphql-ws#getting-started)
-quickly with some [Recepies](https://github.com/enisdenjo/graphql-ws#recipes) for vanilla usage, or
+quickly with some [Recipes](https://github.com/enisdenjo/graphql-ws#recipes) for vanilla usage, or
 with [Relay](https://relay.dev) and [Apollo Client](https://apollographql.com/docs/react). Opening
 issues, contributing with code or simply improving the documentation is always welcome!


@@ -16,7 +16,7 @@ That also includes updating GraphQL Yoga to the latest
 ## NestJS V9
 Just recently, NestJS, a framework for building efficient, scalable Node.js server-side
-applications, has released a new version v9. Fixes and improvements are encompasing the complete
+applications, has released a new version v9. Fixes and improvements are encompassing the complete
 framework, be sure to do an update!
 Read more about the new release on the


@@ -19,7 +19,7 @@ that the ability to scale and evolve your API is an important goal. Therefore, w
 new feature in Hive that enables developers to easily lint, verify, and enforce best practices
 across the entire GraphQL API.
-![](https://the-guild.dev/graphql/hive/_next/static/media/policy-overview.e560205a.png 'Choose a custom policy and enfore best practices')
+![](https://the-guild.dev/graphql/hive/_next/static/media/policy-overview.e560205a.png 'Choose a custom policy and enforce best practices')
 With a simple dashboard configuration, you can enforce the following:


@@ -122,7 +122,7 @@ flowchart LR
 ```
 <Callout type="info" emoji="💡">
-There are four positonal arguments of a resolver function:
+There are four positional arguments of a resolver function:
 - `parent`: the value returned by the parent resolver.
 - For root-level resolvers like `Query.movie`, `parent` is always `undefined`.
 - For other object-type resolvers like `Movie.id` and `Movie.name`, `parent` is the value returned by parent resolvers like `Query.movie`


@@ -358,7 +358,7 @@ export interface CDNToken {
 }
 /**
- * We prefix the token so we can check fast wether a token is a new one or a legacy one.
+ * We prefix the token so we can check fast whether a token is a new one or a legacy one.
 */
 const keyPrefix = 'hv2'


@@ -43,7 +43,7 @@ import { VideoEmbed } from '#components/video-embed'
 - [**Finds breaking, dangerous and safe changes**](https://graphql-inspector.com/docs/essentials/diff)
 when modifying a GraphQL API
 - [**GitHub Application**](https://graphql-inspector.com/install) +
-[GitHub Action](https://graphql-inspector.com/docs/recipies/github) _(Bitbucket integration soon)_
+[GitHub Action](https://graphql-inspector.com/docs/recipes/github) _(Bitbucket integration soon)_
 - **Command Line Interface**
 - **Completely free and open-source** — host your own GraphQL Inspector
 - [**Schema coverage**](https://graphql-inspector.com/docs/essentials/coverage) **—**see unused
@@ -87,7 +87,7 @@ Next, you configure GraphQL Inspector in `package.json` :
 ```
 > You can read more about that in
-> [“Github Application” chapter](https://graphql-inspector.com/docs/recipies/github#usage) on
+> [“Github Application” chapter](https://graphql-inspector.com/docs/recipes/github#usage) on
 > [our website](https://graphql-inspector.com).
 Now, whenever someone submits a Pull Request the GraphQL Inspector will compare schemas and fail if


@@ -112,6 +112,6 @@ contribute!
 Everything we wrote above is just the start, and we want to hear from you how we could improve the
 GraphQL community as a whole.
-We hope that our work would help you acheive that, but we mostly want to hear from you - what are
+We hope that our work would help you achieve that, but we mostly want to hear from you - what are
 the things that stop you from contributing and influencing the community? Let us know and let's
 change that together!


@@ -78,7 +78,7 @@ As you can see above, every module imports other modules; and this creates a cir
 You might ask if this is the only way to implement modules for these entities; because it looks like
 there is no point to have different modules for those schemas. Having circular dependency is the
-same situtation with having a single large module.
+same situation with having a single large module.
 import { VideoEmbed } from '#components/video-embed'


@@ -140,7 +140,7 @@ class User extends OfflineFirstWithGraphqlModel {}
 final users = await repository.get<User>();
 ```
-These can also be overriden per request but subfields don't need to be declared because they're
+These can also be overridden per request but subfields don't need to be declared because they're
 generated based on the model:
 ```dart
@@ -170,7 +170,7 @@ StreamBuilder<List<User>>(
 )
 ```
-Even if your GraphQL server doesn't utilize susbcriptions, you can still use `subscribe`. Events
+Even if your GraphQL server doesn't utilize subscriptions, you can still use `subscribe`. Events
 will be triggered every time data is inserted into the local SQLite database.
 <Callout type="warning">


@@ -24,7 +24,7 @@ the security of GraphQL and which community tools you can leverage today to secu
 The inception and evolution of GraphQL are intrinsically tied to the principles of open-source,
 ensuring it flourished not just as a query language but as a community-driven initiative progressing
 toward broader, unified API development standards. From its public
-[specifiction release](https://github.com/graphql/graphql-spec) in 2015 by Facebook to its
+[specification release](https://github.com/graphql/graphql-spec) in 2015 by Facebook to its
 transition to the [GraphQL Foundation](https://graphql.org/foundation/) in 2018—hosted by the Linux
 Foundation—its journey has been transparent and collaborative.
@@ -58,7 +58,7 @@ upcoming
 [GraphQL Fusion](https://graphql.org/conf/sessions/4a4e842d1cd0c06083f484d31225abd1/?name=GraphQL%20Fusion:%20Rethinking%20Distributed%20GraphQL)
 specification.
-And last but not least, the GraphQL community is also very community centered, wether via the
+And last but not least, the GraphQL community is also very community centered, whether via the
 [GraphQL working group](https://graphql.org/community/developers/#working-groups) or the various
 events and meetups such as the [GraphQL Conf](https://graphql.org/conf/).
@@ -98,7 +98,7 @@ Moving forward, we'll dive into how this perfect pairing of open source and cybe
 crucial, not just relevant, for GraphQL and API security. We'll highlight practical tools and
 strategies you can use today to protect your applications.
-## Open Source GraphQL Security Ressources & Tools
+## Open Source GraphQL Security Resources & Tools
 There are many GraphQL open-source tools available to help developers and businesses defend against
 possible cybersecurity threats. From defensive measures that shield sensitive data to offensive


@@ -75,7 +75,7 @@ export function graphql(source: string) {
 ```
 Every time you use the `graphql` function, all the operations in the project are imported,
-regardless of wether they are used or not. This can be a problem if you have a lot of documents
+regardless of whether they are used or not. This can be a problem if you have a lot of documents
 (query, mutation, and fragments) in your project.
 <Callout>


@@ -162,7 +162,7 @@ Clients can now consume the public API fields by pointing to the gateway, while
 remains private.
 As a additional security measure you should leverage
-[persisted documents to avoid execution of arbitary GraphQL operations](https://the-guild.dev/graphql/hive/docs/gateway/persisted-documents)
+[persisted documents to avoid execution of arbitrary GraphQL operations](https://the-guild.dev/graphql/hive/docs/gateway/persisted-documents)
 against the private schema.
 For more guidance on choosing a gateway for your project, refer to the


@@ -139,7 +139,7 @@ const pubSub = createPubSub(new EventEmitter())
 A (type-safe) PubSub implementation in 21 lines of code. We will use this for the example below.
 In my opinion the different pub sub implementations should rather be based on EventEmitter instead
-of `graphql-subscriptions`. A PubSub can but musn't be used together with GraphQL. By choosing the
+of `graphql-subscriptions`. A PubSub can but mustn't be used together with GraphQL. By choosing the
 name `graphql-subscriptions` it gives the impression that the logic is specific to GraphQL and
 reduces other possible contributions from people that need a similar event abstraction.
@@ -386,7 +386,7 @@ const PostChangedSubscription = graphql`
 const PostRenderer = ({ postId }: { postId: string }) => {
 const { props } = useQuery(PostQuery, /* variables */ { postId })
-// thats all we gotta do
+// that's all we gotta do
 useSubscription(PostChangedSubscription, /* variables */ { postId })
 return <Post {...props} />
@@ -673,7 +673,7 @@ const PostChangedSubscription = graphql`
 const PostRenderer = ({ postId }: { postId: string }) => {
 const { props } = useQuery(PostQuery, /* variables */ { postId })
-// thats all we gotta do
+// that's all we gotta do
 useSubscription(PostChangedSubscription, /* variables */ { postId })
 return <Post {...props} />


@@ -375,7 +375,7 @@ const myPlugin: Plugin = {
 console.log('before parsing')
 return function afterParse({ result }) {
 if (result instanceof Error) {
-console.log('Error occured during parsing: ', result)
+console.log('Error occurred during parsing: ', result)
 }
 }
 },
@@ -385,7 +385,7 @@ const myPlugin: Plugin = {
 return function afterValidate({ result }) {
 if (result.length) {
-console.log('Errors occured duting validation: ', result)
+console.log('Errors occurred during validation: ', result)
 }
 }
 },


@@ -266,7 +266,7 @@ config is used for these cases, we don't have to manually convert the ID to a `s
 sending it to the server.
 <Callout type="info">
-Starting from `typescript` and `typescript-operations` plugin v4, all depedent client plugins
+Starting from `typescript` and `typescript-operations` plugin v4, all dependent client plugins
 (e.g. Apollo Client, React Request, urql, etc.) must update to use the new input/output format. If
 you find issues in these client plugins, create an issue in the [community
 repo](https://github.com/dotansimha/graphql-code-generator-community).
@@ -353,7 +353,7 @@ would have to create custom mappers or manually convert the number to string to
 typecheck.
 <Callout type="info">
-The recommeded setup is suitable for the most common use cases. It does not cover some edge cases that may make writing resolvers awkward.
+The recommended setup is suitable for the most common use cases. It does not cover some edge cases that may make writing resolvers awkward.
 For example, `Int`'s output is technically `string | number | boolean` but it is typed as `number`
 by default because most resolvers would never return `string` or `boolean`.


@@ -46,11 +46,11 @@ there's no one way which is the most right of doing so, but we chose a way that
 for us and that we truly believe in when it comes to building apps. We've connected it all with
 TypeORM, GraphQL-Code-Generator, GraphQL-Modules for the following reasons:
-- The GraphQL back-end was implemented using **GraphQL-Modules** where logic was splitted into
-feature based modules. GraphQL-Modules is a library which provides you with the ability to manage
-and maintain your GraphQL schema in a scalable and reusable way. Not once nor twice I have seen
-people who struggle with that and get tangled upon their own creation, and with GraphQL-Modules
-where you have a very defined structure, this problem can be easily solved. You can read more in
+- The GraphQL back-end was implemented using **GraphQL-Modules** where logic was split into feature
+based modules. GraphQL-Modules is a library which provides you with the ability to manage and
+maintain your GraphQL schema in a scalable and reusable way. Not once nor twice I have seen people
+who struggle with that and get tangled upon their own creation, and with GraphQL-Modules where you
+have a very defined structure, this problem can be easily solved. You can read more in
 [this series of 7 blog posts about it](/blog/graphql-modules).
 - Every GraphQL/TypeScript definition was automatically generated with GraphQL-Code-Generator using
 a single command call. There's no need to maintain the same thing twice if it already exists in


@@ -56,11 +56,11 @@ that we truly believe in when it comes to building apps. We've connected it all
 [GraphQL-Modules](https://graphql-modules.com) for the following reasons:
 - The GraphQL back-end was implemented using [GraphQL-Modules](https://graphql-modules.com) where
-logic was splitted into feature based modules. GraphQL-Modules is a library which provides you
-with the ability to manage and maintain your GraphQL schema in a scalable and reusable way. Not
-once nor twice I have seen people who struggle with that and get tangled upon their own creation,
-and with GraphQL-Modules where you have a very defined structure, this problem can be easily
-solved. You can read more in [this series of 7 blog posts about it](/blog/graphql-modules).
+logic was split into feature based modules. GraphQL-Modules is a library which provides you with
+the ability to manage and maintain your GraphQL schema in a scalable and reusable way. Not once
+nor twice I have seen people who struggle with that and get tangled upon their own creation, and
+with GraphQL-Modules where you have a very defined structure, this problem can be easily solved.
+You can read more in [this series of 7 blog posts about it](/blog/graphql-modules).
 - Every GraphQL/TypeScript definition was automatically generated with
 [GraphQL-Code-Generator](https://graphql-code-generator.com) using a single command call. There's
 no need to maintain the same thing twice if it already exists in one way or another. This way you


@@ -48,7 +48,7 @@ following migration story.
 1. Some of your members were assigned by the system to the `Admin` or `Viewer` roles, but only if
 their permissions matched the role's (e.g. a member with full access was assigned to the `Admin`
 role).
-2. Members that our system could't assign to any of the system roles, don't have any role assigned.
+2. Members that our system couldn't assign to any of the system roles, don't have any role assigned.
 3. Pending invitations are still available, the invited members will be assigned to `Viewer` role
 after accepting the invitation.


@@ -61,7 +61,7 @@ This command will fetch subgraph's schema from the provided URL, replace the ori
 subgraph from the Registry with the local one, and compose a supergraph. The outcome will be saved
 in the `supergraph.graphql` file.
-The `products` subgraph will stay untoched, meaing that the gateway will route requests to its
+The `products` subgraph will stay untouched, meaning that the gateway will route requests to its
 remote endpoint.
 > The `--watch` flag will keep the process running and update the supergraph whenever the local


@@ -41,7 +41,7 @@ The idea of persisted documents is quite simple, however maintaining them can be
 especially if you have many apps with different versions that are all active at the same time.
 All the GraphQL API replicas must have access to the persisted documents, to ensure the requested
-operation hashes can be resolved to actualy GraphQL documents.
+operation hashes can be resolved to actual GraphQL documents.
 Embedding them within the GraphQL API codebase is not a good idea, as it would require deploying a
 new version of the API every time a new app version is about to be deployed.


@@ -38,7 +38,7 @@ Start by adding the Hive plugin dependency to your `Cargo.toml`:
 hive-apollo-router-plugin = "..."
 ```
-And then intergrate the plugin in your codebase:
+And then integrate the plugin in your codebase:
 ```rs
 // import the registry instance and the plugin registration function


@@ -11,7 +11,7 @@ custom workflows.
 You can now integrate directly with our platform using our Public GraphQL API. This means you can:
-- Build custom workflows on top of our data and perform actions programatically without the CLI
+- Build custom workflows on top of our data and perform actions programmatically without the CLI
 - Manage users and access tokens
 - Get usage information about schema coordinates
 - Access new capabilities as we expand the GraphQL schema over time


@@ -25,7 +25,7 @@ Router by almost 2x - all while maintaining 100% compatibility with the Federati
 Audit. This means your federated GraphQL architecture stays standards-compliant and interoperable,
 but now with Rust's unmatched speed under the hood.
-The following results benchmark a scenario runing 4 subgraphs and a GraphQL gateway with Federation
+The following results benchmark a scenario running 4 subgraphs and a GraphQL gateway with Federation
 spec, and runs a heavy query. It's being executed with a constant amount of VUs over a fixed amount
 of time.
@@ -50,7 +50,7 @@ performance under the hood with JavaScript's flexibility on top.
 <Callout type="warning">
-While the Rust Query Planner delivers a lot, it currently doesn't support a handfull of Hive Gateway
+While the Rust Query Planner delivers a lot, it currently doesn't support a handful of Hive Gateway
 features. We're hard at work to achieve full compatibility soon! Check out the
 [feature comparison](/docs/gateway/other-features/rust-query-planner#compared-to-javascript-query-planner)
 for details.


@@ -128,10 +128,10 @@ npm i -D @graphql-hive/cli@0.50.1
 ### Git Integration
 If you are running `hive` command line in a directory that has a Git repository configured (`.git`),
-the CLI willl automatically extract the values for the author and commit for the certain commands
+the CLI will automatically extract the values for the author and commit for the certain commands
 (e.g. schema publish and schema check.
-You may override these values by excplicitly passing the `--author` and `--commit` flags to the CLI.
+You may override these values by explicitly passing the `--author` and `--commit` flags to the CLI.
 If your project does not have a Git repository configured with a user name and email, you are
 required to pass the `--author` and `--commit` flags to the CLI.
@@ -285,7 +285,7 @@ hive schema:publish \
 token](/docs/management/access-tokens).
 </Callout>
-Checking a GraphQL schema is the form of checking the compatbility of an upcoming schema, compared
+Checking a GraphQL schema is the form of checking the compatibility of an upcoming schema, compared
 to the latest published version.
 This process of checking a schema needs to be done before **publishing** a new schema version. This
@@ -509,7 +509,7 @@ This command will fetch subgraph's schema from the provided URL, replace the ori
 subgraph from the Registry with the local one, and compose a supergraph. The outcome will be saved
 in the `supergraph.graphql` file.
-The `products` subgraph will stay untoched, meaing that the gateway will route requests to its
+The `products` subgraph will stay untouched, meaning that the gateway will route requests to its
 remote endpoint.
 > The `--watch` flag will keep the process running and update the supergraph whenever the local


@@ -439,7 +439,7 @@ useHive({
 })
 ```
-Furthermore, you can also allow arbitraty documents based on the incoming HTTP request.
+Furthermore, you can also allow arbitrary documents based on the incoming HTTP request.
 ```typescript filename="Hive Persisted Documents Allow Arbitrary Documents Based on Request" {8-9}
 useHive({

@@ -7,7 +7,7 @@ import { Callout } from '@theguild/components'
 # Hive Gateway Configuration Reference
 An overview of all the configuration options for the Hive Gateway, both CLI and
-[Progammatic Usage](/docs/gateway/deployment#programmatic-usage).
+[Programmatic Usage](/docs/gateway/deployment#programmatic-usage).
 ## Default CLI Config Files
@@ -198,8 +198,8 @@ export const gatewayConfig = defineConfig({
 })
 ```
-If a subgraph mathes multiple transport entries, the configurations are merged with priority for the
-most specific matcher (`subgraphName` > `*.transportKind` > `*`).
+If a subgraph matches multiple transport entries, the configurations are merged with priority for
+the most specific matcher (`subgraphName` > `*.transportKind` > `*`).
 ### `kind`
@@ -238,8 +238,8 @@ the specific documentation of each transport.
 ### `options.subscriptions`
-A special case of the `option` fields is `subcriptions`. It allows to override the transport options
-(including the kind) only for subscripions. Please see
+A special case of the `option` fields is `subscriptions`. It allows to override the transport
+options (including the kind) only for subscripions. Please see
 [Subscriptions page](/docs/gateway/subscriptions) for more details.
 ## Subgraphs


@@ -7,8 +7,8 @@ import { Callout, Tabs } from '@theguild/components'
 # Using the GraphQL API
-Interact programatically interact with your Hive Organization via the public Hive Console GraphQL
-API. You can manage users, projects, and targets, and retrieve information about your schema usage.
+Interact programmatically with your Hive Organization via the public Hive Console GraphQL API. You
+can manage users, projects, and targets, and retrieve information about your schema usage.
 The GraphQL API can be interacted with from any language that supports sending HTTP requests.
@@ -169,11 +169,11 @@ inserted).
 The GraphQL API models expected errors as part of the GraphQL schema. The type of a mutation field
 is always a object type with both a `ok` and `error` property. We recommend always selecting both
-fields within the mutation selection sets. In case expected mutation behaviour suceeds, the `ok`
-property is non-null and contains the mutation result, while the `error` peroperty is `null`. In
-case the mutation fails (e.g. due to a input validation error), the `ok` field is `null` and the
-`error` field is non-null containing an error message and often additional information on the source
-of the error.
+fields within the mutation selection sets. In case expected mutation behaviour succeeds, the `ok`
+property is non-null and contains the mutation result, while the `error` property is `null`. In case
+the mutation fails (e.g. due to a input validation error), the `ok` field is `null` and the `error`
+field is non-null containing an error message and often additional information on the source of the
+error.
 ```graphql filename="Example: Create project mutation"
 mutation CreateProjectMutation($input: CreateProjectInput!) {


@@ -25,7 +25,7 @@ query ProjectBySlug($organizationSlug: String!, $projectSlug: String!) {
 ## Retrieve a list of Projects within an Organization
-Use the `Organization.projects` field for retriving a list of projects within the organziation.
+Use the `Organization.projects` field for retrieving a list of projects within the organization.
 **Note:** This field will currently return all projects within the organization.
 ```graphql filename="Example: Retrieve a List of Projects within an Organization"


@@ -211,7 +211,7 @@ curl -X POST \
 | `400` | Errors while processing the sent JSON body. |
 | `401` | Invalid `X-Usage-API-Version` header provided. |
 | `429` | Rate limited due to exceeding usage reporting quota. |
-| `500` | An unexpected error occured. |
+| `500` | An unexpected error occurred. |
 The endpoint will return a JSON body response body for `200` and `400` status codes.


@@ -4,7 +4,7 @@ import { Callout, Tabs } from '@theguild/components'
 Hive Gateway supports Authentication and Authorization using JSON Web Tokens (JWT).
-A [JSON Web Tokens (JWT)](https://jwt.io/) is a signed token containing arbitrary informations,
+A [JSON Web Tokens (JWT)](https://jwt.io/) is a signed token containing arbitrary information,
 commonly used for authentication. By being signed by the issuer of the token, it can be verified
 that the token is valid and has not been tampered with.
@@ -37,7 +37,7 @@ gateway, and no other entity can execute requests to the subgraph on behalf of t
 ## How to use?
-Here's a mininal example for configuring the JWT plugin with a local signing key, and looking for
+Here's a minimal example for configuring the JWT plugin with a local signing key, and looking for
 the token in the `authorization` header:
 ```ts filename="gateway.config.ts"


@@ -281,7 +281,7 @@ docker run -p 4000:4000 \
 ### Additional Resolvers
-Instead maybe you need to define additional resolvers that depend on other dependencies. Similarily
+Instead maybe you need to define additional resolvers that depend on other dependencies. Similarly
 to the [Develop Plugin](#develop-plugin) approach, you can just copy the project code over and build
 another image.


@@ -33,7 +33,7 @@ separate package `@graphql-hive/gateway-runtime`.
 npm i @graphql-hive/gateway-runtime
 ```
-Improving the experince of Hive Gateway in various JavaScript environments (especially
+Improving the experience of Hive Gateway in various JavaScript environments (especially
 [serverless/on-the-edge](/docs/gateway/deployment/serverless)), the `@graphql-hive/gateway-runtime`
 comes in a slimmer version set up for optimized runtime and smaller size. This leads to a difference
 in the configuration options compared to the CLI version `@graphql-hive/gateway`.
@ -49,7 +49,7 @@ details about these differences.
## Other Environments (Not Listed)
Let's say you have an environment that is not listed here. You can still deploy your Hive Gateway
using its progammatic API. In this case, we will show here how to pass the request information from
using its programmatic API. In this case, we will show here how to pass the request information from
your environment to Hive Gateway, then get the response for your environment back.
```ts

View file

@ -89,7 +89,7 @@ const app = express()
const serveRuntime = createGatewayRuntime(/* Your configuration */)
const hiveGWRouter = express.Router()
// GraphiQL specefic CSP configuration
// GraphiQL specific CSP configuration
hiveGWRouter.use(
helmet({
contentSecurityPolicy: {

View file

@ -49,7 +49,7 @@ export default /* GraphQL */ `
<Callout>
The example above does not support streaming responses because Hive Gateway is being disposed
immediatelly after assembling a response with `ctx.waitUntil(gateway[Symbol.asyncDispose]())`.
immediately after assembling a response with `ctx.waitUntil(gateway[Symbol.asyncDispose]())`.
</Callout>
<Callout type="info">

View file

@ -7,7 +7,7 @@ serverless environment like AWS Lambda, Cloudflare Workers, or Azure Functions.
<Callout>
Please read carefully following sections, most importantly [Bundling
Problems](#bundling-problems). Serverless and Edge are very specific environment that are comming
Problems](#bundling-problems). Serverless and Edge are very specific environments that come
with very specific requirements.
</Callout>
@ -40,7 +40,7 @@ loading those modules dynamically. This means that the bundler can't know static
transport packages should be included in the bundle.
When running in a bundled environment like Serverless and Edge, you need to statically configure the
transports needed to comunicate with your upstream services. This way, the transport modules are
transports needed to communicate with your upstream services. This way, the transport modules are
statically referenced and will be included into the bundle.
```ts filename="index.ts"

View file

@ -28,7 +28,7 @@ making it easier to understand and use.
#### Context
The context object passed to plugins and hooks will always have the relevant logger instance
provided throug the `log` property. Same goes for all of the transports' contexts. Each of the
provided through the `log` property. Same goes for all of the transports' contexts. Each of the
transport contexts now has a `log` prop.
##### Plugin Setup Function
@ -171,8 +171,8 @@ Lets write a plugin that toggles the `debug` log level when a secure HTTP reques
<Callout>
Please be very carefuly with securing your logger. Changing the log level from an HTTP request can
be a security risk and should be avoided in production environments. **Use this feature with caution
Please be very careful with securing your logger. Changing the log level from an HTTP request can be
a security risk and should be avoided in production environments. **Use this feature with caution
and proper security measures.**
</Callout>
@ -321,7 +321,7 @@ export class DailyFileLogWriter implements LogWriter {
}
```
And using it as simple as pluging it into an instance of Hive Logger to the `logging` option:
And using it is as simple as plugging it into an instance of Hive Logger via the `logging` option:
```ts filename="gateway.config.ts"
import { defineConfig, JSONLogWriter, Logger } from '@graphql-hive/gateway'
@ -351,7 +351,7 @@ logs.
npm i pino pino-pretty
```
Since we're using a custom log writter, you have to install the Hive Logger package too:
Since we're using a custom log writer, you have to install the Hive Logger package too:
```sh npm2yarn
npm i @graphql-hive/logger
@ -387,7 +387,7 @@ Logger's logs.
npm i winston
```
Since we're using a custom log writter, you have to install the Hive Logger package too:
Since we're using a custom log writer, you have to install the Hive Logger package too:
```sh npm2yarn
npm i @graphql-hive/logger

View file

@ -678,7 +678,7 @@ This hook is mostly used for monitoring and tracing purposes.
| `supergraph` | The GraphQL schema of the supergraph. |
| `subgraph` | The name of the subgraph. |
| `sourceSubschema` | The schema of the subgraph. |
| `typeName` | The name of the type being planed. |
| `typeName` | The name of the type being planned. |
| `variables` | The variables provided in the client request. |
| `fragments` | The fragments provided in the client request. |
| `fieldNodes` | The field nodes of the selection set being planned. |
@ -943,7 +943,7 @@ const useMyTracer = () =>
#### Instrumentation composition
If multiple plugins have `instrumentation`, they are composed in the same order they are defined in the
plugin array (the first is outtermost call, the last is inner most call).
plugin array (the first is the outermost call, the last is the innermost call).
It is possible to customize this composition if it doesn't suit your needs (i.e. you need hooks and
instrumentation to have a different order of execution).
@ -957,7 +957,7 @@ const { instrumentation: instrumentation2, ...plugin2 } = usePlugin2()
const instrumentation = composeInstrumentation([instrumentation2, instrumentation1])
const getEnveloped = envelop({
plugin: [{ insturments }, plugin1, plugin2]
plugin: [{ instruments }, plugin1, plugin2]
})
```
@ -965,7 +965,7 @@ const getEnveloped = envelop({
Wraps the HTTP request handling. This includes all the plugins `onRequest` and `onResponse` hooks.
This instrument can be asynchronous, the wrapped funcion **can be** asynchronous. Be sure to return
This instrument can be asynchronous, the wrapped function **can be** asynchronous. Be sure to return
a `Promise` if `wrapped()` returned a `Promise`.
#### `requestParse`
@ -973,16 +973,16 @@ a `Promise` if `wrapped()` returned a `Promise`.
Wraps the parsing of the request phase to extract GraphQL params. This includes all the plugins
`onRequestParse` hooks.
This insturment can be asynchronous, the wrapped function **can be** asynchrounous. Be sure to
return a `Promise` if `wrapped()` returns a `Promise`.
This instrument can be asynchronous, the wrapped function **can be** asynchronous. Be sure to return
a `Promise` if `wrapped()` returns a `Promise`.
#### `operation`
Wraps the GraphQL operation execution pipeline. This is called for each GraphQL operation, meaning
it can be called mutliple time for the same HTTP request if batching is enabled.
it can be called multiple times for the same HTTP request if batching is enabled.
This instrument can be asynchronous, the wrapped function **can be** asynchronous. Be sure to return
a `Promise` if `wrapped()` returnd a `Promise`.
a `Promise` if `wrapped()` returned a `Promise`.
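Following the contract above, an `operation` instrument can be sketched like this (the timing logic and the logged message are illustrative, not part of the gateway API):

```typescript
// Illustrative `operation` instrument: measures how long the operation
// pipeline takes. The important part is returning a Promise only when
// `wrapped()` returned one, per the contract described above.
const timingInstrumentation = {
  operation(_payload: unknown, wrapped: () => void | Promise<void>): void | Promise<void> {
    const start = Date.now();
    const done = (): void => {
      console.log(`operation took ${Date.now() - start}ms`);
    };
    const result: unknown = wrapped();
    if (result instanceof Promise) {
      // Asynchronous pipeline: propagate the Promise so callers can await it.
      return result.then(done);
    }
    // Synchronous pipeline: stay synchronous.
    done();
  }
};
```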
#### `init`
@ -1024,10 +1024,10 @@ Note that `wrapped` is not guaranteed to return a promise.
#### `subscribe`
Wraps the subscribe phase. This includes all the plugins `onSubscribe` hooks. Note that it doesn't
wrap the entire lifetime of the subscription, but only it's intialisation.
wrap the entire lifetime of the subscription, but only its initialisation.
This instrument can be asynchronous, the wrapped function **can be** asynchronous. Be sure to
`await` or use `.then` on the result of the `wrapped` function to run code after the `subsribe`
`await` or use `.then` on the result of the `wrapped` function to run code after the `subscribe`
phase.
Note that `wrapped` is not guaranteed to return a promise.

View file

@ -25,7 +25,7 @@ The Automatic Persisted Queries plugin follows
Operations](/docs/gateway/persisted-documents).
Furthermore, a potential DDoS attacker could spam your GraphQL API with persisted operation
registrations, thus completly disable the advantages you would get from APQ and, furthermore, even
registrations, thus completely disabling the advantages you would get from APQ and, furthermore, even
decrease the performance of your GraphQL API.
</Callout>

View file

@ -59,7 +59,7 @@ Total time:0.007571%
## Configuration
The behaviour of this plugin can be configured by passing an object at the gateway level or by using
`@cacheControl` directive at schema defintion level.
`@cacheControl` directive at schema definition level.
The `@cacheControl` directive can be used to give subgraphs control over the cache behavior
for the fields and types they are defining. You can add this directive during composition.
@ -283,7 +283,7 @@ an user id within the encoded access token.
Don't forget to validate the authentication token before using it as a session key.
Allowing cached responses to be returned with unverified tokens can lead to data leaks.
Please see the [Authorization/Auhtentication](/docs/gateway/authorization-authentication) section
Please see the [Authorization/Authentication](/docs/gateway/authorization-authentication) section
for more information.
</Callout>
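For illustration, a session function in that spirit might look like the following sketch; `verifyJwt` is a hypothetical stand-in for real signature and expiry checks, and the gateway option wiring is omitted:

```typescript
// Hypothetical sketch: derive the cache session key only from a *verified*
// token, so forged tokens never produce per-user cached responses.
// `verifyJwt` stands in for real JWT signature and expiry validation.
function verifyJwt(token: string): { sub: string } | null {
  // Placeholder check; a real implementation verifies signature and expiry.
  return token.startsWith('valid:') ? { sub: token.slice('valid:'.length) } : null;
}

function sessionId(authorizationHeader: string | null): string | null {
  if (!authorizationHeader?.startsWith('Bearer ')) return null;
  const claims = verifyJwt(authorizationHeader.slice('Bearer '.length));
  // No verified claims means no session, so nothing is cached per-user.
  return claims ? claims.sub : null;
}
```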
@ -307,7 +307,7 @@ the `scope` to indicate that the cache should only be used if a session is prese
This can be useful to prevent exposure of sensitive data to unauthorized users.
<Tabs items={[
"Programatically using options",
"Programmatically using options",
<span>Declaratively using <Code>@cacheControl</Code></span>
]}>
<Tabs.Tab>
@ -370,7 +370,7 @@ If a query operation result contains multiple objects of the same or different t
is picked.
<Tabs items={[
"Programatically using options",
"Programmatically using options",
<span>Declaratively using <Code>@cacheControl</Code></span>
]}>
<Tabs.Tab>
@ -409,7 +409,7 @@ is picked.
By default, all successful operations influence the cache.
You can globaly disable caching using the `enabled` option. This can be useful for local
You can globally disable caching using the `enabled` option. This can be useful for local
development.
```ts filename="Disabling caching"
@ -421,7 +421,7 @@ defineConfig({
})
```
### Ingore a specific request
### Ignore a specific request
You can entirely disable caching (both caching and invalidation) for a specific request by using the
`enabled` option.
@ -448,10 +448,10 @@ the response from being cached, but will not prevent cache invalidation for othe
in the response.
<Tabs items={[
"Programatically using options",
"Programmatically using options",
<span>Declaratively using <Code>@cacheControl</Code></span>
]}>
<Tabs.Tab title="Programatically using options">
<Tabs.Tab title="Programmatically using options">
```ts filename="Disabling caching for a specific type"
defineConfig({
responseCaching: {

View file

@ -11,7 +11,7 @@ GraphQL request. This can be useful when you want to save resources on your serv
There is also
[Execution Cancellation](/docs/gateway/other-features/performance/execution-cancellation) that stops
the execution, but it doesn't stop ongoing HTTP requests. This seperately allows you to stop the
the execution, but it doesn't stop ongoing HTTP requests. This separately allows you to stop the
HTTP requests by hooking into [`fetch`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
## Enable Upstream Cancellation

View file

@ -41,7 +41,7 @@ NODE_EXTRA_CA_CERTS=/path/to/ca.crt hive-gateway supergraph <path-to-supergraph-
#### Configuration File
The only way to configure HTTPS programmaticaly is to use a custom agent like below;
The only way to configure HTTPS programmatically is to use a custom agent like below:
```ts
import { readFileSync } from 'fs'

View file

@ -29,8 +29,8 @@ define a list of persisted operations that are allowed to execute.
## Limit Max Tokens
Parsing a GraphQL operation document is a very expensive and compute intensitive operation that
blocks the JavaScript event loop. If an attacker sends a very complex operation document with slight
Parsing a GraphQL operation document is a very expensive and compute intensive operation that blocks
the JavaScript event loop. If an attacker sends a very complex operation document with slight
variations over and over again, they can easily degrade the performance of the GraphQL server. Because
of the variations simply having an LRU cache for parsed operation documents is not enough.

View file

@ -89,7 +89,7 @@ export const gatewayConfig = defineConfig({
### Enabling Arbitrary Documents
After enabling persisted documents on your Hive Gateway, any arbitary GraphQL documents that don't
After enabling persisted documents on your Hive Gateway, any arbitrary GraphQL documents that don't
contain a `documentId` will be rejected. If you still want to allow executing arbitrary documents,
you can set `allowArbitraryDocuments` to `true` in the configuration.
@ -158,7 +158,7 @@ send the full operation document.
If you now send a normal GraphQL operation that is not within the store, it will be rejected.
```bash filename="Arbitary GraphQL operation"
```bash filename="Arbitrary GraphQL operation"
curl -X POST -H 'Content-Type: application/json' http://localhost:4000/graphql \
-d '{"query": "{__typename}"}'

View file

@ -569,7 +569,7 @@ subscription {
}
```
Hive Gateway will inteligently resolve all fields on subscription events and deliver you the
Hive Gateway will intelligently resolve all fields on subscription events and deliver you the
complete result.
<Tabs items={["SSE", "WebSocket"]}>
@ -888,7 +888,7 @@ export const gatewayConfig = defineConfig({
We're now ready to subscribe to the `newProduct` subscription field and publish events to the
`new_product` topic. The publishing of events can happen from **anywhere**, it doesn't have to be
from within Hive Gateway or any perticular subgraph, you can, for example, implement a separate
from within Hive Gateway or any particular subgraph, you can, for example, implement a separate
service that is only responsible for emitting subscription events.
You can subscribe to the `newProduct` subscription from a client using any of the
@ -961,7 +961,7 @@ export interface PubSub<M extends TopicDataMap = TopicDataMap> {
publish<Topic extends keyof M>(topic: Topic, data: M[Topic]): MaybePromise<void>
/**
* A distinct list of all topics that are currently subscribed to.
* Can be a promise to accomodate distributed systems where subscribers exist on other
* Can be a promise to accommodate distributed systems where subscribers exist on other
* locations and we need to know about all of them.
*/
subscribedTopics(): MaybePromise<Iterable<keyof M>>

View file

@ -29,7 +29,7 @@ import { Logger } from '@graphql-hive/logger'
const log = new Logger()
log.debug('I wont be logged by default')
log.debug("I won't be logged by default")
log.info({ some: 'attributes' }, 'Hello %s!', 'world')
@ -224,12 +224,12 @@ import { Logger } from '@graphql-hive/logger'
const log = new Logger({ level: 'debug' })
log.trace(
// you can suply "lazy" attributes which wont be evaluated unless the log level allows logging
// you can supply "lazy" attributes which won't be evaluated unless the log level allows logging
() => ({
wont: 'be evaluated',
some: expensiveOperation()
}),
'Wont be logged and attributes wont be evaluated'
"Won't be logged and attributes won't be evaluated"
)
log.debug('Hello world!')
@ -315,13 +315,13 @@ const log = new Logger({
}
})
log.debug('isDebug is false, so this wont be logged')
log.debug("isDebug is false, so this won't be logged")
log.info('Hello world!')
const child = log.child('[scoped] ')
child.debug('Child loggers inherit the parent log level function, so this wont be logged either')
child.debug("Child loggers inherit the parent log level function, so this won't be logged either")
// enable debug mode
isDebug = true
@ -664,7 +664,7 @@ class HTTPLogWriter implements LogWriter {
}
const log = new Logger({
// send logs both to the HTTP loggging service and output them to the console
// send logs both to the HTTP logging service and output them to the console
writers: [new HTTPLogWriter(), new ConsoleLogWriter()]
})
@ -735,7 +735,7 @@ class HTTPLogWriter implements LogWriter {
{
await using log = new Logger({
// send logs both to the HTTP loggging service and output them to the console
// send logs both to the HTTP logging service and output them to the console
writers: [new HTTPLogWriter(), new ConsoleLogWriter()]
})

View file

@ -333,7 +333,7 @@ the log level to "warn".
#### Logging in JSON Format
Previously, the Hive Gateway used two different environment variables to trigger loggin in JSON
Previously, the Hive Gateway used two different environment variables to trigger logging in JSON
format:
- `LOG_FORMAT=json`
@ -553,7 +553,7 @@ logs.
npm i pino pino-pretty
```
Since we're using a custom log writter, you have to install the Hive Logger package too:
Since we're using a custom log writer, you have to install the Hive Logger package too:
```sh npm2yarn
npm i @graphql-hive/logger
@ -591,7 +591,7 @@ Logger's logs.
npm i winston
```
Since we're using a custom log writter, you have to install the Hive Logger package too:
Since we're using a custom log writer, you have to install the Hive Logger package too:
```sh npm2yarn
npm i @graphql-hive/logger
@ -621,7 +621,7 @@ export const gatewayConfig = defineConfig({
## OpenTelemetry
OpenTelemetry integration has been reworked to offer better traces, custom attributes and spans,
and overall compatiblity with standard OTEL API.
and overall compatibility with standard OTEL API.
For these features to be possible, we had to break the configuration API of the old plugin and
release a brand new one, `@graphql-hive/plugin-opentelemetry`. Everyone that has used the plugin
@ -658,7 +658,7 @@ hive-gateway supergraph supergraph.graphql \
### SDK Setup
The OpenTelemetry SDK setup used to be automatically done by the plugin itself; it is no longer the
case. You have the choice to either setup it yourself using official `@opentelemetry/*` pacakges
case. You have the choice to either setup it yourself using official `@opentelemetry/*` packages
(like the official Node SDK `@opentelemetry/sdk-node`), or to use our cross-platform setup helper
(recommended).
@ -698,8 +698,8 @@ Due to internal changes, the information available at span filtering time has be
include (depending on the span) the GraphQL `context`, the HTTP `request` and the Upstream
`executionRequest`.
Please refere to [Request Spans documentation](/docs/gateway/monitoring-tracing#request-spans) for
details of what is availbe for each span filter.
Please refer to [Request Spans documentation](/docs/gateway/monitoring-tracing#request-spans) for
details of what is available for each span filter.
### Span parenting

View file

@ -256,7 +256,7 @@ Upgrade your docker image to at least
[`router2.0.0-plugin2.1.0`](https://github.com/graphql-hive/console/pkgs/container/apollo-router/370721851?tag=router2.0.0-plugin2.1.0)
or update the plugin to
[`hive-apollo-router-plugin@2.1.0`](https://github.com/graphql-hive/console/releases/tag/hive-apollo-router-plugin%402.1.0)
if you are compling the router from source.
if you are compiling the router from source.
Adjust your `router.yml` file accordingly.
@ -269,7 +269,7 @@ plugins:
target: 'my-org/my-project/my-target'
```
Altenatively, the target can also be specified via the `HIVE_TARGET_ID` environment variable.
Alternatively, the target can also be specified via the `HIVE_TARGET_ID` environment variable.
**Further Reading:**

View file

@ -58,7 +58,7 @@ will be used for authenticating the supergraph polling from the CDN.
curl -fsSL https://graphql-hive.com/apollo-router-download.sh | bash
```
To download a specfic version of the router, use the `-v` or `--version` flag:
To download a specific version of the router, use the `-v` or `--version` flag:
```bash
curl -fsSL https://graphql-hive.com/apollo-router-download.sh | bash -s -- --version router1.57.1-plugin1.0.0

View file

@ -72,7 +72,7 @@ main().catch(err => {
### Usage Reporting
Based on the server runtime you choosed, you can enable the usage reporting activate the Hive plugin
Based on the server runtime you chose, you can enable usage reporting by activating the Hive plugin
for the server you are running.
## Additional Resources

View file

@ -102,7 +102,7 @@ Within the `expression`, you have access to the following context:
override_labels:
# This label will be active for 5% of the traffic
use-fulfillment-service:
expession: 'random_float(0.0, 100.0) < 5.0'
expression: 'random_float(0.0, 100.0) < 5.0'
activate-beta-feature:
expression: '.request.headers."x-user-group" == "beta"'

View file

@ -22,7 +22,7 @@ Making better decisions about:
- Which operations are being used by what app client version (mobile app, web app, etc.)
- Which team or developer is responsible for a specific operation and needs to be contacted for
Schema change decissions
Schema change decisions
```mermaid
flowchart LR
@ -825,10 +825,10 @@ function fetchQuery(operation, variables) {
</Tabs>
## Continous Deployment (CD) Integration
## Continuous Deployment (CD) Integration
We recommend integrating the app deployment creation and publishing into your CD pipeline for
automating the creationg and publishing of app deployments.
automating the creation and publishing of app deployments.
Usually the following steps are performed.

View file

@ -312,7 +312,7 @@ Hive will expect your HTTP service to respond with `200` status code and the fol
- `content-type: application/json`
The reponse payload should match the following type:
The response payload should match the following type:
```typescript
type CompositionResult = CompositionSuccess | CompositionFailure

View file

@ -1202,7 +1202,7 @@ to the registry.
Usually, you would run these checks in your subgraphs CI pipeline, to ensure that your subgraph
schema integrates flawlessly with the other subgraphs in the federation project, where schema
publishes are made within the Continious Deployment (CD) pipeline to actually publish the latest
publishes are made within the Continuous Deployment (CD) pipeline to actually publish the latest
subgraph version to the schema registry.
### Next Steps

View file

@ -199,7 +199,7 @@ location < Header: https://6d5bc18cd8d13babe7ed321adba3d8ae.r2.cloudflarestorage
Found.
```
In case the request was successfull (correct authorization header was provided and the artifact
In case the request was successful (correct authorization header was provided and the artifact
exists), the CDN will respond with status code `302`. You can now access the artifact via the
provided URL in the `location` header. The link is valid for 60 seconds.

View file

@ -157,7 +157,7 @@ For additional reading:
### Check a schema
Checking a GraphQL schema is the form of checking the compatbility of an upcoming schema, compared
Checking a GraphQL schema is the process of checking the compatibility of an upcoming schema, compared
to the latest published version.
This process of checking a schema needs to be done before **publishing** a new schema version. This

View file

@ -10,7 +10,7 @@ Hive provides two predefined member roles for convenience. Alternatively, organi
members with specific permissions can create and/or assign custom roles tailored to your team's
workflows.
When assinging a role to an user, the permissionsgranted by that role can be restricted to a set of
When assigning a role to a user, the permissions granted by that role can be restricted to a set of
resources (projects, targets, or services). For example, this allows you to do the following.
- Full or partial access to projects for a group of users
@ -227,7 +227,7 @@ import assignRole02Image from '../../../../public/docs/pages/management/members-
className="mt-8 max-w-lg rounded-lg drop-shadow-md"
/>
Confirm the selecton by via the `Assign role to user` button.
Confirm the selection via the `Assign role to user` button.
#### Restrict Resources Access
@ -237,7 +237,7 @@ project, or only approve failed schema checks of a set of services.
When assigning a role to an organization member, you can specify on which resources the permissions
granted by the member role should apply.
The grants follow the hierachical order of resources within Hive. Permissions on the organization
The grants follow the hierarchical order of resources within Hive. Permissions on the organization
level are always granted, these can only be restricted by the permissions defined in the member
role.

View file

@ -115,8 +115,8 @@ subscription at any time.
If you wish to upgrade your plan, you can choose between the **Pro** and **Enterprise** plans.
For the **Pro** plan, you can use a credit-card and pay monthly for the resevered quota that works
for your needs. You can modify your plan and resevered quota any time during the month.
For the **Pro** plan, you can use a credit-card and pay monthly for the reserved quota that works
for your needs. You can modify your plan and reserved quota any time during the month.
<ContactTextLink>
For the **Enterprise** plan, please reach out to us, and we'll be happy to help you with your

View file

@ -83,7 +83,7 @@ Settings tab of your project:
<Callout type="warning">
Changing the slug of your project will also change the URL, and will invalidate any existing links
to your project as well as CLI commands using the `--target` parameter. If you want more resilence
to your project as well as CLI commands using the `--target` parameter. If you want more resilience
we recommend using a UUID instead of a target slug for the `--target` parameter.
</Callout>

View file

@ -310,7 +310,7 @@ And now you'd have to fill in a couple of inputs to connect your OpenID Azure AD
className="mt-8 max-w-xl rounded-lg drop-shadow-md"
/>
Now, reate a new client secret for your app by navigating as the screenshot illustrates then on the
Now, create a new client secret for your app by navigating as the screenshot illustrates, then on the
right panel add a description for your secret and configure your expiration configuration or leave
it as the default recommended value which is `6 months`, then click **Add**:

View file

@ -176,7 +176,7 @@ to set some environment variables.
You can find all the available versions on the [GitHub Releases page prefixed with `hive@`](https://github.com/graphql-hive/console/releases).
We recommend sticking to a specific version to avoid breaking changes. The `latest` version
correspons to the latest stable release.
corresponds to the latest stable release.
```sh
export DOCKER_TAG=":9.4.1"
@ -290,7 +290,7 @@ type Query {
#### Setting up a local project
You also need to have a git repository initilaized for your project, because Hive automatically uses
You also need to have a git repository initialized for your project, because Hive automatically uses
the last commit hash to determine who the author is and ties the schema with the commit hash. So
we're going to do that.

View file

@ -3,7 +3,7 @@ import { Callout } from '@theguild/components'
# OIDC Login
By default Hive allows you to log in using an email and password. However, you can also enable OIDC
(often refered to as social login). This allows you to login using a third party provider such as
(often referred to as social login). This allows you to log in using a third-party provider such as
GitHub and Google.
<Callout>

View file

@ -41,7 +41,7 @@ The following information is included in usage reporting:
1. **Operation Name**: the name of the operation, if it exists.
2. **Operation Type**: the type of the operation (`query`, `mutation`, or `subscription`).
3. **Coordinated**: a simple array-based strcuture that represents what fields, arguments, and types
3. **Coordinates**: a simple array-based structure that represents what fields, arguments, and types
were used in the operation (for example: `Query.user`, `Query.user.id`, `User`).
4. **Client Identifier**: the identifier of the client that sent the operation. This is useful to
understand the distribution of your consumers.
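As an illustrative sketch, one reported operation could be modeled as below; the field names are hypothetical and do not reflect the actual wire format:

```typescript
// Hypothetical shape for one reported operation, mirroring the four items
// listed above. Field names are illustrative, not the actual wire format.
interface CollectedOperation {
  operationName?: string; // 1. operation name, if it exists
  operationType: 'query' | 'mutation' | 'subscription'; // 2. operation type
  coordinates: string[]; // 3. schema coordinates used by the operation
  clientIdentifier?: string; // 4. identifier of the sending client
}

const example: CollectedOperation = {
  operationName: 'GetUser',
  operationType: 'query',
  coordinates: ['Query.user', 'Query.user.id', 'User'],
  clientIdentifier: 'web-app@1.2.3'
};
```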

View file

@ -44,7 +44,7 @@ flowchart LR
## Familiar Interface and Transition
For those fimiliar with using Rover CLI, transitioning to Hive is remarkably straightforward. The
For those familiar with using Rover CLI, transitioning to Hive is remarkably straightforward. The
[Hive CLI](/docs/api-reference/cli) has a **similar interface, minimizing the learning curve for
existing users**. This aspect simplifies the migration process from Apollo GraphOS to Hive, allowing
developers to quickly adapt and continue their work without interruption.