The GraphQL API can be accessed by appending /support-graphiql after the first set of alphanumeric characters in your solution URL.
Reference
GraphQL is a standard for communicating between a client and a server via HTTP, with goals similar to REST. Unlike REST, however, GraphQL exposes a rich language clients can use to describe the data they’re fetching. The structure of the data exposed by GraphQL is captured in a schema. This reference documents Shibumi’s GraphQL schema; it assumes readers are already familiar with GraphQL concepts in general. If you are not yet familiar with GraphQL, the official introduction does a great job of explaining it: Introduction to GraphQL.
Authenticating with GraphQL
Every API call in Shibumi requires an authenticated user. There are two ways of authenticating, appropriate for different situations.
Interactive Session
GraphQL calls can be made using the same session token the Shibumi UI uses to authenticate its own requests. The Explorer, in particular, uses this method when making API calls. This method is convenient when exploring the API, or when testing calls interactively for use in scripts. It should not be used to authenticate production calls.
Programmatic Authentication
In production, scripts should make use of our programmatic authentication flow. This involves a few steps:
- Request a “client id” and “client secret” from your Shibumi contact. These should be stored in your script.
- Create a user in your Shibumi enterprise to represent your script. Scripts should typically have their own Shibumi user, rather than using credentials belonging to a human. As a separate user, a script can be given access to a restricted set of instances, and its access won't change if an employee leaves or changes roles.
- In your script, issue a POST request to the following endpoint (or the appropriate endpoint for your environment): https://app.shibumi.com/api/oauth2/token?grant_type=password. Include the following fields in the POST body: client_id, client_secret, username, and password. The response will look like the following: { "accessToken": "some_token" }
- When making GraphQL calls, set the Authorization header to Bearer some_token, where "some_token" is the token returned from the authentication call.
Access tokens are valid for one hour.
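The token request described above can be sketched in Node.js as follows. The endpoint URL and the four body fields come from the steps above; the helper name, the form-encoded body (a common convention for OAuth2 password grants, not confirmed by this documentation), and the use of Node's built-in fetch are illustrative assumptions.

```javascript
// Build the token request from the authentication steps above.
// The endpoint and field names (client_id, client_secret, username,
// password) are documented; the form-encoded body is an assumption.
function buildTokenRequest(clientId, clientSecret, username, password) {
  return {
    url: "https://app.shibumi.com/api/oauth2/token?grant_type=password",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        client_id: clientId,
        client_secret: clientSecret,
        username: username,
        password: password,
      }).toString(),
    },
  };
}

// Usage (Node 18+ for the built-in fetch):
//   const { url, options } = buildTokenRequest(id, secret, user, pass);
//   const { accessToken } = await (await fetch(url, options)).json();
// Then send "Authorization: Bearer <accessToken>" on GraphQL calls,
// re-authenticating when the one-hour token lifetime expires.
```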
Communicating with GraphQL
For testing queries and getting familiar with the API, Shibumi provides an interactive GraphQL Explorer. For programmatic queries, Shibumi exposes an endpoint that accepts either GET or POST requests. Mutations should use the POST verb, while simple queries can use either, depending on the query size or whether they require variables. Mutations execute in a single transaction; if any part of a mutation request fails, the entire request is rolled back.
The Shibumi GraphQL API has a single endpoint, which includes the enterprise ID:
https://{environment}.shibumi.com/api/4.0/enterprise/{enterprise-id}/GraphQL/graphQL
Example Calls
GET https://{environment}.shibumi.com/api/4.0/enterprise/{enterprise-id}/GraphQL/graphQL?query={app(apiName:%22App_1__app%22){workItem(type:%22Workstream__t%22%20id:%22101%22){name}}}
Headers:
Authorization: Bearer {auth token}
POST https://{environment}.shibumi.com/api/4.0/enterprise/{enterprise-id}/GraphQL/graphQL
Headers:
Authorization: Bearer {auth token}
Content-Type: application/json
Body:
{
  "query": "query($app: String!, $ID: ID!, $type: String!){ app(apiName: $app){ workItem(type: $type id: $ID){ name }}}",
  "variables": {
    "app": "App_1__app",
    "type": "Workstream__t",
    "ID": "101"
  }
}
JavaScript Example
var request = require("request");

var options = {
  method: "POST",
  url: "https://{environment}.shibumi.com/api/4.0/enterprise/{enterprise-id}/GraphQL/graphQL",
  headers: {
    Accept: "*/*",
    Authorization: "Bearer {token}",
    "Content-Type": "application/json"
  },
  // json: true tells the request library to serialize the body object
  // as JSON and parse the JSON response.
  json: true,
  body: { query: 'query { app(apiName: "App_1__app"){ workItem(type: "Workstream__t" id: "101") { name } } }' }
};

request(options, function (error, response, body) {
  if (error) throw new Error(error);
  console.log(body);
});
Pagination
Many queries in Shibumi operate on potentially very large datasets. It is not unusual, for example, for an unfiltered descendants list to return tens or even hundreds of thousands of items. In an effort to protect both our server and the client process from failing due to memory limits, many queries in Shibumi are paged. With paging, the client fetches a few rows at a time (typically up to 100). If it requires more data, it can issue a second request to fetch the next 100 rows, and so on, until it has retrieved all the data necessary.
In the Shibumi API, a paged query is called a “Connection”. Paged fields typically have their type listed as “Connection to XYZ” in the documentation. Connection fields accept a few standard arguments, and have a standard response structure.
Response Structure
A paginated response has the following fields:
nodes ([<paginated data type>]!)
This represents the current page of data; each item in the list is a single item on the current page.
pageInfo (PageInfo!)
This holds information about the current page. In particular, if there is a next page, it contains an endCursor: a string identifying the next page to fetch.
Standard Arguments
Each connection field supports these arguments:
Argument | Type | Description
---|---|---
first | Int! | The number of items to fetch in a single page.
after | String | Identifies which page to fetch.
Each connection field requires a first argument, which sets the page size. Each connection field has its own maximum limit on the value of the first parameter; the limit is typically 100, but is sometimes higher. Making the parameter required forces queries to be explicit, and allows Shibumi to raise the limit in the future without risk to current users.

The after argument is optional. It should not be set when fetching the first page of data. To fetch subsequent pages, re-issue an identical query, but pass the endCursor from the prior response via the after argument. This fetches the following page of data.
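The fetch-next-page cycle just described can be written as a generic loop. Here fetchPage is a caller-supplied stand-in for whatever function actually issues the GraphQL request; it is assumed to resolve to the connection's { nodes, pageInfo } structure documented above.

```javascript
// Collect every node from a paginated connection by re-issuing the same
// query, passing the prior response's endCursor as the `after` argument.
// `fetchPage` is a caller-supplied async function (a stand-in for the
// real GraphQL call) that takes { first, after } and resolves to
// { nodes, pageInfo: { hasNextPage, endCursor } }.
async function fetchAllPages(fetchPage, pageSize) {
  const all = [];
  let after = undefined; // not set for the first page
  while (true) {
    const page = await fetchPage({ first: pageSize, after });
    all.push(...page.nodes);
    if (!page.pageInfo.hasNextPage) break;
    after = page.pageInfo.endCursor; // cursor identifying the next page
  }
  return all;
}
```

Note that cursors are meant to be used and discarded within a single paging loop like this one, not stored long-term.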
Example
For example, imagine looking for logins for the last month. For this example, we’ll use a page size of 2 to keep the data size reasonable.
First, we issue the query for the first page:
{
  logins(earliest: "2019-01-01T00:00:00Z",
         latest: "2019-02-01T00:00:00Z",
         first: 2) {
    nodes {
      username
      timestamp
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
This yields the following result:
{
  "data": {
    "logins": {
      "nodes": [
        {
          "timestamp": "2019-01-31T20:32:02.515Z",
          "username": "[email protected]"
        },
        {
          "timestamp": "2019-01-31T20:31:01.326Z",
          "username": "[email protected]"
        }
      ],
      "pageInfo": {
        "hasNextPage": true,
        "endCursor": "XYZXYZXYZ"
      }
    }
  }
}
Here, we have the first two items in our result set. The pageInfo field shows that there is a following page, and gives us the cursor to fetch it. Now, we issue the same query, except with the after argument set to the endCursor:
{
  logins(earliest: "2019-01-01T00:00:00Z",
         latest: "2019-02-01T00:00:00Z",
         first: 2,
         after: "XYZXYZXYZ") {
    nodes {
      username
      timestamp
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
This yields the following result:
{
  "data": {
    "logins": {
      "nodes": [
        {
          "timestamp": "2019-01-31T17:57:34.382",
          "username": "[email protected]"
        },
        {
          "timestamp": "2019-01-31T14:48:20.939",
          "username": "[email protected]"
        }
      ],
      "pageInfo": {
        "hasNextPage": true,
        "endCursor": "ABCABCABC"
      }
    }
  }
}
This result contains the second page of data, as well as the cursor we need to fetch the third page.
Guarantees
In a live system like Shibumi, paging data can be tricky. Users can add, remove, or modify data in a way that affects page boundaries. Shibumi makes the following guarantee regarding its cursors:
When paging through a full result set, if an item has not been modified since paging started, it will appear exactly once in the results.
Shibumi does not make any guarantees about the format of its cursors; they can change at any time. Shibumi also makes no guarantees about how long its cursors are valid for. Cursors are typically valid for a long time, but the intention is that they will be used and quickly discarded, not stored long-term.
GraphQL Resource Limitations
GraphQL poses a load challenge for both servers and clients. In a typical client-server model, requests and responses have a (more or less) fixed maximum size. This makes handling load relatively easy. Approaches like rate limiting of requests can work well, because each request generates a known amount of load.
With GraphQL, the same does not hold true: a single query can fetch arbitrary data volumes of arbitrary complexity. This makes the query language very powerful, but also poses a challenge: how does the system respond gracefully to load when there’s no upper limit on a request size?
In the Shibumi API, we deal with this by effectively imposing a maximum size on each query. We use a few techniques to achieve this.
Pagination
First, any query that can potentially return an unbounded number of results is paginated. Each paginated query has a limit on the maximum page size that can be requested. This is typically 100 items, although sometimes it is higher. Pagination protects both the server and the client, since either is susceptible to memory issues caused by a massive result set. Paging results helps to alleviate those memory issues.
Complexity limits
Pagination helps manage load, but there are still cases where queries can become large. For example:
- A query that requests multiple paged result sets.
- A query that requests a nested paged result set.
To handle these cases, we’ve introduced an overall complexity limit. The best way to explain this is with an example.
Imagine a Program template, with children of type Work Stream. Work Stream items in turn have children of type Initiative. Imagine this query:
{
  invitations(first: 10) {
    nodes {
      invitedAt
    }
  }
  workItems {
    Program__t(id: 1) {
      name
      descendants {
        Work_Stream__t(first: 10) {
          nodes {
            name
            descendants {
              Initiative__t(first: 10) {
                nodes {
                  name
                }
              }
            }
          }
        }
      }
    }
  }
}
This query fetches data about invitations, as well as nested descendant information for the top-level program. Here is how the complexity score is computed:
{                                        # Complexity: 10 + 111 = 121
  invitations(first: 10) {               # Complexity: 10
    nodes {
      invitedAt
    }
  }
  workItems {
    Program__t(id: 1) {                  # Complexity: 1 + 110 = 111
      name
      descendants {
        Work_Stream__t(first: 10) {      # Complexity: 10 + (10 * 10) = 110
          nodes {
            name
            descendants {
              Initiative__t(first: 10) { # Complexity: 10
                nodes {
                  name
                }
              }
            }
          }
        }
      }
    }
  }
}
The overall complexity of the query is 121. To compute the complexity:
- Assign a “base complexity score” to each field that contributes to the complexity. For most fields this is 1; for paginated fields it is equal to the page size.
- For each field, compute its final complexity. This is equal to base complexity score + (base complexity score * sum(child complexity)).
- Sum the top-level complexity scores to arrive at the final complexity for the query.
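The rules above can be expressed as a small recursive function. The { base, children } node shape below is purely an illustrative representation of the annotated query's contributing fields, not an actual API structure.

```javascript
// Compute a field's complexity using the rules above:
//   final = base + base * sum(children's final scores)
// where base is 1 for plain fields and the page size for paginated ones.
// A node here is { base, children } -- an illustrative shape only.
function complexity(node) {
  const childSum = (node.children || [])
    .map(complexity)
    .reduce((a, b) => a + b, 0);
  return node.base + node.base * childSum;
}

// The annotated query above, as a tree of contributing fields:
const initiative = { base: 10, children: [] };           // 10
const workStream = { base: 10, children: [initiative] }; // 10 + 10*10 = 110
const program = { base: 1, children: [workStream] };     // 1 + 1*110 = 111
const invitations = { base: 10, children: [] };          // 10
const total = complexity(invitations) + complexity(program); // 10 + 111 = 121
```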
Shibumi currently limits the total complexity to 1100. This may be increased in the future; it will never be lowered. Any query found to be too complex will fail with an appropriate error message.