Download OpenAPI specification:
APIs to manage your Spark jobs and clusters
The combined size of an uploaded set of text files, binary files, or secrets cannot exceed 10MB and each individual file or secret cannot exceed 1MB.
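These limits can be enforced client-side before submitting an upload. A minimal sketch (the 10MB and 1MB figures come from the limit stated above; the function name is illustrative):

```python
# Client-side check of the documented upload limits:
# each individual file/secret <= 1MB, combined upload <= 10MB.
MAX_ITEM_BYTES = 1 * 1024 * 1024
MAX_TOTAL_BYTES = 10 * 1024 * 1024

def check_upload_sizes(items: dict) -> None:
    """items maps a file/secret name to its raw bytes."""
    total = 0
    for name, data in items.items():
        if len(data) > MAX_ITEM_BYTES:
            raise ValueError(f"{name} exceeds the 1MB per-item limit")
        total += len(data)
    if total > MAX_TOTAL_BYTES:
        raise ValueError("combined upload exceeds the 10MB limit")
```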
Start a new Spark Connect driver
applicationName | string The application name. If not provided, a name will be generated. |
sparkProperties | object Any Spark configuration/properties to set |
jars | Array of strings Any jars to pass to the application |
pythonFiles | Array of strings Any Python files to pass to the application |
files | Array of strings Any files to pass to the application |
archives | Array of strings Any archives to pass to the application |
environmentVariables | object Any environment variables to set |
resourcePool | string Optional - the resource pool to use (you must have permission to use it) |
secretUploads | Array of strings Optional - secret uploads. Secrets will be set as environment variables in the Spark driver and executors. |
fileUploads | Array of objects Optional - file uploads (read only) |
inlineFileUploads | Array of objects Optional - inline file uploads. See Uploads for more details and limits. (read only) |
options | Array of strings Items Value: "EncryptCommunication" |
{- "applicationName": "string",
- "sparkProperties": {
- "property1": "string",
- "property2": "string"
}, - "jars": [
- "string"
], - "pythonFiles": [
- "string"
], - "files": [
- "string"
], - "archives": [
- "string"
], - "environmentVariables": {
- "property1": "string",
- "property2": "string"
}, - "resourcePool": "string",
- "secretUploads": [
- "string"
], - "fileUploads": [
- {
- "uploadId": "string",
- "mountPath": "string"
}
], - "inlineFileUploads": [
- {
- "comment": "string",
- "fileName": "string",
- "uploadType": "Text",
- "data": "string",
- "mountPath": "string"
}
], - "options": [
- "EncryptCommunication"
]
}
{- "sparkId": "string",
- "serverSparkVersion": "string"
}
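A request body for this endpoint can be assembled as plain JSON following the sample above. A hedged sketch in Python — only the body is built here, since the endpoint URL is deployment-specific and not shown in this reference; all values are illustrative:

```python
import json

# Build a minimal Spark Connect driver request. All fields are optional;
# include only what you need. Field names follow the request sample above.
payload = {
    "applicationName": "my-connect-session",          # illustrative name
    "sparkProperties": {"spark.executor.instances": "2"},
    "environmentVariables": {"LOG_LEVEL": "INFO"},
    "options": ["EncryptCommunication"],
}
body = json.dumps(payload)
```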
Submit and run a batch job
applicationResource required | string The application resource to run - must be on an accessible object store |
mainClass | string The main class of the batch job |
cronSchedule | string Optional CRON schedule. If provided, runs the job on the given schedule. See Wikipedia's CRON article for details on CRON schedules. |
ttlSecondsAfterFinished | integer <int32> Optional. Normally, the Spark driver remains after the job completes; if set, the driver is removed this many seconds after the job finishes. |
applicationArguments | Array of strings Any application arguments |
applicationName | string The application name. If not provided, a name will be generated. |
sparkProperties | object Any Spark configuration/properties to set |
jars | Array of strings Any jars to pass to the application |
pythonFiles | Array of strings Any Python files to pass to the application |
files | Array of strings Any files to pass to the application |
archives | Array of strings Any archives to pass to the application |
environmentVariables | object Any environment variables to set |
resourcePool | string Optional - the resource pool to use (you must have permission to use it) |
secretUploads | Array of strings Optional - secret uploads. Secrets will be set as environment variables in the Spark driver and executors. |
fileUploads | Array of objects Optional - file uploads (read only) |
inlineFileUploads | Array of objects Optional - inline file uploads. See Uploads for more details and limits. (read only) |
options | Array of strings Items Value: "EncryptCommunication" |
{- "applicationResource": "string",
- "mainClass": "string",
- "cronSchedule": "string",
- "ttlSecondsAfterFinished": 0,
- "applicationArguments": [
- "string"
], - "applicationName": "string",
- "sparkProperties": {
- "property1": "string",
- "property2": "string"
}, - "jars": [
- "string"
], - "pythonFiles": [
- "string"
], - "files": [
- "string"
], - "archives": [
- "string"
], - "environmentVariables": {
- "property1": "string",
- "property2": "string"
}, - "resourcePool": "string",
- "secretUploads": [
- "string"
], - "fileUploads": [
- {
- "uploadId": "string",
- "mountPath": "string"
}
], - "inlineFileUploads": [
- {
- "comment": "string",
- "fileName": "string",
- "uploadType": "Text",
- "data": "string",
- "mountPath": "string"
}
], - "options": [
- "EncryptCommunication"
]
}
{- "sparkId": "string",
- "serverSparkVersion": "string"
}
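A scheduled batch submission body can be built the same way. A hedged sketch — the object-store path, class name, and schedule are illustrative, and only the JSON body is constructed here:

```python
import json

# Batch job request matching the sample above: run an application from an
# object store every night at 02:00 and remove the driver an hour after it
# finishes (via ttlSecondsAfterFinished).
payload = {
    "applicationResource": "s3a://my-bucket/jobs/etl.jar",  # illustrative
    "mainClass": "com.example.EtlJob",                      # illustrative
    "applicationArguments": ["--date", "2024-01-01"],
    "cronSchedule": "0 2 * * *",       # standard 5-field CRON expression
    "ttlSecondsAfterFinished": 3600,   # driver removed 1h after completion
}
body = json.dumps(payload)
```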
Get the log output of a batch job or a connect driver
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Create pre-signed URLs for the given bucket, key and credentials
accessKey required | string The proxy AccessKey provided by your administrator |
secretKey required | string The proxy SecretKey provided by your administrator |
region required | string The S3 region of the bucket |
bucket required | string The bucket for creating the pre-signed URLs |
key required | string The key for creating the pre-signed URLs |
{- "accessKey": "string",
- "secretKey": "string",
- "region": "string",
- "bucket": "string",
- "key": "string"
}
{- "presignedUrls": {
- "property1": "string",
- "property2": "string"
}
}
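The keys of `presignedUrls` are HTTP verbs (per the schema later in this document), so a client picks the URL matching the operation it wants to perform. A small sketch with an illustrative response:

```python
# Pick the pre-signed URL for a given operation. The presignedUrls map is
# keyed by HTTP verb (GET, PUT, POST, DELETE); values here are illustrative.
response = {
    "presignedUrls": {
        "GET": "https://my-bucket.example.com/key?signed=1",
        "PUT": "https://my-bucket.example.com/key?signed=2",
    }
}

def url_for(resp: dict, verb: str) -> str:
    try:
        return resp["presignedUrls"][verb.upper()]
    except KeyError:
        raise KeyError(f"no pre-signed URL issued for verb {verb!r}")

download_url = url_for(response, "get")
```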
Get the log output of a batch job or a connect driver. logsId is the log index to return: some cluster types have multiple nodes/workers. Pass 0 to get the main logs, then increase the index to get the other logs.
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
logsId required | string Logs from a Spark instance |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Get the status of a batch job or a connect driver
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
{- "status": {
- "property1": "string",
- "property2": "string"
}
}
Shutdown and remove a Spark instance
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Add the given user to the list of users allowed to access the given instance. You must be an admin or owner of the instance to perform this operation.
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
userId required | string A user |
{- "resourceName": "string",
- "description": "string"
}
Remove the given user from the list of users allowed to access the given instance. You must be an admin or owner of the instance to perform this operation.
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
userId required | string A user |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Get all the logs of a batch job or a connect driver (driver and any executors) as a single Zip file
sparkId required | string A spark instance (batch job, connect driver, or cluster) |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Return the current resource pool set including the total available memory and cores
{- "totalMemory": "string",
- "totalCores": 0,
- "resourcePools": [
- {
- "resourcePoolId": "string",
- "priority": 0,
- "maxApplications": 0,
- "minMemory": "string",
- "minCores": 0,
- "maxMemory": "string",
- "maxCores": 0,
- "defaultMemoryPerJob": "string",
- "defaultCoresPerJob": 0,
- "defaultExecutorsPerJob": 0,
- "maxMemoryPerJob": "string",
- "maxCoresPerJob": 0,
- "maxExecutorsPerJob": 0
}
]
}
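Since pools with larger priority numbers take priority over pools with smaller ones (see the `priority` field description in the ResourcePool schema below), a client that must choose a pool from this response can simply take the maximum. A sketch with illustrative pool data:

```python
# Pick the highest-priority pool from a resource-pool response.
# Larger priority numbers win; a missing priority defaults to 0.
pools = [
    {"resourcePoolId": "batch", "priority": 0},
    {"resourcePoolId": "interactive", "priority": 10},
]

best = max(pools, key=lambda p: p.get("priority", 0))
# best["resourcePoolId"] == "interactive"
```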
Update the set of available resource pools
resourcePools required | Array of objects (ResourcePools) The set of resource pools |
{- "resourcePools": [
- {
- "resourcePoolId": "string",
- "priority": 0,
- "maxApplications": 0,
- "minMemory": "string",
- "minCores": 0,
- "maxMemory": "string",
- "maxCores": 0,
- "defaultMemoryPerJob": "string",
- "defaultCoresPerJob": 0,
- "defaultExecutorsPerJob": 0,
- "maxMemoryPerJob": "string",
- "maxCoresPerJob": 0,
- "maxExecutorsPerJob": 0
}
]
}
{- "resourceName": "string",
- "description": "string"
}
Replace the resource pool user assignments
resourcePoolId required | string The resource pool name |
userIds required | Array of strings Set of users with this role |
[
  {
    "resourcePoolId": "string",
    "userIds": [
      "string"
    ]
  }
]

{
  "resourceName": "string",
  "description": "string"
}
Replace the entire set of user to role assignments
roleId required | string Role name |
userIds required | Array of strings Set of users with this role |
[
  {
    "roleId": "string",
    "userIds": [
      "string"
    ]
  }
]

{
  "resourceName": "string",
  "description": "string"
}
Create a new secret upload
comment required | string Comment or description. Used only for your own reference purposes. |
secrets required | object Map of name-to-binary secrets. Data must be Base64 encoded. When the uploaded secret is used in a Spark Connect session, batch job, etc., this map of secrets/values is set as environment variables; thus each secret name must be a valid environment variable identifier. See Uploads for more details and limits. |
{- "comment": "string",
- "secrets": {
- "property1": "string",
- "property2": "string"
}
}
{- "uploadId": "string",
- "comment": "string",
- "secretNames": [
- "string"
]
}
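Secret values must be Base64 encoded before upload. A sketch of building the request body (the secret name and value are illustrative; the name is chosen to be a valid environment variable identifier, as required above):

```python
import base64
import json

# Base64-encode each secret value; the name becomes an environment
# variable in the Spark driver and executors.
raw_secrets = {"DB_PASSWORD": b"s3cret"}   # illustrative secret

payload = {
    "comment": "database credentials",
    "secrets": {
        name: base64.b64encode(value).decode("ascii")
        for name, value in raw_secrets.items()
    },
}
body = json.dumps(payload)
```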
Create a new file upload
comment required | string Comment or description. Used only for your own reference purposes. |
textData required | object Map of name-to-text files/data |
binaryData required | object Map of name-to-binary files/data. Data must be Base64 encoded. |
{- "comment": "string",
- "textData": {
- "property1": "string",
- "property2": "string"
}, - "binaryData": {
- "property1": "string",
- "property2": "string"
}
}
{- "uploadId": "string",
- "comment": "string",
- "textNames": [
- "string"
], - "binaryNames": [
- "string"
]
}
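Text files go in `textData` as-is, while binary files go in `binaryData` Base64 encoded. A sketch of building the request body with illustrative file names and contents:

```python
import base64
import json

# Build a file-upload request: text data is sent verbatim,
# binary data is Base64 encoded per the field descriptions above.
payload = {
    "comment": "job config and model weights",
    "textData": {"app.conf": "spark.executor.memory=2g\n"},
    "binaryData": {
        "weights.bin": base64.b64encode(b"\x00\x01\x02").decode("ascii"),
    },
}
body = json.dumps(payload)
```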
Get a secret upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
{- "comment": "string",
- "secretNames": [
- "string"
]
}
Update a secret upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
comment required | string Comment or description. Used only for your own reference purposes. |
secrets required | object Map of name-to-binary secrets. Data must be Base64 encoded. When the uploaded secret is used in a Spark Connect session, batch job, etc., this map of secrets/values is set as environment variables; thus each secret name must be a valid environment variable identifier. See Uploads for more details and limits. |
{- "comment": "string",
- "secrets": {
- "property1": "string",
- "property2": "string"
}
}
{- "resourceName": "string",
- "description": "string"
}
Delete a secret upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
Get a file upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
{- "comment": "string",
- "textData": {
- "property1": "string",
- "property2": "string"
}, - "binaryData": {
- "property1": "string",
- "property2": "string"
}
}
Update a file upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
comment required | string Comment or description. Used only for your own reference purposes. |
textData required | object Map of name-to-text files/data |
binaryData required | object Map of name-to-binary files/data. Data must be Base64 encoded. |
{- "comment": "string",
- "textData": {
- "property1": "string",
- "property2": "string"
}, - "binaryData": {
- "property1": "string",
- "property2": "string"
}
}
{- "resourceName": "string",
- "description": "string"
}
Delete a file upload
uploadId required | string A text or binary file. See Uploads for more details and limits. |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
comment required | string Comment or description. Used only for your own reference purposes. |
secretNames required | Array of strings Secret names |
{- "comment": "string",
- "secretNames": [
- "string"
]
}
applicationResource required | string The application resource to run - must be on an accessible object store |
mainClass | string The main class of the batch job |
cronSchedule | string Optional CRON schedule. If provided, runs the job on the given schedule. See Wikipedia's CRON article for details on CRON schedules. |
ttlSecondsAfterFinished | integer <int32> Optional. Normally, the Spark driver remains after the job completes; if set, the driver is removed this many seconds after the job finishes. |
applicationArguments | Array of strings Any application arguments |
applicationName | string The application name. If not provided, a name will be generated. |
sparkProperties | object Any Spark configuration/properties to set |
jars | Array of strings Any jars to pass to the application |
pythonFiles | Array of strings Any Python files to pass to the application |
files | Array of strings Any files to pass to the application |
archives | Array of strings Any archives to pass to the application |
environmentVariables | object Any environment variables to set |
resourcePool | string Optional - the resource pool to use (you must have permission to use it) |
secretUploads | Array of strings Optional - secret uploads. Secrets will be set as environment variables in the Spark driver and executors. |
fileUploads | Array of objects Optional - file uploads (read only) |
inlineFileUploads | Array of objects Optional - inline file uploads. See Uploads for more details and limits. (read only) |
options | Array of strings Items Value: "EncryptCommunication" |
{- "applicationResource": "string",
- "mainClass": "string",
- "cronSchedule": "string",
- "ttlSecondsAfterFinished": 0,
- "applicationArguments": [
- "string"
], - "applicationName": "string",
- "sparkProperties": {
- "property1": "string",
- "property2": "string"
}, - "jars": [
- "string"
], - "pythonFiles": [
- "string"
], - "files": [
- "string"
], - "archives": [
- "string"
], - "environmentVariables": {
- "property1": "string",
- "property2": "string"
}, - "resourcePool": "string",
- "secretUploads": [
- "string"
], - "fileUploads": [
- {
- "uploadId": "string",
- "mountPath": "string"
}
], - "inlineFileUploads": [
- {
- "comment": "string",
- "fileName": "string",
- "uploadType": "Text",
- "data": "string",
- "mountPath": "string"
}
], - "options": [
- "EncryptCommunication"
]
}
presignedUrls required | object The pre-signed URLs. The key is an HTTP verb (GET, PUT, POST, DELETE); the value is the pre-signed URL. |
{- "presignedUrls": {
- "property1": "string",
- "property2": "string"
}
}
comment required | string Comment or description. Used only for your own reference purposes. |
secrets required | object Map of name-to-binary secrets. Data must be Base64 encoded. When the uploaded secret is used in a Spark Connect session, batch job, etc., this map of secrets/values is set as environment variables; thus each secret name must be a valid environment variable identifier. See Uploads for more details and limits. |
{- "comment": "string",
- "secrets": {
- "property1": "string",
- "property2": "string"
}
}
resourcePools required | Array of objects (ResourcePools) The set of resource pools |
{- "resourcePools": [
- {
- "resourcePoolId": "string",
- "priority": 0,
- "maxApplications": 0,
- "minMemory": "string",
- "minCores": 0,
- "maxMemory": "string",
- "maxCores": 0,
- "defaultMemoryPerJob": "string",
- "defaultCoresPerJob": 0,
- "defaultExecutorsPerJob": 0,
- "maxMemoryPerJob": "string",
- "maxCoresPerJob": 0,
- "maxExecutorsPerJob": 0
}
]
}
comment required | string Comment or description. Used only for your own reference purposes. |
textData required | object Map of name-to-text files/data |
binaryData required | object Map of name-to-binary files/data. Data must be Base64 encoded. |
{- "comment": "string",
- "textData": {
- "property1": "string",
- "property2": "string"
}, - "binaryData": {
- "property1": "string",
- "property2": "string"
}
}
applicationName | string The application name. If not provided, a name will be generated. |
sparkProperties | object Any Spark configuration/properties to set |
jars | Array of strings Any jars to pass to the application |
pythonFiles | Array of strings Any Python files to pass to the application |
files | Array of strings Any files to pass to the application |
archives | Array of strings Any archives to pass to the application |
environmentVariables | object Any environment variables to set |
resourcePool | string Optional - the resource pool to use (you must have permission to use it) |
secretUploads | Array of strings Optional - secret uploads. Secrets will be set as environment variables in the Spark driver and executors. |
fileUploads | Array of objects Optional - file uploads (read only) |
inlineFileUploads | Array of objects Optional - inline file uploads. See Uploads for more details and limits. (read only) |
options | Array of strings Items Value: "EncryptCommunication" |
{- "applicationName": "string",
- "sparkProperties": {
- "property1": "string",
- "property2": "string"
}, - "jars": [
- "string"
], - "pythonFiles": [
- "string"
], - "files": [
- "string"
], - "archives": [
- "string"
], - "environmentVariables": {
- "property1": "string",
- "property2": "string"
}, - "resourcePool": "string",
- "secretUploads": [
- "string"
], - "fileUploads": [
- {
- "uploadId": "string",
- "mountPath": "string"
}
], - "inlineFileUploads": [
- {
- "comment": "string",
- "fileName": "string",
- "uploadType": "Text",
- "data": "string",
- "mountPath": "string"
}
], - "options": [
- "EncryptCommunication"
]
}
accessKey required | string The proxy AccessKey provided by your administrator |
secretKey required | string The proxy SecretKey provided by your administrator |
region required | string The S3 region of the bucket |
bucket required | string The bucket for creating the pre-signed URLs |
key required | string The key for creating the pre-signed URLs |
{- "accessKey": "string",
- "secretKey": "string",
- "region": "string",
- "bucket": "string",
- "key": "string"
}
roleId required | string Role name |
userIds required | Array of strings Set of users with this role |
{- "roleId": "string",
- "userIds": [
- "string"
]
}
sparkId required | string The instance Id |
serverSparkVersion required | string The Spark version used |
{- "sparkId": "string",
- "serverSparkVersion": "string"
}
resourcePoolId required | string The name of this resource pool (must be unique) |
priority | integer <int32> The priority of this pool. Pools with larger/higher priority numbers have priority over pools with smaller/lower priority numbers. If not specified, the priority is "0". |
maxApplications | integer <int32> Maximum active applications for this pool |
minMemory required | string Minimum memory (as a Spark quantity string) |
minCores required | integer <int32> Minimum virtual cores |
maxMemory required | string Maximum memory (as a Spark quantity string) |
maxCores required | integer <int32> Maximum virtual cores |
defaultMemoryPerJob required | string Default memory (as a Spark quantity string) per job submitted to the resource pool |
defaultCoresPerJob required | integer <int32> Default virtual cores per job submitted to the resource pool |
defaultExecutorsPerJob required | integer <int32> Default executors per job submitted to the resource pool |
maxMemoryPerJob required | string Maximum memory (as a Spark quantity string) per job submitted to the resource pool |
maxCoresPerJob required | integer <int32> Maximum virtual cores per job submitted to the resource pool |
maxExecutorsPerJob required | integer <int32> Maximum executors per job submitted to the resource pool |
{- "resourcePoolId": "string",
- "priority": 0,
- "maxApplications": 0,
- "minMemory": "string",
- "minCores": 0,
- "maxMemory": "string",
- "maxCores": 0,
- "defaultMemoryPerJob": "string",
- "defaultCoresPerJob": 0,
- "defaultExecutorsPerJob": 0,
- "maxMemoryPerJob": "string",
- "maxCoresPerJob": 0,
- "maxExecutorsPerJob": 0
}
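The memory fields above are Spark quantity strings (e.g. "512m", "4g"). A hedged sketch of parsing one into a byte count for client-side checks — the suffix set follows Spark's JVM-style memory strings, so verify it against your Spark version:

```python
import re

# Parse a Spark memory quantity string ("512m", "4g", "1t", or plain bytes)
# into a byte count. Suffixes are binary multiples (k = 1024, etc.).
_SUFFIXES = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def parse_spark_memory(s: str) -> int:
    m = re.fullmatch(r"(\d+)([bkmgt]?)", s.strip().lower())
    if not m:
        raise ValueError(f"not a Spark memory quantity: {s!r}")
    return int(m.group(1)) * _SUFFIXES.get(m.group(2), 1)
```

For example, a client could use this to reject a job request whose memory ask exceeds the pool's maxMemoryPerJob before submitting it.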
uploadId required | string The ID of this upload |
comment required | string Comment or description. Used only for your own reference purposes. |
textNames required | Array of strings Names of text data in the upload |
binaryNames required | Array of strings Names of binary data in the upload |
{- "uploadId": "string",
- "comment": "string",
- "textNames": [
- "string"
], - "binaryNames": [
- "string"
]
}
sparkId required | string The instance Id |
type required | string The instance type |
createdBy required | string User that created the instance |
details required | object Any additional details about the instance |
{- "sparkId": "string",
- "type": "string",
- "createdBy": "string",
- "details": {
- "property1": "string",
- "property2": "string"
}
}
uploadId required | string The ID of this upload |
comment required | string Comment or description. Used only for your own reference purposes. |
secretNames required | Array of strings Secret names |
{- "uploadId": "string",
- "comment": "string",
- "secretNames": [
- "string"
]
}
time required | string Time of the event |
type required | string Event type |
reason required | string Event reason |
name required | string Event name |
action required | string Event action |
{- "time": "string",
- "type": "string",
- "reason": "string",
- "name": "string",
- "action": "string"
}
resourcePoolId required | string The resource pool name |
userIds required | Array of strings Set of users with this role |
{- "resourcePoolId": "string",
- "userIds": [
- "string"
]
}
sparkId required | string The instance Id |
serverSparkVersion required | string The Spark version used |
{- "sparkId": "string",
- "serverSparkVersion": "string"
}
totalMemory required | string Total available memory (as a Spark quantity string) |
totalCores required | integer <int32> Total available virtual cores |
resourcePools required | Array of objects The set of resource pools |
{- "totalMemory": "string",
- "totalCores": 0,
- "resourcePools": [
- {
- "resourcePoolId": "string",
- "priority": 0,
- "maxApplications": 0,
- "minMemory": "string",
- "minCores": 0,
- "maxMemory": "string",
- "maxCores": 0,
- "defaultMemoryPerJob": "string",
- "defaultCoresPerJob": 0,
- "defaultExecutorsPerJob": 0,
- "maxMemoryPerJob": "string",
- "maxCoresPerJob": 0,
- "maxExecutorsPerJob": 0
}
]
}
reason required | string Error reason/detail (read only) |
metadata required | object Any additional details (read only) |
{- "reason": "string",
- "metadata": {
- "property1": "string",
- "property2": "string"
}
}
fieldViolations required | Array of objects (FieldViolations) Field violations (read only) |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}
resourceName required | string Name of the resource (read only) |
description required | string Violation description (read only) |
{- "resourceName": "string",
- "description": "string"
}
fieldViolations required | Array of objects (FieldViolations) Field violations (read only) |
{- "fieldViolations": [
- {
- "field": "string",
- "description": "string"
}
]
}