public static interface JobRun.Builder extends SdkPojo, CopyableBuilder<JobRun.Builder,JobRun>
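As a quick orientation, here is a minimal usage sketch of this builder. The run ID, job name, timestamp, and state are illustrative placeholders; the example assumes only the standard JobRun.builder() factory and the fluent setters documented below.

```java
import java.time.Instant;
import software.amazon.awssdk.services.glue.model.JobRun;
import software.amazon.awssdk.services.glue.model.JobRunState;

public class JobRunBuilderSketch {
    public static void main(String[] args) {
        // Every setter returns the builder, so calls can be chained; build()
        // produces an immutable JobRun instance.
        JobRun jobRun = JobRun.builder()
                .id("jr_0123456789abcdef")                        // illustrative run ID
                .jobName("example-etl-job")                       // illustrative job name
                .attempt(0)
                .startedOn(Instant.parse("2023-01-01T00:00:00Z"))
                .jobRunState(JobRunState.RUNNING)                 // a String overload also exists
                .build();

        // Generated accessors mirror the builder methods.
        System.out.println(jobRun.jobName() + " is " + jobRun.jobRunState());
    }
}
```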
| Modifier and Type | Method | Description |
|---|---|---|
| JobRun.Builder | allocatedCapacity(Integer allocatedCapacity) | Deprecated. This property is deprecated; use MaxCapacity instead. |
| JobRun.Builder | arguments(Map<String,String> arguments) | The job arguments associated with this run. |
| JobRun.Builder | attempt(Integer attempt) | The number of the attempt to run this job. |
| JobRun.Builder | completedOn(Instant completedOn) | The date and time that this job run completed. |
| JobRun.Builder | dpuSeconds(Double dpuSeconds) | This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X, 2 for G.2X, or 0.25 for G.025X workers). |
| JobRun.Builder | errorMessage(String errorMessage) | An error message associated with this job run. |
| JobRun.Builder | executionClass(ExecutionClass executionClass) | Indicates whether the job is run with a standard or flexible execution class. |
| JobRun.Builder | executionClass(String executionClass) | Indicates whether the job is run with a standard or flexible execution class. |
| JobRun.Builder | executionTime(Integer executionTime) | The amount of time (in seconds) that the job run consumed resources. |
| JobRun.Builder | glueVersion(String glueVersion) | In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes available in a job. |
| JobRun.Builder | id(String id) | The ID of this job run. |
| JobRun.Builder | jobName(String jobName) | The name of the job definition being used in this run. |
| JobRun.Builder | jobRunState(JobRunState jobRunState) | The current state of the job run. |
| JobRun.Builder | jobRunState(String jobRunState) | The current state of the job run. |
| JobRun.Builder | lastModifiedOn(Instant lastModifiedOn) | The last time that this job run was modified. |
| JobRun.Builder | logGroupName(String logGroupName) | The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS. |
| JobRun.Builder | maxCapacity(Double maxCapacity) | For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. |
| default JobRun.Builder | notificationProperty(Consumer<NotificationProperty.Builder> notificationProperty) | Specifies configuration properties of a job run notification. |
| JobRun.Builder | notificationProperty(NotificationProperty notificationProperty) | Specifies configuration properties of a job run notification. |
| JobRun.Builder | numberOfWorkers(Integer numberOfWorkers) | The number of workers of a defined workerType that are allocated when a job runs. |
| JobRun.Builder | predecessorRuns(Collection<Predecessor> predecessorRuns) | A list of predecessors to this job run. |
| JobRun.Builder | predecessorRuns(Consumer<Predecessor.Builder>... predecessorRuns) | A list of predecessors to this job run. |
| JobRun.Builder | predecessorRuns(Predecessor... predecessorRuns) | A list of predecessors to this job run. |
| JobRun.Builder | previousRunId(String previousRunId) | The ID of the previous run of this job. |
| JobRun.Builder | securityConfiguration(String securityConfiguration) | The name of the SecurityConfiguration structure to be used with this job run. |
| JobRun.Builder | startedOn(Instant startedOn) | The date and time at which this job run was started. |
| JobRun.Builder | timeout(Integer timeout) | The JobRun timeout in minutes. |
| JobRun.Builder | triggerName(String triggerName) | The name of the trigger that started this job run. |
| JobRun.Builder | workerType(String workerType) | The type of predefined worker that is allocated when a job runs. |
| JobRun.Builder | workerType(WorkerType workerType) | The type of predefined worker that is allocated when a job runs. |
Methods inherited from interface software.amazon.awssdk.core.SdkPojo: equalsBySdkFields, sdkFields
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder: copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder: applyMutation, build

Method Details

JobRun.Builder id(String id)
The ID of this job run.
id - The ID of this job run.

JobRun.Builder attempt(Integer attempt)
The number of the attempt to run this job.
attempt - The number of the attempt to run this job.

JobRun.Builder previousRunId(String previousRunId)
The ID of the previous run of this job. For example, the JobRunId specified in the
StartJobRun action.
previousRunId - The ID of the previous run of this job. For example, the JobRunId specified in the
StartJobRun action.

JobRun.Builder triggerName(String triggerName)
The name of the trigger that started this job run.
triggerName - The name of the trigger that started this job run.

JobRun.Builder jobName(String jobName)
The name of the job definition being used in this run.
jobName - The name of the job definition being used in this run.

JobRun.Builder startedOn(Instant startedOn)
The date and time at which this job run was started.
startedOn - The date and time at which this job run was started.

JobRun.Builder lastModifiedOn(Instant lastModifiedOn)
The last time that this job run was modified.
lastModifiedOn - The last time that this job run was modified.

JobRun.Builder completedOn(Instant completedOn)
The date and time that this job run completed.
completedOn - The date and time that this job run completed.

JobRun.Builder jobRunState(String jobRunState)
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
jobRunState - The current state of the job run. For more information about the statuses of jobs that have terminated
abnormally, see Glue Job
Run Statuses.
See Also: JobRunState

JobRun.Builder jobRunState(JobRunState jobRunState)
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
jobRunState - The current state of the job run. For more information about the statuses of jobs that have terminated
abnormally, see Glue Job
Run Statuses.
See Also: JobRunState

JobRun.Builder arguments(Map<String,String> arguments)
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
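For illustration, a minimal sketch of populating this map follows. The argument keys, S3 path, and bookmark option are placeholder examples rather than recommendations; see the developer guide topics above for the parameters Glue itself consumes.

```java
import java.util.Map;
import software.amazon.awssdk.services.glue.model.JobRun;

public class JobRunArgumentsSketch {
    public static void main(String[] args) {
        // Keys follow the usual "--name" Glue argument convention; values are placeholders.
        Map<String, String> runArguments = Map.of(
                "--input_path", "s3://example-bucket/input/",     // consumed by your own script
                "--job-bookmark-option", "job-bookmark-enable");  // example of a Glue-consumed argument

        JobRun run = JobRun.builder()
                .arguments(runArguments)   // replaces the defaults set in the job definition
                .build();

        System.out.println(run.arguments());
    }
}
```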
arguments - The job arguments associated with this run. For this job run, they replace the default arguments set
in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
JobRun.Builder errorMessage(String errorMessage)
An error message associated with this job run.
errorMessage - An error message associated with this job run.

JobRun.Builder predecessorRuns(Collection<Predecessor> predecessorRuns)
A list of predecessors to this job run.
predecessorRuns - A list of predecessors to this job run.

JobRun.Builder predecessorRuns(Predecessor... predecessorRuns)
A list of predecessors to this job run.
predecessorRuns - A list of predecessors to this job run.

JobRun.Builder predecessorRuns(Consumer<Predecessor.Builder>... predecessorRuns)
A list of predecessors to this job run.
This is a convenience method that creates an instance of the Predecessor.Builder, avoiding the need to create one
manually via Predecessor.builder(). When the Consumer completes, SdkBuilder.build() is called immediately and its
result is passed to predecessorRuns(List<Predecessor>).
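A brief sketch of this convenience overload, with the varargs form shown for comparison; the predecessor job name and run ID are placeholders.

```java
import software.amazon.awssdk.services.glue.model.JobRun;
import software.amazon.awssdk.services.glue.model.Predecessor;

public class PredecessorRunsSketch {
    public static void main(String[] args) {
        // Consumer overload: each lambda receives a Predecessor.Builder, and
        // build() is invoked for you when the lambda returns.
        JobRun viaConsumer = JobRun.builder()
                .predecessorRuns(p -> p.jobName("upstream-job").runId("jr_predecessor_example"))
                .build();

        // Equivalent call using a pre-built Predecessor instance.
        JobRun viaVarargs = JobRun.builder()
                .predecessorRuns(Predecessor.builder()
                        .jobName("upstream-job")
                        .runId("jr_predecessor_example")
                        .build())
                .build();

        System.out.println(viaConsumer.predecessorRuns().equals(viaVarargs.predecessorRuns()));
    }
}
```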
predecessorRuns - a consumer that will call methods on Predecessor.Builder
See Also: predecessorRuns(java.util.Collection)

@Deprecated
JobRun.Builder allocatedCapacity(Integer allocatedCapacity)
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
allocatedCapacity - This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
JobRun.Builder executionTime(Integer executionTime)
The amount of time (in seconds) that the job run consumed resources.
executionTime - The amount of time (in seconds) that the job run consumed resources.

JobRun.Builder timeout(Integer timeout)
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources
before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in
the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
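For example, a run carrying the documented non-streaming default could be represented as follows; both values are illustrative.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class JobRunTimeoutSketch {
    public static void main(String[] args) {
        JobRun run = JobRun.builder()
                .timeout(2880)         // minutes; the documented default for non-streaming jobs
                .executionTime(3600)   // seconds of resources consumed so far (illustrative)
                .build();

        System.out.println("Timeout (minutes): " + run.timeout());
    }
}
```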
timeout - The JobRun timeout in minutes. This is the maximum time that a job run can consume
resources before it is terminated and enters TIMEOUT status. This value overrides the
timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
JobRun.Builder maxCapacity(Double maxCapacity)
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a
Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python
shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either
0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming
ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is
10 DPUs. This job type cannot have a fractional DPU allocation.
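A small sketch tying those rules to the builder; the capacities shown simply restate the documented defaults for the two job types.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class MaxCapacitySketch {
    public static void main(String[] args) {
        // Python shell job run (JobCommand.Name="pythonshell"): 0.0625 or 1 DPU.
        JobRun pythonShellRun = JobRun.builder()
                .maxCapacity(0.0625)
                .build();

        // Spark ETL job run (JobCommand.Name="glueetl"): whole DPUs from 2 to 100.
        JobRun sparkEtlRun = JobRun.builder()
                .maxCapacity(10.0)
                .build();

        System.out.println(pythonShellRun.maxCapacity() + " / " + sparkEtlRun.maxCapacity());
    }
}
```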
maxCapacity - For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data
processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of
processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more
information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should
specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a
Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate
either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark
streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs.
The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
JobRun.Builder workerType(String workerType)
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads
such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads
such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is
available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US
East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia
Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is
available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
supported for the G.4X worker type.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB
disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low
volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
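To connect these worker types to the builder, a short sketch using the String overload follows (the WorkerType enum overload behaves the same); the worker counts are arbitrary examples.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class WorkerTypeSketch {
    public static void main(String[] args) {
        // A Spark run on ten G.2X workers; do not also set maxCapacity when
        // workerType and numberOfWorkers are used.
        JobRun sparkRun = JobRun.builder()
                .workerType("G.2X")
                .numberOfWorkers(10)
                .build();

        // A Ray run on Z.2X workers.
        JobRun rayRun = JobRun.builder()
                .workerType("Z.2X")
                .numberOfWorkers(5)
                .build();

        System.out.println(sparkRun.workerTypeAsString() + " / " + rayRun.workerTypeAsString());
    }
}
```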
workerType - The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X,
G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB
disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for
workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to
run most jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB
disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for
workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to
run most jobs.
For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with
256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker
type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and
queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the
following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia
Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe
(Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with
512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker
type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and
queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same
Amazon Web Services Regions as supported for the G.4X worker type.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with
84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type
for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128
GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
See Also: WorkerType

JobRun.Builder workerType(WorkerType workerType)
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads
such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads
such as data transforms, joins, and queries; it offers a scalable and cost-effective way to run most jobs.
For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is
available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US
East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia
Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is
available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as
supported for the G.4X worker type.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB
disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low
volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
workerType - The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X,
G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB
disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for
workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to
run most jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB
disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for
workloads such as data transforms, joins, and queries; it offers a scalable and cost-effective way to
run most jobs.
For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with
256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker
type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and
queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the
following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia
Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe
(Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with
512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker
type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and
queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same
Amazon Web Services Regions as supported for the G.4X worker type.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with
84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type
for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128
GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
See Also: WorkerType

JobRun.Builder numberOfWorkers(Integer numberOfWorkers)
The number of workers of a defined workerType that are allocated when a job runs.
numberOfWorkers - The number of workers of a defined workerType that are allocated when a job runs.

JobRun.Builder securityConfiguration(String securityConfiguration)
The name of the SecurityConfiguration structure to be used with this job run.
securityConfiguration - The name of the SecurityConfiguration structure to be used with this job run.

JobRun.Builder logGroupName(String logGroupName)
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using
KMS. This name can be /aws-glue/jobs/, in which case the default encryption is NONE
. If you add a role name and SecurityConfiguration name (in other words,
/aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/), then that security configuration is
used to encrypt the log group.
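A hedged sketch of that naming convention; the role and security configuration names are placeholders.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class LogGroupNameSketch {
    public static void main(String[] args) {
        // Placeholder names, following the /aws-glue/jobs-<roleName>-<securityConfigurationName>/
        // pattern described above.
        String roleName = "ExampleGlueRole";
        String securityConfigurationName = "ExampleSecurityConfig";

        JobRun run = JobRun.builder()
                .logGroupName("/aws-glue/jobs-" + roleName + "-" + securityConfigurationName + "/")
                .securityConfiguration(securityConfigurationName)
                .build();

        System.out.println(run.logGroupName());
    }
}
```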
logGroupName - The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch
using KMS. This name can be /aws-glue/jobs/, in which case the default encryption is
NONE. If you add a role name and SecurityConfiguration name (in other words,
/aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/), then that security
configuration is used to encrypt the log group.

JobRun.Builder notificationProperty(NotificationProperty notificationProperty)
Specifies configuration properties of a job run notification.
notificationProperty - Specifies configuration properties of a job run notification.

default JobRun.Builder notificationProperty(Consumer<NotificationProperty.Builder> notificationProperty)
Specifies configuration properties of a job run notification.
This is a convenience method that creates an instance of the NotificationProperty.Builder, avoiding
the need to create one manually via NotificationProperty.builder().
When the Consumer completes, SdkBuilder.build() is called immediately and
its result is passed to notificationProperty(NotificationProperty).
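A minimal sketch of this shortcut next to its explicit equivalent; the notifyDelayAfter value (minutes before a delay notification is sent) is an arbitrary example.

```java
import software.amazon.awssdk.services.glue.model.JobRun;
import software.amazon.awssdk.services.glue.model.NotificationProperty;

public class NotificationPropertySketch {
    public static void main(String[] args) {
        // Consumer overload: the lambda configures a NotificationProperty.Builder,
        // and its build() result is passed to notificationProperty(NotificationProperty).
        JobRun viaConsumer = JobRun.builder()
                .notificationProperty(np -> np.notifyDelayAfter(10))
                .build();

        // Equivalent explicit form.
        JobRun explicit = JobRun.builder()
                .notificationProperty(NotificationProperty.builder().notifyDelayAfter(10).build())
                .build();

        System.out.println(viaConsumer.notificationProperty().equals(explicit.notificationProperty()));
    }
}
```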
notificationProperty - a consumer that will call methods on NotificationProperty.Builder
See Also: notificationProperty(NotificationProperty)

JobRun.Builder glueVersion(String glueVersion)
In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes
available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of Ray,
Python and additional libraries available in your Ray job are determined by the Runtime
parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
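A short sketch reflecting that guidance; the version strings are the ones named in the text above.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class GlueVersionSketch {
    public static void main(String[] args) {
        // Spark run: GlueVersion selects the Spark and Python versions.
        JobRun sparkRun = JobRun.builder()
                .glueVersion("3.0")
                .build();

        // Ray run: the text above calls for GlueVersion 4.0 or greater.
        JobRun rayRun = JobRun.builder()
                .glueVersion("4.0")
                .build();

        System.out.println(sparkRun.glueVersion() + " / " + rayRun.glueVersion());
    }
}
```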
glueVersion - In Spark jobs, GlueVersion determines the versions of Apache Spark and Python that Glue makes
available in a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion to 4.0 or greater. However, the versions of
Ray, Python and additional libraries available in your Ray job are determined by the
Runtime parameter of the Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
JobRun.Builder dpuSeconds(Double dpuSeconds)
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during
the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X, 2 for
G.2X, or 0.25 for G.025X workers). This value may be different than the
executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as the
number of executors running at a given time may be less than the MaxCapacity. Therefore, it is
possible that the value of DPUSeconds is less than executionEngineRuntime *
MaxCapacity.
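To make the arithmetic concrete, here is a sketch under assumed numbers: two G.2X executors (DPU factor 2) that ran for 300 and 180 seconds during an Auto Scaling run.

```java
import software.amazon.awssdk.services.glue.model.JobRun;

public class DpuSecondsSketch {
    public static void main(String[] args) {
        // Assumed per-executor runtimes (seconds) for an Auto Scaling run on G.2X workers.
        double[] executorSeconds = {300.0, 180.0};
        double dpuFactor = 2.0;   // 1 for G.1X, 2 for G.2X, 0.25 for G.025X

        double dpuSeconds = 0.0;
        for (double seconds : executorSeconds) {
            dpuSeconds += seconds * dpuFactor;   // weight each executor's time by the DPU factor
        }

        JobRun run = JobRun.builder()
                .dpuSeconds(dpuSeconds)   // 960.0 for the numbers above
                .build();

        System.out.println(run.dpuSeconds());
    }
}
```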
dpuSeconds - This field populates only for Auto Scaling job runs, and represents the total time each executor ran
during the lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X, 2
for G.2X, or 0.25 for G.025X workers). This value may be different than the
executionEngineRuntime * MaxCapacity as in the case of Auto Scaling jobs, as
the number of executors running at a given time may be less than the MaxCapacity.
Therefore, it is possible that the value of DPUSeconds is less than
executionEngineRuntime * MaxCapacity.

JobRun.Builder executionClass(String executionClass)
Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set
ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
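A brief sketch of both overloads; as noted above, FLEX applies only to Glue 3.0 and later glueetl jobs.

```java
import software.amazon.awssdk.services.glue.model.ExecutionClass;
import software.amazon.awssdk.services.glue.model.JobRun;

public class ExecutionClassSketch {
    public static void main(String[] args) {
        // Enum overload.
        JobRun flexRun = JobRun.builder()
                .executionClass(ExecutionClass.FLEX)
                .build();

        // String overload; the raw value is preserved and surfaced via executionClassAsString().
        JobRun standardRun = JobRun.builder()
                .executionClass("STANDARD")
                .build();

        System.out.println(flexRun.executionClassAsString() + " / " + standardRun.executionClassAsString());
    }
}
```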
executionClass - Indicates whether the job is run with a standard or flexible execution class. The standard
execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated
resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set
ExecutionClass to FLEX. The flexible execution class is available for Spark
jobs.
See Also: ExecutionClass

JobRun.Builder executionClass(ExecutionClass executionClass)
Indicates whether the job is run with a standard or flexible execution class. The standard execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set
ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
executionClass - Indicates whether the job is run with a standard or flexible execution class. The standard
execution-class is ideal for time-sensitive workloads that require fast job startup and dedicated
resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set
ExecutionClass to FLEX. The flexible execution class is available for Spark
jobs.
See Also: ExecutionClass