
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.dataproc/v1beta2.Cluster


Creates a cluster in a project. The returned Operation.metadata will be ClusterOperationMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#clusteroperationmetadata). Auto-naming is currently not supported for this resource.
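
As a quick orientation before the full reference below, here is a minimal Python sketch of the resource. The project ID, region, zone, and machine types are placeholder assumptions, not values prescribed by the API.

import pulumi
import pulumi_google_native as google_native

# Minimal sketch: one master and two workers; everything else is left to
# Dataproc defaults. All identifiers below are placeholder values.
cluster = google_native.dataproc.v1beta2.Cluster("example-cluster",
    cluster_name="example-cluster",
    project="my-project",
    region="us-central1",
    config=google_native.dataproc.v1beta2.ClusterConfigArgs(
        gce_cluster_config=google_native.dataproc.v1beta2.GceClusterConfigArgs(
            zone_uri="us-central1-a",
        ),
        master_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
            num_instances=1,
            machine_type_uri="n1-standard-4",
        ),
        worker_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
            num_instances=2,
            machine_type_uri="n1-standard-4",
        ),
    ))

pulumi.export("cluster_uuid", cluster.cluster_uuid)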

Create Cluster Resource

Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

Constructor syntax

new Cluster(name: string, args: ClusterArgs, opts?: CustomResourceOptions);
@overload
def Cluster(resource_name: str,
            args: ClusterArgs,
            opts: Optional[ResourceOptions] = None)

@overload
def Cluster(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            cluster_name: Optional[str] = None,
            config: Optional[ClusterConfigArgs] = None,
            region: Optional[str] = None,
            labels: Optional[Mapping[str, str]] = None,
            project: Optional[str] = None,
            request_id: Optional[str] = None)
func NewCluster(ctx *Context, name string, args ClusterArgs, opts ...ResourceOption) (*Cluster, error)
public Cluster(string name, ClusterArgs args, CustomResourceOptions? opts = null)
public Cluster(String name, ClusterArgs args)
public Cluster(String name, ClusterArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1beta2:Cluster
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.

Parameters

name This property is required. string
The unique name of the resource.
args This property is required. ClusterArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
resource_name This property is required. str
The unique name of the resource.
args This property is required. ClusterArgs
The arguments to resource properties.
opts ResourceOptions
Bag of options to control resource's behavior.
ctx Context
Context object for the current deployment.
name This property is required. string
The unique name of the resource.
args This property is required. ClusterArgs
The arguments to resource properties.
opts ResourceOption
Bag of options to control resource's behavior.
name This property is required. string
The unique name of the resource.
args This property is required. ClusterArgs
The arguments to resource properties.
opts CustomResourceOptions
Bag of options to control resource's behavior.
name This property is required. String
The unique name of the resource.
args This property is required. ClusterArgs
The arguments to resource properties.
options CustomResourceOptions
Bag of options to control resource's behavior.
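
In Python, for example, the options bag is a pulumi.ResourceOptions. The sketch below only illustrates passing one option (protect); the cluster values are placeholders and the config is left to service defaults.

import pulumi
import pulumi_google_native as google_native

# Illustrative use of the options bag: protect the cluster from accidental
# deletion. Cluster inputs are placeholders.
cluster = google_native.dataproc.v1beta2.Cluster("example-cluster",
    cluster_name="example-cluster",
    region="us-central1",
    config=google_native.dataproc.v1beta2.ClusterConfigArgs(),
    opts=pulumi.ResourceOptions(protect=True))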

Constructor example

The following reference example uses placeholder values for all input properties.

var exampleclusterResourceResourceFromDataprocv1beta2 = new GoogleNative.Dataproc.V1Beta2.Cluster("exampleclusterResourceResourceFromDataprocv1beta2", new()
{
    ClusterName = "string",
    Config = new GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfigArgs
    {
        AutoscalingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigArgs
        {
            PolicyUri = "string",
        },
        ConfigBucket = "string",
        EncryptionConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigArgs
        {
            GcePdKmsKeyName = "string",
        },
        EndpointConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigArgs
        {
            EnableHttpPortAccess = false,
        },
        GceClusterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigArgs
        {
            InternalIpOnly = false,
            Metadata = 
            {
                { "string", "string" },
            },
            NetworkUri = "string",
            NodeGroupAffinity = new GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityArgs
            {
                NodeGroupUri = "string",
            },
            PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1Beta2.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            ReservationAffinity = new GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityArgs
            {
                ConsumeReservationType = GoogleNative.Dataproc.V1Beta2.ReservationAffinityConsumeReservationType.TypeUnspecified,
                Key = "string",
                Values = new[]
                {
                    "string",
                },
            },
            ServiceAccount = "string",
            ServiceAccountScopes = new[]
            {
                "string",
            },
            ShieldedInstanceConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigArgs
            {
                EnableIntegrityMonitoring = false,
                EnableSecureBoot = false,
                EnableVtpm = false,
            },
            SubnetworkUri = "string",
            Tags = new[]
            {
                "string",
            },
            ZoneUri = "string",
        },
        GkeClusterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigArgs
        {
            NamespacedGkeDeploymentTarget = new GoogleNative.Dataproc.V1Beta2.Inputs.NamespacedGkeDeploymentTargetArgs
            {
                ClusterNamespace = "string",
                TargetGkeCluster = "string",
            },
        },
        InitializationActions = new[]
        {
            new GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionArgs
            {
                ExecutableFile = "string",
                ExecutionTimeout = "string",
            },
        },
        LifecycleConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigArgs
        {
            AutoDeleteTime = "string",
            AutoDeleteTtl = "string",
            IdleDeleteTtl = "string",
        },
        MasterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
        MetastoreConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigArgs
        {
            DataprocMetastoreService = "string",
        },
        SecondaryWorkerConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
        SecurityConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigArgs
        {
            KerberosConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigArgs
            {
                CrossRealmTrustAdminServer = "string",
                CrossRealmTrustKdc = "string",
                CrossRealmTrustRealm = "string",
                CrossRealmTrustSharedPasswordUri = "string",
                EnableKerberos = false,
                KdcDbKeyUri = "string",
                KeyPasswordUri = "string",
                KeystorePasswordUri = "string",
                KeystoreUri = "string",
                KmsKeyUri = "string",
                Realm = "string",
                RootPrincipalPasswordUri = "string",
                TgtLifetimeHours = 0,
                TruststorePasswordUri = "string",
                TruststoreUri = "string",
            },
        },
        SoftwareConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigArgs
        {
            ImageVersion = "string",
            OptionalComponents = new[]
            {
                GoogleNative.Dataproc.V1Beta2.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
            },
            Properties = 
            {
                { "string", "string" },
            },
        },
        TempBucket = "string",
        WorkerConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
        {
            Accelerators = new[]
            {
                new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
                {
                    AcceleratorCount = 0,
                    AcceleratorTypeUri = "string",
                },
            },
            DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
            {
                BootDiskSizeGb = 0,
                BootDiskType = "string",
                NumLocalSsds = 0,
            },
            ImageUri = "string",
            MachineTypeUri = "string",
            MinCpuPlatform = "string",
            NumInstances = 0,
            Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
    },
    Region = "string",
    Labels = 
    {
        { "string", "string" },
    },
    Project = "string",
    RequestId = "string",
});
example, err := dataprocv1beta2.NewCluster(ctx, "exampleclusterResourceResourceFromDataprocv1beta2", &dataprocv1beta2.ClusterArgs{
	ClusterName: pulumi.String("string"),
	Config: &dataproc.ClusterConfigArgs{
		AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
			PolicyUri: pulumi.String("string"),
		},
		ConfigBucket: pulumi.String("string"),
		EncryptionConfig: &dataproc.EncryptionConfigArgs{
			GcePdKmsKeyName: pulumi.String("string"),
		},
		EndpointConfig: &dataproc.EndpointConfigArgs{
			EnableHttpPortAccess: pulumi.Bool(false),
		},
		GceClusterConfig: &dataproc.GceClusterConfigArgs{
			InternalIpOnly: pulumi.Bool(false),
			Metadata: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
			NetworkUri: pulumi.String("string"),
			NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
				NodeGroupUri: pulumi.String("string"),
			},
			PrivateIpv6GoogleAccess: dataprocv1beta2.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
			ReservationAffinity: &dataproc.ReservationAffinityArgs{
				ConsumeReservationType: dataprocv1beta2.ReservationAffinityConsumeReservationTypeTypeUnspecified,
				Key:                    pulumi.String("string"),
				Values: pulumi.StringArray{
					pulumi.String("string"),
				},
			},
			ServiceAccount: pulumi.String("string"),
			ServiceAccountScopes: pulumi.StringArray{
				pulumi.String("string"),
			},
			ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
				EnableIntegrityMonitoring: pulumi.Bool(false),
				EnableSecureBoot:          pulumi.Bool(false),
				EnableVtpm:                pulumi.Bool(false),
			},
			SubnetworkUri: pulumi.String("string"),
			Tags: pulumi.StringArray{
				pulumi.String("string"),
			},
			ZoneUri: pulumi.String("string"),
		},
		GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
			NamespacedGkeDeploymentTarget: &dataproc.NamespacedGkeDeploymentTargetArgs{
				ClusterNamespace: pulumi.String("string"),
				TargetGkeCluster: pulumi.String("string"),
			},
		},
		InitializationActions: dataproc.NodeInitializationActionArray{
			&dataproc.NodeInitializationActionArgs{
				ExecutableFile:   pulumi.String("string"),
				ExecutionTimeout: pulumi.String("string"),
			},
		},
		LifecycleConfig: &dataproc.LifecycleConfigArgs{
			AutoDeleteTime: pulumi.String("string"),
			AutoDeleteTtl:  pulumi.String("string"),
			IdleDeleteTtl:  pulumi.String("string"),
		},
		MasterConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb: pulumi.Int(0),
				BootDiskType:   pulumi.String("string"),
				NumLocalSsds:   pulumi.Int(0),
			},
			ImageUri:       pulumi.String("string"),
			MachineTypeUri: pulumi.String("string"),
			MinCpuPlatform: pulumi.String("string"),
			NumInstances:   pulumi.Int(0),
			Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
		},
		MetastoreConfig: &dataproc.MetastoreConfigArgs{
			DataprocMetastoreService: pulumi.String("string"),
		},
		SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb: pulumi.Int(0),
				BootDiskType:   pulumi.String("string"),
				NumLocalSsds:   pulumi.Int(0),
			},
			ImageUri:       pulumi.String("string"),
			MachineTypeUri: pulumi.String("string"),
			MinCpuPlatform: pulumi.String("string"),
			NumInstances:   pulumi.Int(0),
			Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
		},
		SecurityConfig: &dataproc.SecurityConfigArgs{
			KerberosConfig: &dataproc.KerberosConfigArgs{
				CrossRealmTrustAdminServer:       pulumi.String("string"),
				CrossRealmTrustKdc:               pulumi.String("string"),
				CrossRealmTrustRealm:             pulumi.String("string"),
				CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
				EnableKerberos:                   pulumi.Bool(false),
				KdcDbKeyUri:                      pulumi.String("string"),
				KeyPasswordUri:                   pulumi.String("string"),
				KeystorePasswordUri:              pulumi.String("string"),
				KeystoreUri:                      pulumi.String("string"),
				KmsKeyUri:                        pulumi.String("string"),
				Realm:                            pulumi.String("string"),
				RootPrincipalPasswordUri:         pulumi.String("string"),
				TgtLifetimeHours:                 pulumi.Int(0),
				TruststorePasswordUri:            pulumi.String("string"),
				TruststoreUri:                    pulumi.String("string"),
			},
		},
		SoftwareConfig: &dataproc.SoftwareConfigArgs{
			ImageVersion: pulumi.String("string"),
			OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
				dataprocv1beta2.SoftwareConfigOptionalComponentsItemComponentUnspecified,
			},
			Properties: pulumi.StringMap{
				"string": pulumi.String("string"),
			},
		},
		TempBucket: pulumi.String("string"),
		WorkerConfig: &dataproc.InstanceGroupConfigArgs{
			Accelerators: dataproc.AcceleratorConfigArray{
				&dataproc.AcceleratorConfigArgs{
					AcceleratorCount:   pulumi.Int(0),
					AcceleratorTypeUri: pulumi.String("string"),
				},
			},
			DiskConfig: &dataproc.DiskConfigArgs{
				BootDiskSizeGb: pulumi.Int(0),
				BootDiskType:   pulumi.String("string"),
				NumLocalSsds:   pulumi.Int(0),
			},
			ImageUri:       pulumi.String("string"),
			MachineTypeUri: pulumi.String("string"),
			MinCpuPlatform: pulumi.String("string"),
			NumInstances:   pulumi.Int(0),
			Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
		},
	},
	Region: pulumi.String("string"),
	Labels: pulumi.StringMap{
		"string": pulumi.String("string"),
	},
	Project:   pulumi.String("string"),
	RequestId: pulumi.String("string"),
})
var exampleclusterResourceResourceFromDataprocv1beta2 = new Cluster("exampleclusterResourceResourceFromDataprocv1beta2", ClusterArgs.builder()
    .clusterName("string")
    .config(ClusterConfigArgs.builder()
        .autoscalingConfig(AutoscalingConfigArgs.builder()
            .policyUri("string")
            .build())
        .configBucket("string")
        .encryptionConfig(EncryptionConfigArgs.builder()
            .gcePdKmsKeyName("string")
            .build())
        .endpointConfig(EndpointConfigArgs.builder()
            .enableHttpPortAccess(false)
            .build())
        .gceClusterConfig(GceClusterConfigArgs.builder()
            .internalIpOnly(false)
            .metadata(Map.of("string", "string"))
            .networkUri("string")
            .nodeGroupAffinity(NodeGroupAffinityArgs.builder()
                .nodeGroupUri("string")
                .build())
            .privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
            .reservationAffinity(ReservationAffinityArgs.builder()
                .consumeReservationType("TYPE_UNSPECIFIED")
                .key("string")
                .values("string")
                .build())
            .serviceAccount("string")
            .serviceAccountScopes("string")
            .shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
                .enableIntegrityMonitoring(false)
                .enableSecureBoot(false)
                .enableVtpm(false)
                .build())
            .subnetworkUri("string")
            .tags("string")
            .zoneUri("string")
            .build())
        .gkeClusterConfig(GkeClusterConfigArgs.builder()
            .namespacedGkeDeploymentTarget(NamespacedGkeDeploymentTargetArgs.builder()
                .clusterNamespace("string")
                .targetGkeCluster("string")
                .build())
            .build())
        .initializationActions(NodeInitializationActionArgs.builder()
            .executableFile("string")
            .executionTimeout("string")
            .build())
        .lifecycleConfig(LifecycleConfigArgs.builder()
            .autoDeleteTime("string")
            .autoDeleteTtl("string")
            .idleDeleteTtl("string")
            .build())
        .masterConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .build())
        .metastoreConfig(MetastoreConfigArgs.builder()
            .dataprocMetastoreService("string")
            .build())
        .secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .build())
        .securityConfig(SecurityConfigArgs.builder()
            .kerberosConfig(KerberosConfigArgs.builder()
                .crossRealmTrustAdminServer("string")
                .crossRealmTrustKdc("string")
                .crossRealmTrustRealm("string")
                .crossRealmTrustSharedPasswordUri("string")
                .enableKerberos(false)
                .kdcDbKeyUri("string")
                .keyPasswordUri("string")
                .keystorePasswordUri("string")
                .keystoreUri("string")
                .kmsKeyUri("string")
                .realm("string")
                .rootPrincipalPasswordUri("string")
                .tgtLifetimeHours(0)
                .truststorePasswordUri("string")
                .truststoreUri("string")
                .build())
            .build())
        .softwareConfig(SoftwareConfigArgs.builder()
            .imageVersion("string")
            .optionalComponents("COMPONENT_UNSPECIFIED")
            .properties(Map.of("string", "string"))
            .build())
        .tempBucket("string")
        .workerConfig(InstanceGroupConfigArgs.builder()
            .accelerators(AcceleratorConfigArgs.builder()
                .acceleratorCount(0)
                .acceleratorTypeUri("string")
                .build())
            .diskConfig(DiskConfigArgs.builder()
                .bootDiskSizeGb(0)
                .bootDiskType("string")
                .numLocalSsds(0)
                .build())
            .imageUri("string")
            .machineTypeUri("string")
            .minCpuPlatform("string")
            .numInstances(0)
            .preemptibility("PREEMPTIBILITY_UNSPECIFIED")
            .build())
        .build())
    .region("string")
    .labels(Map.of("string", "string"))
    .project("string")
    .requestId("string")
    .build());
examplecluster_resource_resource_from_dataprocv1beta2 = google_native.dataproc.v1beta2.Cluster("exampleclusterResourceResourceFromDataprocv1beta2",
    cluster_name="string",
    config={
        "autoscaling_config": {
            "policy_uri": "string",
        },
        "config_bucket": "string",
        "encryption_config": {
            "gce_pd_kms_key_name": "string",
        },
        "endpoint_config": {
            "enable_http_port_access": False,
        },
        "gce_cluster_config": {
            "internal_ip_only": False,
            "metadata": {
                "string": "string",
            },
            "network_uri": "string",
            "node_group_affinity": {
                "node_group_uri": "string",
            },
            "private_ipv6_google_access": google_native.dataproc.v1beta2.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
            "reservation_affinity": {
                "consume_reservation_type": google_native.dataproc.v1beta2.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
                "key": "string",
                "values": ["string"],
            },
            "service_account": "string",
            "service_account_scopes": ["string"],
            "shielded_instance_config": {
                "enable_integrity_monitoring": False,
                "enable_secure_boot": False,
                "enable_vtpm": False,
            },
            "subnetwork_uri": "string",
            "tags": ["string"],
            "zone_uri": "string",
        },
        "gke_cluster_config": {
            "namespaced_gke_deployment_target": {
                "cluster_namespace": "string",
                "target_gke_cluster": "string",
            },
        },
        "initialization_actions": [{
            "executable_file": "string",
            "execution_timeout": "string",
        }],
        "lifecycle_config": {
            "auto_delete_time": "string",
            "auto_delete_ttl": "string",
            "idle_delete_ttl": "string",
        },
        "master_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
        },
        "metastore_config": {
            "dataproc_metastore_service": "string",
        },
        "secondary_worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
        },
        "security_config": {
            "kerberos_config": {
                "cross_realm_trust_admin_server": "string",
                "cross_realm_trust_kdc": "string",
                "cross_realm_trust_realm": "string",
                "cross_realm_trust_shared_password_uri": "string",
                "enable_kerberos": False,
                "kdc_db_key_uri": "string",
                "key_password_uri": "string",
                "keystore_password_uri": "string",
                "keystore_uri": "string",
                "kms_key_uri": "string",
                "realm": "string",
                "root_principal_password_uri": "string",
                "tgt_lifetime_hours": 0,
                "truststore_password_uri": "string",
                "truststore_uri": "string",
            },
        },
        "software_config": {
            "image_version": "string",
            "optional_components": [google_native.dataproc.v1beta2.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
            "properties": {
                "string": "string",
            },
        },
        "temp_bucket": "string",
        "worker_config": {
            "accelerators": [{
                "accelerator_count": 0,
                "accelerator_type_uri": "string",
            }],
            "disk_config": {
                "boot_disk_size_gb": 0,
                "boot_disk_type": "string",
                "num_local_ssds": 0,
            },
            "image_uri": "string",
            "machine_type_uri": "string",
            "min_cpu_platform": "string",
            "num_instances": 0,
            "preemptibility": google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
        },
    },
    region="string",
    labels={
        "string": "string",
    },
    project="string",
    request_id="string")
const exampleclusterResourceResourceFromDataprocv1beta2 = new google_native.dataproc.v1beta2.Cluster("exampleclusterResourceResourceFromDataprocv1beta2", {
    clusterName: "string",
    config: {
        autoscalingConfig: {
            policyUri: "string",
        },
        configBucket: "string",
        encryptionConfig: {
            gcePdKmsKeyName: "string",
        },
        endpointConfig: {
            enableHttpPortAccess: false,
        },
        gceClusterConfig: {
            internalIpOnly: false,
            metadata: {
                string: "string",
            },
            networkUri: "string",
            nodeGroupAffinity: {
                nodeGroupUri: "string",
            },
            privateIpv6GoogleAccess: google_native.dataproc.v1beta2.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
            reservationAffinity: {
                consumeReservationType: google_native.dataproc.v1beta2.ReservationAffinityConsumeReservationType.TypeUnspecified,
                key: "string",
                values: ["string"],
            },
            serviceAccount: "string",
            serviceAccountScopes: ["string"],
            shieldedInstanceConfig: {
                enableIntegrityMonitoring: false,
                enableSecureBoot: false,
                enableVtpm: false,
            },
            subnetworkUri: "string",
            tags: ["string"],
            zoneUri: "string",
        },
        gkeClusterConfig: {
            namespacedGkeDeploymentTarget: {
                clusterNamespace: "string",
                targetGkeCluster: "string",
            },
        },
        initializationActions: [{
            executableFile: "string",
            executionTimeout: "string",
        }],
        lifecycleConfig: {
            autoDeleteTime: "string",
            autoDeleteTtl: "string",
            idleDeleteTtl: "string",
        },
        masterConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            machineTypeUri: "string",
            minCpuPlatform: "string",
            numInstances: 0,
            preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
        metastoreConfig: {
            dataprocMetastoreService: "string",
        },
        secondaryWorkerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            machineTypeUri: "string",
            minCpuPlatform: "string",
            numInstances: 0,
            preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
        securityConfig: {
            kerberosConfig: {
                crossRealmTrustAdminServer: "string",
                crossRealmTrustKdc: "string",
                crossRealmTrustRealm: "string",
                crossRealmTrustSharedPasswordUri: "string",
                enableKerberos: false,
                kdcDbKeyUri: "string",
                keyPasswordUri: "string",
                keystorePasswordUri: "string",
                keystoreUri: "string",
                kmsKeyUri: "string",
                realm: "string",
                rootPrincipalPasswordUri: "string",
                tgtLifetimeHours: 0,
                truststorePasswordUri: "string",
                truststoreUri: "string",
            },
        },
        softwareConfig: {
            imageVersion: "string",
            optionalComponents: [google_native.dataproc.v1beta2.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
            properties: {
                string: "string",
            },
        },
        tempBucket: "string",
        workerConfig: {
            accelerators: [{
                acceleratorCount: 0,
                acceleratorTypeUri: "string",
            }],
            diskConfig: {
                bootDiskSizeGb: 0,
                bootDiskType: "string",
                numLocalSsds: 0,
            },
            imageUri: "string",
            machineTypeUri: "string",
            minCpuPlatform: "string",
            numInstances: 0,
            preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
        },
    },
    region: "string",
    labels: {
        string: "string",
    },
    project: "string",
    requestId: "string",
});
type: google-native:dataproc/v1beta2:Cluster
properties:
    clusterName: string
    config:
        autoscalingConfig:
            policyUri: string
        configBucket: string
        encryptionConfig:
            gcePdKmsKeyName: string
        endpointConfig:
            enableHttpPortAccess: false
        gceClusterConfig:
            internalIpOnly: false
            metadata:
                string: string
            networkUri: string
            nodeGroupAffinity:
                nodeGroupUri: string
            privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
            reservationAffinity:
                consumeReservationType: TYPE_UNSPECIFIED
                key: string
                values:
                    - string
            serviceAccount: string
            serviceAccountScopes:
                - string
            shieldedInstanceConfig:
                enableIntegrityMonitoring: false
                enableSecureBoot: false
                enableVtpm: false
            subnetworkUri: string
            tags:
                - string
            zoneUri: string
        gkeClusterConfig:
            namespacedGkeDeploymentTarget:
                clusterNamespace: string
                targetGkeCluster: string
        initializationActions:
            - executableFile: string
              executionTimeout: string
        lifecycleConfig:
            autoDeleteTime: string
            autoDeleteTtl: string
            idleDeleteTtl: string
        masterConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                numLocalSsds: 0
            imageUri: string
            machineTypeUri: string
            minCpuPlatform: string
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
        metastoreConfig:
            dataprocMetastoreService: string
        secondaryWorkerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                numLocalSsds: 0
            imageUri: string
            machineTypeUri: string
            minCpuPlatform: string
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
        securityConfig:
            kerberosConfig:
                crossRealmTrustAdminServer: string
                crossRealmTrustKdc: string
                crossRealmTrustRealm: string
                crossRealmTrustSharedPasswordUri: string
                enableKerberos: false
                kdcDbKeyUri: string
                keyPasswordUri: string
                keystorePasswordUri: string
                keystoreUri: string
                kmsKeyUri: string
                realm: string
                rootPrincipalPasswordUri: string
                tgtLifetimeHours: 0
                truststorePasswordUri: string
                truststoreUri: string
        softwareConfig:
            imageVersion: string
            optionalComponents:
                - COMPONENT_UNSPECIFIED
            properties:
                string: string
        tempBucket: string
        workerConfig:
            accelerators:
                - acceleratorCount: 0
                  acceleratorTypeUri: string
            diskConfig:
                bootDiskSizeGb: 0
                bootDiskType: string
                numLocalSsds: 0
            imageUri: string
            machineTypeUri: string
            minCpuPlatform: string
            numInstances: 0
            preemptibility: PREEMPTIBILITY_UNSPECIFIED
    labels:
        string: string
    project: string
    region: string
    requestId: string

Cluster Resource Properties

To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

Inputs

In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
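
For instance, the config input can be built either way; the dictionary keys mirror the snake_case property names used in the Python example above, and the project and policy names here are placeholders.

import pulumi_google_native as google_native

# Argument-class form:
config_as_args = google_native.dataproc.v1beta2.ClusterConfigArgs(
    autoscaling_config=google_native.dataproc.v1beta2.AutoscalingConfigArgs(
        policy_uri="projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
    ),
)

# Equivalent dictionary-literal form with the same snake_case keys:
config_as_dict = {
    "autoscaling_config": {
        "policy_uri": "projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
    },
}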

The Cluster resource accepts the following input properties:

ClusterName This property is required. string
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
Config This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfig
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
Region
This property is required.
Changes to this property will trigger replacement.
string
Labels Dictionary<string, string>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
Project string
The Google Cloud Platform project ID that the cluster belongs to.
RequestId string
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
ClusterName This property is required. string
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
Config This property is required. ClusterConfigArgs
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
Region
This property is required.
Changes to this property will trigger replacement.
string
Labels map[string]string
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
Project string
The Google Cloud Platform project ID that the cluster belongs to.
RequestId string
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
clusterName This property is required. String
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
config This property is required. ClusterConfig
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
region
This property is required.
Changes to this property will trigger replacement.
String
labels Map<String,String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project String
The Google Cloud Platform project ID that the cluster belongs to.
requestId String
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
clusterName This property is required. string
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
config This property is required. ClusterConfig
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
region
This property is required.
Changes to this property will trigger replacement.
string
labels {[key: string]: string}
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project string
The Google Cloud Platform project ID that the cluster belongs to.
requestId string
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
cluster_name This property is required. str
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
config This property is required. ClusterConfigArgs
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
region
This property is required.
Changes to this property will trigger replacement.
str
labels Mapping[str, str]
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project str
The Google Cloud Platform project ID that the cluster belongs to.
request_id str
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
clusterName This property is required. String
The cluster name. Cluster names within a project must be unique. Names of deleted clusters can be reused.
config This property is required. Property Map
The cluster config. Note that Dataproc may set default values, and values may change when clusters are updated.
region
This property is required.
Changes to this property will trigger replacement.
String
labels Map<String>
Optional. The labels to associate with this cluster. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a cluster.
project String
The Google Cloud Platform project ID that the cluster belongs to.
requestId String
Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1beta2#google.cloud.dataproc.v1beta2.CreateClusterRequest)s with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned. It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
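
To illustrate the optional inputs, here is a hedged Python sketch that sets labels and follows the request_id recommendation by generating a UUID. The project, region, and label values are placeholders, and the config is left to service defaults for brevity.

import uuid
import pulumi_google_native as google_native

# Labels follow the RFC 1035 key/value rules above; request_id is a UUID so
# that a retried create request with the same id is ignored by the service.
cluster = google_native.dataproc.v1beta2.Cluster("labelled-cluster",
    cluster_name="labelled-cluster",
    project="my-project",
    region="us-central1",
    config=google_native.dataproc.v1beta2.ClusterConfigArgs(),
    labels={
        "env": "dev",
        "team": "data",
    },
    request_id=str(uuid.uuid4()))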

Outputs

All input properties are implicitly available as output properties. Additionally, the Cluster resource produces the following output properties:

ClusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
Id string
The provider-assigned unique ID for this managed resource.
Metrics Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Status Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse
Cluster status.
StatusHistory List<Pulumi.GoogleNative.Dataproc.V1Beta2.Outputs.ClusterStatusResponse>
The previous cluster status.
ClusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
Id string
The provider-assigned unique ID for this managed resource.
Metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
Status ClusterStatusResponse
Cluster status.
StatusHistory []ClusterStatusResponse
The previous cluster status.
clusterUuid String
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id String
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
statusHistory List<ClusterStatusResponse>
The previous cluster status.
clusterUuid string
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id string
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
statusHistory ClusterStatusResponse[]
The previous cluster status.
cluster_uuid str
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id str
The provider-assigned unique ID for this managed resource.
metrics ClusterMetricsResponse
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status ClusterStatusResponse
Cluster status.
status_history Sequence[ClusterStatusResponse]
The previous cluster status.
clusterUuid String
A cluster UUID (Universally Unique Identifier). Dataproc generates this value when it creates the cluster.
id String
The provider-assigned unique ID for this managed resource.
metrics Property Map
Contains cluster daemon metrics such as HDFS and YARN stats. Beta Feature: This report is available for testing purposes only. It may be changed before final release.
status Property Map
Cluster status.
statusHistory List<Property Map>
The previous cluster status.
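
A short Python sketch of consuming these outputs; the cluster declared here is a minimal placeholder, and the state field read below comes from ClusterStatusResponse.

import pulumi
import pulumi_google_native as google_native

# Minimal placeholder cluster, only to demonstrate reading outputs.
cluster = google_native.dataproc.v1beta2.Cluster("example-cluster",
    cluster_name="example-cluster",
    region="us-central1",
    config=google_native.dataproc.v1beta2.ClusterConfigArgs())

# Output properties are Pulumi Outputs; nested response fields are read via apply().
pulumi.export("cluster_uuid", cluster.cluster_uuid)
pulumi.export("cluster_state", cluster.status.apply(lambda s: s.state))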

Supporting Types

AcceleratorConfig
, AcceleratorConfigArgs

AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorCount int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount Integer
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
accelerator_count int
The number of the accelerator cards of this type exposed to this instance.
accelerator_type_uri str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount Number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
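
A small Python sketch of this type in use, intended to be passed later as a cluster's worker_config; the machine type and instance counts are illustrative.

import pulumi_google_native as google_native

# Sketch of a worker group with one GPU per node. Using only the short
# accelerator type name ("nvidia-tesla-k80") relies on Dataproc Auto Zone
# Placement, per the Auto Zone Exception described above.
gpu_workers = google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
    num_instances=2,
    machine_type_uri="n1-standard-8",
    accelerators=[
        google_native.dataproc.v1beta2.AcceleratorConfigArgs(
            accelerator_count=1,
            accelerator_type_uri="nvidia-tesla-k80",
        ),
    ],
)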

AcceleratorConfigResponse
, AcceleratorConfigResponseArgs

AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AcceleratorCount This property is required. int
The number of the accelerator cards of this type exposed to this instance.
AcceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Integer
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. string
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
accelerator_count This property is required. int
The number of the accelerator cards of this type exposed to this instance.
accelerator_type_uri This property is required. str
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
acceleratorCount This property is required. Number
The number of the accelerator cards of this type exposed to this instance.
acceleratorTypeUri This property is required. String
Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80, nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.

AutoscalingConfig, AutoscalingConfigArgs

PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
PolicyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policy_uri str
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

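A short TypeScript sketch of setting AutoscalingConfig.policyUri using the resource-name form shown above; the project, region, and policy id are placeholders, and the policy is assumed to already exist in the same project and Dataproc region.

import * as google_native from "@pulumi/google-native";

// Placeholder project, region, and policy id.
const project = "my-project";
const region = "us-central1";

const cluster = new google_native.dataproc.v1beta2.Cluster("autoscaled-cluster", {
    region: region,
    clusterName: "autoscaled-cluster",
    config: {
        autoscalingConfig: {
            // Resource name of an existing autoscaling policy in the same project/region.
            policyUri: `projects/${project}/locations/${region}/autoscalingPolicies/my-policy`,
        },
    },
});
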
AutoscalingConfigResponse, AutoscalingConfigResponseArgs

PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
PolicyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. string
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policy_uri This property is required. str
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
policyUri This property is required. String
Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id], projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.

ClusterConfig, ClusterConfigArgs

AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfig
Optional. Encryption settings for the cluster.
EndpointConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfig
Optional. Port/endpoint configuration for this cluster
GceClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfig
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationAction>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfig
Optional. The config setting for auto delete cluster schedule.
MasterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfig
Optional. Metastore configuration.
SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfig
Optional. Security related configuration.
SoftwareConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfig
Optional. The config settings for software inside the cluster.
TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
Optional. The Compute Engine config settings for worker instances in a cluster.
AutoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
EndpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster
GceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig GkeClusterConfig
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions []NodeInitializationAction
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig LifecycleConfig
Optional. The config setting for auto delete cluster schedule.
MasterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig MetastoreConfig
Optional. Metastore configuration.
SecondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig SecurityConfig
Optional. Security related configuration.
SoftwareConfig SoftwareConfig
Optional. The config settings for software inside the cluster.
TempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
endpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster
gceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig GkeClusterConfig
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions List<NodeInitializationAction>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig LifecycleConfig
Optional. The config setting for auto delete cluster schedule.
masterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig MetastoreConfig
Optional. Metastore configuration.
secondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig SecurityConfig
Optional. Security related configuration.
softwareConfig SoftwareConfig
Optional. The config settings for software inside the cluster.
tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig EncryptionConfig
Optional. Encryption settings for the cluster.
endpointConfig EndpointConfig
Optional. Port/endpoint configuration for this cluster
gceClusterConfig GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig GkeClusterConfig
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions NodeInitializationAction[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig LifecycleConfig
Optional. The config setting for auto delete cluster schedule.
masterConfig InstanceGroupConfig
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig MetastoreConfig
Optional. Metastore configuration.
secondaryWorkerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig SecurityConfig
Optional. Security related configuration.
softwareConfig SoftwareConfig
Optional. The config settings for software inside the cluster.
tempBucket string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig InstanceGroupConfig
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscaling_config AutoscalingConfig
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
config_bucket str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryption_config EncryptionConfig
Optional. Encryption settings for the cluster.
endpoint_config EndpointConfig
Optional. Port/endpoint configuration for this cluster
gce_cluster_config GceClusterConfig
Optional. The shared Compute Engine config settings for all instances in a cluster.
gke_cluster_config GkeClusterConfig
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initialization_actions Sequence[NodeInitializationAction]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycle_config LifecycleConfig
Optional. The config setting for auto delete cluster schedule.
master_config InstanceGroupConfig
Optional. The Compute Engine config settings for the master instance in a cluster.
metastore_config MetastoreConfig
Optional. Metastore configuration.
secondary_worker_config InstanceGroupConfig
Optional. The Compute Engine config settings for additional worker instances in a cluster.
security_config SecurityConfig
Optional. Security related configuration.
software_config SoftwareConfig
Optional. The config settings for software inside the cluster.
temp_bucket str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
worker_config InstanceGroupConfig
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig Property Map
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig Property Map
Optional. Encryption settings for the cluster.
endpointConfig Property Map
Optional. Port/endpoint configuration for this cluster
gceClusterConfig Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig Property Map
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig Property Map
Optional. The config setting for auto delete cluster schedule.
masterConfig Property Map
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig Property Map
Optional. Metastore configuration.
secondaryWorkerConfig Property Map
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig Property Map
Optional. Security related configuration.
softwareConfig Property Map
Optional. The config settings for software inside the cluster.
tempBucket String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig Property Map
Optional. The Compute Engine config settings for worker instances in a cluster.

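The TypeScript sketch below combines several of the optional ClusterConfig blocks listed above. The nested field names (zoneUri, imageVersion, executableFile, executionTimeout, numInstances, machineTypeUri) come from the GceClusterConfig, SoftwareConfig, NodeInitializationAction, and InstanceGroupConfig types documented later on this page; bucket names, machine types, image version, and the script path are placeholders.

import * as google_native from "@pulumi/google-native";

// All bucket names, machine types, versions, and paths below are placeholders.
const cluster = new google_native.dataproc.v1beta2.Cluster("example-cluster", {
    region: "us-central1",
    clusterName: "example-cluster",
    config: {
        configBucket: "my-staging-bucket",   // bucket name, not a gs:// URI
        tempBucket: "my-temp-bucket",
        gceClusterConfig: {
            zoneUri: "us-central1-a",
        },
        masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
        softwareConfig: { imageVersion: "1.5-debian10" },
        initializationActions: [{
            executableFile: "gs://my-bucket/scripts/setup.sh",
            executionTimeout: "600s",
        }],
    },
});
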
ClusterConfigResponse, ClusterConfigResponseArgs

AutoscalingConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
GceClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
MasterConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse
Optional. Security related configuration.
SoftwareConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
AutoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
ConfigBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
EncryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
EndpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
GceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
GkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
InitializationActions This property is required. []NodeInitializationActionResponse
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
LifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
MasterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
MetastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
SecondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
SecurityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
SoftwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
TempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
WorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<NodeInitializationActionResponse>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. string
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpointConfig This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. NodeInitializationActionResponse[]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. SecurityConfigResponse
Optional. Security related configuration.
softwareConfig This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
tempBucket This property is required. string
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscaling_config This property is required. AutoscalingConfigResponse
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
config_bucket This property is required. str
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryption_config This property is required. EncryptionConfigResponse
Optional. Encryption settings for the cluster.
endpoint_config This property is required. EndpointConfigResponse
Optional. Port/endpoint configuration for this cluster
gce_cluster_config This property is required. GceClusterConfigResponse
Optional. The shared Compute Engine config settings for all instances in a cluster.
gke_cluster_config This property is required. GkeClusterConfigResponse
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initialization_actions This property is required. Sequence[NodeInitializationActionResponse]
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycle_config This property is required. LifecycleConfigResponse
Optional. The config setting for auto delete cluster schedule.
master_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for the master instance in a cluster.
metastore_config This property is required. MetastoreConfigResponse
Optional. Metastore configuration.
secondary_worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for additional worker instances in a cluster.
security_config This property is required. SecurityConfigResponse
Optional. Security related configuration.
software_config This property is required. SoftwareConfigResponse
Optional. The config settings for software inside the cluster.
temp_bucket This property is required. str
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
worker_config This property is required. InstanceGroupConfigResponse
Optional. The Compute Engine config settings for worker instances in a cluster.
autoscalingConfig This property is required. Property Map
Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
configBucket This property is required. String
Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
encryptionConfig This property is required. Property Map
Optional. Encryption settings for the cluster.
endpointConfig This property is required. Property Map
Optional. Port/endpoint configuration for this cluster
gceClusterConfig This property is required. Property Map
Optional. The shared Compute Engine config settings for all instances in a cluster.
gkeClusterConfig This property is required. Property Map
Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
initializationActions This property is required. List<Property Map>
Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role); if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
lifecycleConfig This property is required. Property Map
Optional. The config setting for auto delete cluster schedule.
masterConfig This property is required. Property Map
Optional. The Compute Engine config settings for the master instance in a cluster.
metastoreConfig This property is required. Property Map
Optional. Metastore configuration.
secondaryWorkerConfig This property is required. Property Map
Optional. The Compute Engine config settings for additional worker instances in a cluster.
securityConfig This property is required. Property Map
Optional. Security related configuration.
softwareConfig This property is required. Property Map
Optional. The config settings for software inside the cluster.
tempBucket This property is required. String
Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
workerConfig This property is required. Property Map
Optional. The Compute Engine config settings for worker instances in a cluster.

ClusterMetricsResponse, ClusterMetricsResponseArgs

HdfsMetrics This property is required. Dictionary<string, string>
The HDFS metrics.
YarnMetrics This property is required. Dictionary<string, string>
The YARN metrics.
HdfsMetrics This property is required. map[string]string
The HDFS metrics.
YarnMetrics This property is required. map[string]string
The YARN metrics.
hdfsMetrics This property is required. Map<String,String>
The HDFS metrics.
yarnMetrics This property is required. Map<String,String>
The YARN metrics.
hdfsMetrics This property is required. {[key: string]: string}
The HDFS metrics.
yarnMetrics This property is required. {[key: string]: string}
The YARN metrics.
hdfs_metrics This property is required. Mapping[str, str]
The HDFS metrics.
yarn_metrics This property is required. Mapping[str, str]
The YARN metrics.
hdfsMetrics This property is required. Map<String>
The HDFS metrics.
yarnMetrics This property is required. Map<String>
The YARN metrics.

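ClusterMetricsResponse is an output-only type. A hedged TypeScript sketch, assuming the Cluster resource surfaces it through a metrics output (mirroring the API's metrics field); the cluster arguments are placeholders.

import * as google_native from "@pulumi/google-native";

// Placeholder cluster; config is left empty for brevity (see ClusterConfig above).
const cluster = new google_native.dataproc.v1beta2.Cluster("metrics-example", {
    region: "us-central1",
    clusterName: "metrics-example",
    config: {},
});

// Export the HDFS and YARN metric maps once the cluster reports them.
export const hdfsMetrics = cluster.metrics.apply(m => m.hdfsMetrics);
export const yarnMetrics = cluster.metrics.apply(m => m.yarnMetrics);
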
ClusterStatusResponse, ClusterStatusResponseArgs

Detail This property is required. string
Optional details of cluster's state.
State This property is required. string
The cluster's state.
StateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
Substate This property is required. string
Additional state information that includes status reported by the agent.
Detail This property is required. string
Optional details of cluster's state.
State This property is required. string
The cluster's state.
StateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
Substate This property is required. string
Additional state information that includes status reported by the agent.
detail This property is required. String
Optional details of cluster's state.
state This property is required. String
The cluster's state.
stateStartTime This property is required. String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. String
Additional state information that includes status reported by the agent.
detail This property is required. string
Optional details of cluster's state.
state This property is required. string
The cluster's state.
stateStartTime This property is required. string
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. string
Additional state information that includes status reported by the agent.
detail This property is required. str
Optional details of cluster's state.
state This property is required. str
The cluster's state.
state_start_time This property is required. str
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. str
Additional state information that includes status reported by the agent.
detail This property is required. String
Optional details of cluster's state.
state This property is required. String
The cluster's state.
stateStartTime This property is required. String
Time when this state was entered (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
substate This property is required. String
Additional state information that includes status reported by the agent.

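ClusterStatusResponse is likewise read from the resource's outputs. A minimal TypeScript sketch, assuming the Cluster resource exposes a status output of this type; the cluster arguments are placeholders.

import * as google_native from "@pulumi/google-native";

// Placeholder cluster; config is left empty for brevity (see ClusterConfig above).
const cluster = new google_native.dataproc.v1beta2.Cluster("status-example", {
    region: "us-central1",
    clusterName: "status-example",
    config: {},
});

// Export the current lifecycle state and the time it was entered.
export const clusterState = cluster.status.apply(s => s.state);
export const stateEnteredAt = cluster.status.apply(s => s.stateStartTime);
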
DiskConfig, DiskConfigArgs

BootDiskSizeGb int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
BootDiskSizeGb int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb Integer
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds Integer
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
boot_disk_size_gb int
Optional. Size in GB of the boot disk (default is 500GB).
boot_disk_type str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
num_local_ssds int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb Number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds Number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
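
As a concrete illustration of DiskConfig, here is a minimal TypeScript sketch that attaches boot-disk settings to the master and worker instance groups. The project, region, and machine types are placeholders; the surrounding masterConfig/workerConfig fields come from the ClusterConfig input documented earlier on this page.

import * as google_native from "@pulumi/google-native";

const diskCluster = new google_native.dataproc.v1beta2.Cluster("disk-config-example", {
    project: "my-project",            // placeholder project ID
    region: "us-central1",            // placeholder region
    clusterName: "disk-config-example",
    config: {
        masterConfig: {
            numInstances: 1,
            machineTypeUri: "n1-standard-4",
            diskConfig: {
                bootDiskType: "pd-ssd",     // default is "pd-standard"
                bootDiskSizeGb: 100,        // default is 500 GB
            },
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
            diskConfig: {
                bootDiskSizeGb: 500,
                numLocalSsds: 1,            // HDFS and shuffle data are spread across attached SSDs
            },
        },
    },
});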

DiskConfigResponse
, DiskConfigResponseArgs

BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
BootDiskSizeGb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
BootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
NumLocalSsds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. Integer
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. Integer
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. string
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
boot_disk_size_gb This property is required. int
Optional. Size in GB of the boot disk (default is 500GB).
boot_disk_type This property is required. str
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
num_local_ssds This property is required. int
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
bootDiskSizeGb This property is required. Number
Optional. Size in GB of the boot disk (default is 500GB).
bootDiskType This property is required. String
Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
numLocalSsds This property is required. Number
Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.

EncryptionConfig
, EncryptionConfigArgs

GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
GcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gce_pd_kms_key_name str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
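
A short TypeScript sketch of EncryptionConfig; the KMS key name is a placeholder and must refer to an existing Cloud KMS key that the Dataproc service agent is allowed to use.

import * as google_native from "@pulumi/google-native";

const encryptedCluster = new google_native.dataproc.v1beta2.Cluster("encrypted-example", {
    project: "my-project",        // placeholder
    region: "us-central1",
    clusterName: "encrypted-example",
    config: {
        encryptionConfig: {
            // Placeholder key name in the form projects/.../locations/.../keyRings/.../cryptoKeys/...
            gcePdKmsKeyName: "projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key",
        },
    },
});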

EncryptionConfigResponse
, EncryptionConfigResponseArgs

GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
GcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. string
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gce_pd_kms_key_name This property is required. str
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
gcePdKmsKeyName This property is required. String
Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.

EndpointConfig
, EndpointConfigArgs

EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
EnableHttpPortAccess bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enable_http_port_access bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
enableHttpPortAccess Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
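
EndpointConfig is the switch that exposes the web interfaces of cluster components from outside the cluster. A minimal TypeScript sketch, with placeholder project and region:

import * as google_native from "@pulumi/google-native";

const gatewayCluster = new google_native.dataproc.v1beta2.Cluster("gateway-example", {
    project: "my-project",       // placeholder
    region: "us-central1",
    clusterName: "gateway-example",
    config: {
        endpointConfig: {
            enableHttpPortAccess: true,   // defaults to false
        },
    },
});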

EndpointConfigResponse
, EndpointConfigResponseArgs

EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. Dictionary<string, string>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
EnableHttpPortAccess This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
HttpPorts This property is required. map[string]string
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String,String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. {[key: string]: string}
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enable_http_port_access This property is required. bool
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
http_ports This property is required. Mapping[str, str]
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
enableHttpPortAccess This property is required. Boolean
Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
httpPorts This property is required. Map<String>
The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
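
httpPorts is populated by the service, so it is read back from the cluster's config output rather than set. A sketch that continues the gatewayCluster example above:

// Map of port descriptions to URLs; only populated when enableHttpPortAccess is true.
export const componentGatewayUrls = gatewayCluster.config.apply(c => c.endpointConfig?.httpPorts);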

GceClusterConfig
, GceClusterConfigArgs

InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata Dictionary<string, string>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
NodeGroupAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess Pulumi.GoogleNative.Dataproc.V1Beta2.GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
ReservationAffinity Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
Tags List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
InternalIpOnly bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata map[string]string
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
NodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
ReservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
Tags []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Map<String,String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata {[key: string]: string}
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservationAffinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internal_ip_only bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Mapping[str, str]
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
network_uri str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
node_group_affinity NodeGroupAffinity
Optional. Node Group Affinity for sole-tenant clusters.
private_ipv6_google_access GceClusterConfigPrivateIpv6GoogleAccess
Optional. The type of IPv6 access for a cluster.
reservation_affinity ReservationAffinity
Optional. Reservation Affinity for consuming Zonal reservation.
service_account str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
service_account_scopes Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shielded_instance_config ShieldedInstanceConfig
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetwork_uri str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zone_uri str
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata Map<String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity Property Map
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"
Optional. The type of IPv6 access for a cluster.
reservationAffinity Property Map
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
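
Putting several of the GceClusterConfig fields above together, here is a minimal TypeScript sketch of an internal-IP-only cluster on a custom subnetwork; the subnetwork URI, service account, and zone are placeholders.

import * as google_native from "@pulumi/google-native";

const privateCluster = new google_native.dataproc.v1beta2.Cluster("private-example", {
    project: "my-project",        // placeholder
    region: "us-central1",
    clusterName: "private-example",
    config: {
        gceClusterConfig: {
            internalIpOnly: true,     // instances get no ephemeral external IPs
            // Mutually exclusive with networkUri; placeholder subnetwork
            subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/sub0",
            serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",   // placeholder
            tags: ["dataproc"],
            zoneUri: "us-central1-f",
        },
    },
});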

GceClusterConfigPrivateIpv6GoogleAccess
, GceClusterConfigPrivateIpv6GoogleAccessArgs

PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
GceClusterConfigPrivateIpv6GoogleAccessInheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
GceClusterConfigPrivateIpv6GoogleAccessOutbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
GceClusterConfigPrivateIpv6GoogleAccessBidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PrivateIpv6GoogleAccessUnspecified
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
InheritFromSubnetwork
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
Outbound
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
Bidirectional
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
INHERIT_FROM_SUBNETWORK
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
OUTBOUND
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
BIDIRECTIONAL
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
"PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
"INHERIT_FROM_SUBNETWORK"
INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
"OUTBOUND"
OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
"BIDIRECTIONAL"
BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
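
In TypeScript the enum values above are accepted as plain string literals (the generated SDK typically also exposes them as named constants), so requesting outbound private IPv6 access looks like this sketch, with placeholder project and region:

import * as google_native from "@pulumi/google-native";

const ipv6Cluster = new google_native.dataproc.v1beta2.Cluster("ipv6-example", {
    project: "my-project",      // placeholder
    region: "us-central1",
    clusterName: "ipv6-example",
    config: {
        gceClusterConfig: {
            privateIpv6GoogleAccess: "OUTBOUND",   // one of the values documented above
        },
    },
});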

GceClusterConfigResponse
, GceClusterConfigResponseArgs

InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. Dictionary<string, string>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
NodeGroupAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. List<string>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
Tags This property is required. List<string>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
InternalIpOnly This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
Metadata This property is required. map[string]string
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
NetworkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
NodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
PrivateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
ReservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
ServiceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
ServiceAccountScopes This property is required. []string
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
ShieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
SubnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
Tags This property is required. []string
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
ZoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String,String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly This property is required. boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. {[key: string]: string}
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. string
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. string
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. string
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. string[]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. string
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags This property is required. string[]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. string
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internal_ip_only This property is required. bool
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Mapping[str, str]
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
network_uri This property is required. str
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
node_group_affinity This property is required. NodeGroupAffinityResponse
Optional. Node Group Affinity for sole-tenant clusters.
private_ipv6_google_access This property is required. str
Optional. The type of IPv6 access for a cluster.
reservation_affinity This property is required. ReservationAffinityResponse
Optional. Reservation Affinity for consuming Zonal reservation.
service_account This property is required. str
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
service_account_scopes This property is required. Sequence[str]
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shielded_instance_config This property is required. ShieldedInstanceConfigResponse
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetwork_uri This property is required. str
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags This property is required. Sequence[str]
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zone_uri This property is required. str
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
internalIpOnly This property is required. Boolean
Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
metadata This property is required. Map<String>
The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
networkUri This property is required. String
Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information). A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
nodeGroupAffinity This property is required. Property Map
Optional. Node Group Affinity for sole-tenant clusters.
privateIpv6GoogleAccess This property is required. String
Optional. The type of IPv6 access for a cluster.
reservationAffinity This property is required. Property Map
Optional. Reservation Affinity for consuming Zonal reservation.
serviceAccount This property is required. String
Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
serviceAccountScopes This property is required. List<String>
Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.write. If no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
shieldedInstanceConfig This property is required. Property Map
Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
subnetworkUri This property is required. String
Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
tags This property is required. List<String>
The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
zoneUri This property is required. String
Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
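The fields above are supplied through the cluster's config.gceClusterConfig input. The following is a minimal, hedged sketch in TypeScript; the project, subnetwork, and service account values are placeholders, and the module path is assumed to follow the provider's usual google_native.dataproc.v1beta2 layout.

import * as google_native from "@pulumi/google-native";

// Sketch only: wires the GceClusterConfig fields documented above into a Cluster.
// All identifiers ("my-project", "sub0", the service account email) are placeholders.
const networkedCluster = new google_native.dataproc.v1beta2.Cluster("example-cluster", {
    region: "us-central1",
    clusterName: "example-cluster",
    config: {
        gceClusterConfig: {
            // subnetworkUri and networkUri are mutually exclusive; a partial URI is used here.
            subnetworkUri: "projects/my-project/regions/us-central1/subnetworks/sub0",
            internalIpOnly: true,                 // instances get internal IP addresses only
            serviceAccount: "dataproc-vm@my-project.iam.gserviceaccount.com",
            serviceAccountScopes: ["https://www.googleapis.com/auth/cloud-platform"],
            tags: ["dataproc"],
            metadata: { "enable-oslogin": "true" },
        },
    },
});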

GkeClusterConfig
, GkeClusterConfigArgs

NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. A target for the deployment.
namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. A target for the deployment.
namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget
Optional. A target for the deployment.
namespacedGkeDeploymentTarget Property Map
Optional. A target for the deployment.

GkeClusterConfigResponse
, GkeClusterConfigResponseArgs

NamespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespaced_gke_deployment_target This property is required. NamespacedGkeDeploymentTargetResponse
Optional. A target for the deployment.
namespacedGkeDeploymentTarget This property is required. Property Map
Optional. A target for the deployment.

InstanceGroupConfig
, InstanceGroupConfigArgs

Accelerators List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfig>
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfig
Optional. Disk option config settings.
ImageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
MachineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
MinCpuPlatform string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility Pulumi.GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
Accelerators []AcceleratorConfig
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig DiskConfig
Optional. Disk option config settings.
ImageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
MachineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
MinCpuPlatform string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators List<AcceleratorConfig>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig DiskConfig
Optional. Disk option config settings.
imageUri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
machineTypeUri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform String
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators AcceleratorConfig[]
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig DiskConfig
Optional. Disk option config settings.
imageUri string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
machineTypeUri string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators Sequence[AcceleratorConfig]
Optional. The Compute Engine accelerator configuration for these instances.
disk_config DiskConfig
Optional. Disk option config settings.
image_uri str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
machine_type_uri str
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
min_cpu_platform str
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
num_instances int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility InstanceGroupConfigPreemptibility
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig Property Map
Optional. Disk option config settings.
imageUri String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
machineTypeUri String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
minCpuPlatform String
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE"
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
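Master, worker, and secondary worker groups each take an InstanceGroupConfig inside the cluster config. The sketch below is illustrative rather than prescriptive: the machine types, disk sizes, and custom image family are placeholder values.

import * as google_native from "@pulumi/google-native";

// Sketch: a single (non-HA) master and two primary workers, using the fields documented above.
const groupedCluster = new google_native.dataproc.v1beta2.Cluster("grouped-cluster", {
    region: "us-central1",
    clusterName: "grouped-cluster",
    config: {
        masterConfig: {
            numInstances: 1,                      // must be 3 for HA clusters
            machineTypeUri: "n1-standard-4",      // short name; the required form with Auto Zone Placement
            diskConfig: { bootDiskSizeGb: 100, bootDiskType: "pd-standard" },
        },
        workerConfig: {
            numInstances: 2,
            machineTypeUri: "n1-standard-4",
            imageUri: "projects/my-project/global/images/family/my-image-family",  // placeholder image family
        },
    },
});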

InstanceGroupConfigPreemptibility
, InstanceGroupConfigPreemptibilityArgs

PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
InstanceGroupConfigPreemptibilityNonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
InstanceGroupConfigPreemptibilityPreemptible
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
PreemptibilityUnspecified
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
NonPreemptible
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
Preemptible
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
PREEMPTIBILITY_UNSPECIFIED
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
NON_PREEMPTIBLE
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
PREEMPTIBLE
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
"PREEMPTIBILITY_UNSPECIFIED"
PREEMPTIBILITY_UNSPECIFIED: Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
"NON_PREEMPTIBLE"
NON_PREEMPTIBLE: Instances are non-preemptible. This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
"PREEMPTIBLE"
PREEMPTIBLE: Instances are preemptible. This option is allowed only for secondary worker groups.
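Because NON_PREEMPTIBLE is the only valid value for master and worker groups, preemptibility is normally set only on the secondary worker group. A short, hedged sketch follows; the string-literal form of the enum is used and the instance count is arbitrary.

import * as google_native from "@pulumi/google-native";

// Sketch: secondary workers opt in to preemptible VMs; other groups keep the default.
const preemptibleCluster = new google_native.dataproc.v1beta2.Cluster("preemptible-cluster", {
    region: "us-central1",
    clusterName: "preemptible-cluster",
    config: {
        secondaryWorkerConfig: {
            numInstances: 4,
            preemptibility: "PREEMPTIBLE",   // allowed only for secondary worker groups
        },
    },
});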

InstanceGroupConfigResponse
, InstanceGroupConfigResponseArgs

Accelerators This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceNames This property is required. List<string>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceReferenceResponse>
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
Accelerators This property is required. []AcceleratorConfigResponse
Optional. The Compute Engine accelerator configuration for these instances.
DiskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
ImageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
InstanceNames This property is required. []string
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
InstanceReferences This property is required. []InstanceReferenceResponse
List of references to Compute Engine instances.
IsPreemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
MachineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
ManagedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
MinCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
NumInstances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
Preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. List<AcceleratorConfigResponse>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<InstanceReferenceResponse>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. Integer
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. AcceleratorConfigResponse[]
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. DiskConfigResponse
Optional. Disk option config settings.
imageUri This property is required. string
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. string[]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. InstanceReferenceResponse[]
List of references to Compute Engine instances.
isPreemptible This property is required. boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. string
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. string
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. string
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. Sequence[AcceleratorConfigResponse]
Optional. The Compute Engine accelerator configuration for these instances.
disk_config This property is required. DiskConfigResponse
Optional. Disk option config settings.
image_uri This property is required. str
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instance_names This property is required. Sequence[str]
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instance_references This property is required. Sequence[InstanceReferenceResponse]
List of references to Compute Engine instances.
is_preemptible This property is required. bool
Specifies that this instance group contains preemptible instances.
machine_type_uri This property is required. str
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managed_group_config This property is required. ManagedGroupConfigResponse
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
min_cpu_platform This property is required. str
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
num_instances This property is required. int
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. str
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
accelerators This property is required. List<Property Map>
Optional. The Compute Engine accelerator configuration for these instances.
diskConfig This property is required. Property Map
Optional. Disk option config settings.
imageUri This property is required. String
Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id. Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]. If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
instanceNames This property is required. List<String>
The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
instanceReferences This property is required. List<Property Map>
List of references to Compute Engine instances.
isPreemptible This property is required. Boolean
Specifies that this instance group contains preemptible instances.
machineTypeUri This property is required. String
Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
managedGroupConfig This property is required. Property Map
The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
minCpuPlatform This property is required. String
Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
numInstances This property is required. Number
Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
preemptibility This property is required. String
Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.

InstanceReferenceResponse
, InstanceReferenceResponseArgs

InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicKey This property is required. string
The public key used for sharing data with this instance.
InstanceId This property is required. string
The unique identifier of the Compute Engine instance.
InstanceName This property is required. string
The user-friendly name of the Compute Engine instance.
PublicKey This property is required. string
The public key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicKey This property is required. String
The public key used for sharing data with this instance.
instanceId This property is required. string
The unique identifier of the Compute Engine instance.
instanceName This property is required. string
The user-friendly name of the Compute Engine instance.
publicKey This property is required. string
The public key used for sharing data with this instance.
instance_id This property is required. str
The unique identifier of the Compute Engine instance.
instance_name This property is required. str
The user-friendly name of the Compute Engine instance.
public_key This property is required. str
The public key used for sharing data with this instance.
instanceId This property is required. String
The unique identifier of the Compute Engine instance.
instanceName This property is required. String
The user-friendly name of the Compute Engine instance.
publicKey This property is required. String
The public key used for sharing data with this instance.

KerberosConfig
, KerberosConfigArgs

CrossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri string
Optional. The uri of the KMS key used to encrypt various sensitive files.
Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
TruststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
CrossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri string
Optional. The uri of the KMS key used to encrypt various sensitive files.
Realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
TruststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri String
Optional. The uri of the KMS key used to encrypt various sensitive files.
realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours Integer
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
truststorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri string
Optional. The uri of the KMS key used to encrypt various sensitive files.
realm string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours number
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
truststorePasswordUri string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
cross_realm_trust_admin_server str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_kdc str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_realm str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
cross_realm_trust_shared_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enable_kerberos bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdc_db_key_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
key_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystore_uri str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kms_key_uri str
Optional. The uri of the KMS key used to encrypt various sensitive files.
realm str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
root_principal_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgt_lifetime_hours int
Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
truststore_password_uri str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststore_uri str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri String
Optional. The uri of the KMS key used to encrypt various sensitive files.
realm String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours Number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
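These Kerberos fields are supplied through the cluster's securityConfig.kerberosConfig input. The following TypeScript sketch shows a minimal shape; the project, KMS key, and Cloud Storage URIs are placeholder assumptions, not values taken from this page.

import * as google_native from "@pulumi/google-native";

// Minimal Kerberized-cluster sketch; every name and URI below is a placeholder.
const kerberizedCluster = new google_native.dataproc.v1beta2.Cluster("kerberized-cluster", {
    region: "us-central1",
    config: {
        securityConfig: {
            kerberosConfig: {
                enableKerberos: true,
                // KMS key used to decrypt the encrypted password file referenced below.
                kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                // KMS-encrypted root principal password stored in Cloud Storage.
                rootPrincipalPasswordUri: "gs://my-secrets/root-principal-password.encrypted",
                tgtLifetimeHours: 10,
            },
        },
    },
});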

KerberosConfigResponse
, KerberosConfigResponseArgs

CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
CrossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
CrossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
CrossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
EnableKerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
KdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
KeyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
KeystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
KeystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
KmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
Realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
RootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
TgtLifetimeHours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
TruststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
TruststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Integer
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. string
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. string
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. string
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. string
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. string
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. string
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. string
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. string
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
cross_realm_trust_admin_server This property is required. str
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_kdc This property is required. str
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
cross_realm_trust_realm This property is required. str
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
cross_realm_trust_shared_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enable_kerberos This property is required. bool
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdc_db_key_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
key_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystore_uri This property is required. str
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kms_key_uri This property is required. str
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. str
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
root_principal_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgt_lifetime_hours This property is required. int
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststore_password_uri This property is required. str
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststore_uri This property is required. str
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
crossRealmTrustAdminServer This property is required. String
Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustKdc This property is required. String
Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
crossRealmTrustRealm This property is required. String
Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
crossRealmTrustSharedPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
enableKerberos This property is required. Boolean
Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
kdcDbKeyUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
keyPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
keystorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
keystoreUri This property is required. String
Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
kmsKeyUri This property is required. String
Optional. The URI of the KMS key used to encrypt various sensitive files.
realm This property is required. String
Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
rootPrincipalPasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
tgtLifetimeHours This property is required. Number
Optional. The lifetime of the ticket-granting ticket, in hours. If not specified, or if the user specifies 0, the default value of 10 will be used.
truststorePasswordUri This property is required. String
Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
truststoreUri This property is required. String
Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.

LifecycleConfig
, LifecycleConfigArgs

AutoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_time str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_ttl str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_delete_ttl str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
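The following TypeScript sketch shows how a lifecycleConfig might be set, using Duration-style string values; the resource name and durations are illustrative assumptions.

import * as google_native from "@pulumi/google-native";

// Sketch: a scratch cluster that deletes itself when idle or after a hard lifetime limit.
const scratchCluster = new google_native.dataproc.v1beta2.Cluster("scratch-cluster", {
    region: "us-central1",
    config: {
        lifecycleConfig: {
            idleDeleteTtl: "1800s",   // delete after 30 minutes with no running jobs
            autoDeleteTtl: "28800s",  // delete no later than 8 hours after creation
        },
    },
});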

LifecycleConfigResponse
, LifecycleConfigResponseArgs

AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
AutoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
IdleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. string
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. string
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. string
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. string
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_time This property is required. str
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
auto_delete_ttl This property is required. str
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_delete_ttl This property is required. str
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idle_start_time This property is required. str
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTime This property is required. String
Optional. The time when the cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
autoDeleteTtl This property is required. String
Optional. The lifetime duration of the cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleDeleteTtl This property is required. String
Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
idleStartTime This property is required. String
The time when the cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
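Response types such as this one describe values reported back by the service rather than inputs you set. A brief TypeScript sketch of reading one of these outputs, assuming the scratchCluster resource from the previous sketch:

// Export the time at which the service reported the cluster became idle.
export const idleSince = scratchCluster.config.apply(c => c.lifecycleConfig?.idleStartTime);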

ManagedGroupConfigResponse
, ManagedGroupConfigResponseArgs

InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
InstanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
InstanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. string
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. string
The name of the Instance Template used for the Managed Instance Group.
instance_group_manager_name This property is required. str
The name of the Instance Group Manager for this group.
instance_template_name This property is required. str
The name of the Instance Template used for the Managed Instance Group.
instanceGroupManagerName This property is required. String
The name of the Instance Group Manager for this group.
instanceTemplateName This property is required. String
The name of the Instance Template used for the Managed Instance Group.

MetastoreConfig
, MetastoreConfigArgs

DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataproc_metastore_service This property is required. str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
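The following TypeScript sketch attaches a cluster to an existing Dataproc Metastore service; the service path is a placeholder in the format shown above.

import * as google_native from "@pulumi/google-native";

// Sketch: use an existing Dataproc Metastore service as the cluster's Hive metastore.
const metastoreCluster = new google_native.dataproc.v1beta2.Cluster("metastore-cluster", {
    region: "us-central1",
    config: {
        metastoreConfig: {
            dataprocMetastoreService: "projects/my-project/locations/us-central1/services/my-metastore",
        },
    },
});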

MetastoreConfigResponse
, MetastoreConfigResponseArgs

DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
DataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. string
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataproc_metastore_service This property is required. str
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
dataprocMetastoreService This property is required. String
Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]

NamespacedGkeDeploymentTarget
, NamespacedGkeDeploymentTargetArgs

ClusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
ClusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace string
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
cluster_namespace str
Optional. A namespace within the GKE cluster to deploy into.
target_gke_cluster str
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
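This target is typically referenced from the cluster's gkeClusterConfig input. A TypeScript sketch follows; the project, location, GKE cluster ID, and namespace are placeholder assumptions.

import * as google_native from "@pulumi/google-native";

// Sketch: deploy the Dataproc cluster into a namespace of an existing GKE cluster.
const gkeBackedCluster = new google_native.dataproc.v1beta2.Cluster("gke-backed-cluster", {
    region: "us-central1",
    config: {
        gkeClusterConfig: {
            namespacedGkeDeploymentTarget: {
                targetGkeCluster: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
                clusterNamespace: "dataproc",
            },
        },
    },
});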

NamespacedGkeDeploymentTargetResponse
, NamespacedGkeDeploymentTargetResponseArgs

ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
ClusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
TargetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. string
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. string
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
cluster_namespace This property is required. str
Optional. A namespace within the GKE cluster to deploy into.
target_gke_cluster This property is required. str
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
clusterNamespace This property is required. String
Optional. A namespace within the GKE cluster to deploy into.
targetGkeCluster This property is required. String
Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'

NodeGroupAffinity
, NodeGroupAffinityArgs

NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
node_group_uri This property is required. str
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
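Node group affinity is set under the cluster's gceClusterConfig input. The TypeScript sketch below uses the short node group name form; the zone and node group name are placeholder assumptions.

import * as google_native from "@pulumi/google-native";

// Sketch: place all cluster VMs on a sole-tenant node group in the chosen zone.
const soleTenantCluster = new google_native.dataproc.v1beta2.Cluster("sole-tenant-cluster", {
    region: "us-central1",
    config: {
        gceClusterConfig: {
            zoneUri: "us-central1-a",
            nodeGroupAffinity: {
                nodeGroupUri: "node-group-1",
            },
        },
    },
});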

NodeGroupAffinityResponse
, NodeGroupAffinityResponseArgs

NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
NodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. string
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
node_group_uri This property is required. str
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1
nodeGroupUri This property is required. String
The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on. A full URL, partial URI, or node group name is valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1, node-group-1

NodeInitializationAction
, NodeInitializationActionArgs

ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout String
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. string
Cloud Storage URI of executable file.
executionTimeout string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executable_file This property is required. str
Cloud Storage URI of executable file.
execution_timeout str
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout String
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
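Initialization actions are passed as a list under config.initializationActions. A TypeScript sketch follows; the script path and timeout are illustrative assumptions.

import * as google_native from "@pulumi/google-native";

// Sketch: run a startup script on every node, failing cluster creation if it exceeds 10 minutes.
const initActionCluster = new google_native.dataproc.v1beta2.Cluster("init-action-cluster", {
    region: "us-central1",
    config: {
        initializationActions: [{
            executableFile: "gs://my-bucket/scripts/install-deps.sh",
            executionTimeout: "600s",
        }],
    },
});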

NodeInitializationActionResponse
, NodeInitializationActionResponseArgs

ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
ExecutableFile This property is required. string
Cloud Storage URI of executable file.
ExecutionTimeout This property is required. string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. string
Cloud Storage URI of executable file.
executionTimeout This property is required. string
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executable_file This property is required. str
Cloud Storage URI of executable file.
execution_timeout This property is required. str
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.
executableFile This property is required. String
Cloud Storage URI of executable file.
executionTimeout This property is required. String
Optional. Amount of time the executable has to complete. The default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable has not completed by the end of the timeout period.

ReservationAffinity
, ReservationAffinityArgs

ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1Beta2.ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume
Key string
Optional. Corresponds to the label key of reservation resource.
Values List<string>
Optional. Corresponds to the label values of reservation resource.
ConsumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume
Key string
Optional. Corresponds to the label key of reservation resource.
Values []string
Optional. Corresponds to the label values of reservation resource.
consumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume
key String
Optional. Corresponds to the label key of reservation resource.
values List<String>
Optional. Corresponds to the label values of reservation resource.
consumeReservationType ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume
key string
Optional. Corresponds to the label key of reservation resource.
values string[]
Optional. Corresponds to the label values of reservation resource.
consume_reservation_type ReservationAffinityConsumeReservationType
Optional. Type of reservation to consume
key str
Optional. Corresponds to the label key of reservation resource.
values Sequence[str]
Optional. Corresponds to the label values of reservation resource.
consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION"
Optional. Type of reservation to consume
key String
Optional. Corresponds to the label key of reservation resource.
values List<String>
Optional. Corresponds to the label values of reservation resource.
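Reservation affinity is set under the cluster's gceClusterConfig input. The TypeScript sketch below targets a specific reservation; the reservation name is a placeholder, and the label key shown is the one Compute Engine commonly uses for named reservations, so treat it as an assumption.

import * as google_native from "@pulumi/google-native";

// Sketch: consume capacity only from a specific Compute Engine reservation.
const reservedCluster = new google_native.dataproc.v1beta2.Cluster("reserved-cluster", {
    region: "us-central1",
    config: {
        gceClusterConfig: {
            reservationAffinity: {
                consumeReservationType: "SPECIFIC_RESERVATION",
                key: "compute.googleapis.com/reservation-name",
                values: ["my-reservation"],
            },
        },
    },
});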

ReservationAffinityConsumeReservationType
, ReservationAffinityConsumeReservationTypeArgs

TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.
ReservationAffinityConsumeReservationTypeTypeUnspecified
TYPE_UNSPECIFIED
ReservationAffinityConsumeReservationTypeNoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
ReservationAffinityConsumeReservationTypeAnyReservation
ANY_RESERVATION: Consume any reservation available.
ReservationAffinityConsumeReservationTypeSpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.
TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.
TypeUnspecified
TYPE_UNSPECIFIED
NoReservation
NO_RESERVATION: Do not consume from any allocated capacity.
AnyReservation
ANY_RESERVATION: Consume any reservation available.
SpecificReservation
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.
TYPE_UNSPECIFIED
TYPE_UNSPECIFIED
NO_RESERVATION
NO_RESERVATION: Do not consume from any allocated capacity.
ANY_RESERVATION
ANY_RESERVATION: Consume any reservation available.
SPECIFIC_RESERVATION
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.
"TYPE_UNSPECIFIED"
TYPE_UNSPECIFIED
"NO_RESERVATION"
NO_RESERVATION: Do not consume from any allocated capacity.
"ANY_RESERVATION"
ANY_RESERVATION: Consume any reservation available.
"SPECIFIC_RESERVATION"
SPECIFIC_RESERVATION: Must consume from a specific reservation. The key and values fields must be set to identify the reservation.

ReservationAffinityResponse
, ReservationAffinityResponseArgs

ConsumeReservationType This property is required. string
Optional. Type of reservation to consume
Key This property is required. string
Optional. Corresponds to the label key of reservation resource.
Values This property is required. List<string>
Optional. Corresponds to the label values of reservation resource.
ConsumeReservationType This property is required. string
Optional. Type of reservation to consume
Key This property is required. string
Optional. Corresponds to the label key of reservation resource.
Values This property is required. []string
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume
key This property is required. String
Optional. Corresponds to the label key of reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. string
Optional. Type of reservation to consume
key This property is required. string
Optional. Corresponds to the label key of reservation resource.
values This property is required. string[]
Optional. Corresponds to the label values of reservation resource.
consume_reservation_type This property is required. str
Optional. Type of reservation to consume
key This property is required. str
Optional. Corresponds to the label key of reservation resource.
values This property is required. Sequence[str]
Optional. Corresponds to the label values of reservation resource.
consumeReservationType This property is required. String
Optional. Type of reservation to consume
key This property is required. String
Optional. Corresponds to the label key of reservation resource.
values This property is required. List<String>
Optional. Corresponds to the label values of reservation resource.

SecurityConfig
, SecurityConfigArgs

KerberosConfig KerberosConfig
Optional. Kerberos related configuration.
kerberosConfig KerberosConfig
Optional. Kerberos related configuration.
kerberosConfig KerberosConfig
Optional. Kerberos related configuration.
kerberos_config KerberosConfig
Optional. Kerberos related configuration.
kerberosConfig Property Map
Optional. Kerberos related configuration.

SecurityConfigResponse
, SecurityConfigResponseArgs

KerberosConfig This property is required. Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigResponse
Optional. Kerberos related configuration.
KerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberos_config This property is required. KerberosConfigResponse
Optional. Kerberos related configuration.
kerberosConfig This property is required. Property Map
Optional. Kerberos related configuration.

ShieldedInstanceConfig
, ShieldedInstanceConfigArgs

EnableIntegrityMonitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm bool
Optional. Defines whether instances have the vTPM enabled.
EnableIntegrityMonitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm Boolean
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm boolean
Optional. Defines whether instances have the vTPM enabled.
enable_integrity_monitoring bool
Optional. Defines whether instances have integrity monitoring enabled.
enable_secure_boot bool
Optional. Defines whether instances have Secure Boot enabled.
enable_vtpm bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm Boolean
Optional. Defines whether instances have the vTPM enabled.
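Shielded VM settings also live under the cluster's gceClusterConfig input, as in the following TypeScript sketch (the resource name is a placeholder).

import * as google_native from "@pulumi/google-native";

// Sketch: create cluster nodes as Shielded VMs with Secure Boot, vTPM, and integrity monitoring enabled.
const shieldedCluster = new google_native.dataproc.v1beta2.Cluster("shielded-cluster", {
    region: "us-central1",
    config: {
        gceClusterConfig: {
            shieldedInstanceConfig: {
                enableSecureBoot: true,
                enableVtpm: true,
                enableIntegrityMonitoring: true,
            },
        },
    },
});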

ShieldedInstanceConfigResponse
, ShieldedInstanceConfigResponseArgs

EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
EnableIntegrityMonitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
EnableSecureBoot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
EnableVtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. boolean
Optional. Defines whether instances have the vTPM enabled.
enable_integrity_monitoring This property is required. bool
Optional. Defines whether instances have integrity monitoring enabled.
enable_secure_boot This property is required. bool
Optional. Defines whether instances have Secure Boot enabled.
enable_vtpm This property is required. bool
Optional. Defines whether instances have the vTPM enabled.
enableIntegrityMonitoring This property is required. Boolean
Optional. Defines whether instances have integrity monitoring enabled.
enableSecureBoot This property is required. Boolean
Optional. Defines whether instances have Secure Boot enabled.
enableVtpm This property is required. Boolean
Optional. Defines whether instances have the vTPM enabled.

SoftwareConfig, SoftwareConfigArgs

ImageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1Beta2.SoftwareConfigOptionalComponentsItem>
The set of optional components to activate on the cluster.
Properties Dictionary<string, string>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ImageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents []SoftwareConfigOptionalComponentsItem
The set of optional components to activate on the cluster.
Properties map[string]string
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents List<SoftwareConfigOptionalComponentsItem>
The set of optional components to activate on the cluster.
properties Map<String,String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents SoftwareConfigOptionalComponentsItem[]
The set of optional components to activate on the cluster.
properties {[key: string]: string}
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
image_version str
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optional_components Sequence[SoftwareConfigOptionalComponentsItem]
The set of optional components to activate on the cluster.
properties Mapping[str, str]
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "JUPYTER" | "KERBEROS" | "PRESTO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER">
The set of optional components to activate on the cluster.
properties Map<String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
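
A minimal Python sketch of the prefix:property convention described above; the image version, components, and property values are illustrative only.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Sketch only: pin the image and tune daemon properties via prefix:property keys.
# This object would be passed as config.software_config on a Cluster resource.
software_config = dataproc.SoftwareConfigArgs(
    image_version="1.5-debian10",
    optional_components=[
        dataproc.SoftwareConfigOptionalComponentsItem.JUPYTER,
        dataproc.SoftwareConfigOptionalComponentsItem.ZEPPELIN,
    ],
    properties={
        "core:hadoop.tmp.dir": "/tmp/hadoop",                 # core-site.xml
        "spark:spark.executor.memory": "4g",                  # spark-defaults.conf
        "yarn:yarn.nodemanager.resource.memory-mb": "8192",   # yarn-site.xml
    },
)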

SoftwareConfigOptionalComponentsItem, SoftwareConfigOptionalComponentsItemArgs

ComponentUnspecified
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
Anaconda
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
Docker
DOCKER - Docker.
Druid
DRUID - The Druid query engine.
Flink
FLINK - Flink.
Hbase
HBASE - HBase.
HiveWebhcat
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
Jupyter
JUPYTER - The Jupyter Notebook.
Kerberos
KERBEROS - The Kerberos security feature.
Presto
PRESTO - The Presto query engine.
Ranger
RANGER - The Ranger service.
Solr
SOLR - The Solr service.
Zeppelin
ZEPPELIN - The Zeppelin notebook.
Zookeeper
ZOOKEEPER - The Zookeeper service.
SoftwareConfigOptionalComponentsItemComponentUnspecified
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
SoftwareConfigOptionalComponentsItemAnaconda
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
SoftwareConfigOptionalComponentsItemDocker
DOCKER - Docker.
SoftwareConfigOptionalComponentsItemDruid
DRUID - The Druid query engine.
SoftwareConfigOptionalComponentsItemFlink
FLINK - Flink.
SoftwareConfigOptionalComponentsItemHbase
HBASE - HBase.
SoftwareConfigOptionalComponentsItemHiveWebhcat
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
SoftwareConfigOptionalComponentsItemJupyter
JUPYTER - The Jupyter Notebook.
SoftwareConfigOptionalComponentsItemKerberos
KERBEROS - The Kerberos security feature.
SoftwareConfigOptionalComponentsItemPresto
PRESTO - The Presto query engine.
SoftwareConfigOptionalComponentsItemRanger
RANGER - The Ranger service.
SoftwareConfigOptionalComponentsItemSolr
SOLR - The Solr service.
SoftwareConfigOptionalComponentsItemZeppelin
ZEPPELIN - The Zeppelin notebook.
SoftwareConfigOptionalComponentsItemZookeeper
ZOOKEEPER - The Zookeeper service.
ComponentUnspecified
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
Anaconda
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
Docker
DOCKER - Docker.
Druid
DRUID - The Druid query engine.
Flink
FLINK - Flink.
Hbase
HBASE - HBase.
HiveWebhcat
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
Jupyter
JUPYTER - The Jupyter Notebook.
Kerberos
KERBEROS - The Kerberos security feature.
Presto
PRESTO - The Presto query engine.
Ranger
RANGER - The Ranger service.
Solr
SOLR - The Solr service.
Zeppelin
ZEPPELIN - The Zeppelin notebook.
Zookeeper
ZOOKEEPER - The Zookeeper service.
ComponentUnspecified
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
Anaconda
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
Docker
DOCKER - Docker.
Druid
DRUID - The Druid query engine.
Flink
FLINK - Flink.
Hbase
HBASE - HBase.
HiveWebhcat
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
Jupyter
JUPYTER - The Jupyter Notebook.
Kerberos
KERBEROS - The Kerberos security feature.
Presto
PRESTO - The Presto query engine.
Ranger
RANGER - The Ranger service.
Solr
SOLR - The Solr service.
Zeppelin
ZEPPELIN - The Zeppelin notebook.
Zookeeper
ZOOKEEPER - The Zookeeper service.
COMPONENT_UNSPECIFIED
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
ANACONDA
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
DOCKER
DOCKER - Docker.
DRUID
DRUID - The Druid query engine.
FLINK
FLINK - Flink.
HBASE
HBASE - HBase.
HIVE_WEBHCAT
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
JUPYTER
JUPYTER - The Jupyter Notebook.
KERBEROS
KERBEROS - The Kerberos security feature.
PRESTO
PRESTO - The Presto query engine.
RANGER
RANGER - The Ranger service.
SOLR
SOLR - The Solr service.
ZEPPELIN
ZEPPELIN - The Zeppelin notebook.
ZOOKEEPER
ZOOKEEPER - The Zookeeper service.
"COMPONENT_UNSPECIFIED"
COMPONENT_UNSPECIFIED - Unspecified component. Specifying this will cause Cluster creation to fail.
"ANACONDA"
ANACONDA - The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
"DOCKER"
DOCKER - Docker.
"DRUID"
DRUID - The Druid query engine.
"FLINK"
FLINK - Flink.
"HBASE"
HBASE - HBase.
"HIVE_WEBHCAT"
HIVE_WEBHCAT - The Hive Web HCatalog (the REST service for accessing HCatalog).
"JUPYTER"
JUPYTER - The Jupyter Notebook.
"KERBEROS"
KERBEROS - The Kerberos security feature.
"PRESTO"
PRESTO - The Presto query engine.
"RANGER"
RANGER - The Ranger service.
"SOLR"
SOLR - The Solr service.
"ZEPPELIN"
ZEPPELIN - The Zeppelin notebook.
"ZOOKEEPER"
ZOOKEEPER - The Zookeeper service.
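
The same components are listed once per SDK above. In the Python SDK, for example, the members are referenced as enum values; this short sketch assumes the pulumi_google_native package layout, while YAML programs use the quoted string values shown in the last column.

from pulumi_google_native.dataproc.v1beta2 import SoftwareConfigOptionalComponentsItem

# Sketch only: the Python enum members mirror the API string values listed above.
components = [
    SoftwareConfigOptionalComponentsItem.DOCKER,
    SoftwareConfigOptionalComponentsItem.KERBEROS,
]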

SoftwareConfigResponse, SoftwareConfigResponseArgs

ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. List<string>
The set of optional components to activate on the cluster.
Properties This property is required. Dictionary<string, string>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
ImageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
OptionalComponents This property is required. []string
The set of optional components to activate on the cluster.
Properties This property is required. map[string]string
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
The set of optional components to activate on the cluster.
properties This property is required. Map<String,String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. string
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. string[]
The set of optional components to activate on the cluster.
properties This property is required. {[key: string]: string}
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
image_version This property is required. str
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optional_components This property is required. Sequence[str]
The set of optional components to activate on the cluster.
properties This property is required. Mapping[str, str]
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
imageVersion This property is required. String
Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
optionalComponents This property is required. List<String>
The set of optional components to activate on the cluster.
properties This property is required. Map<String>
Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml, core: core-site.xml, distcp: distcp-default.xml, hdfs: hdfs-site.xml, hive: hive-site.xml, mapred: mapred-site.xml, pig: pig.properties, spark: spark-defaults.conf, yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
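
The response variant is what the provider reports back on the cluster's config output. A short Python sketch of reading it, assuming cluster is a Cluster resource like the ones sketched earlier:

import pulumi

# Sketch only: export the image version Dataproc actually resolved for the cluster.
# `cluster` is assumed to be a dataproc.Cluster resource defined elsewhere in the program.
pulumi.export("resolved_image_version",
              cluster.config.apply(lambda c: c.software_config.image_version))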

Package Details

Repository
Google Cloud Native pulumi/pulumi-google-native
License
Apache-2.0
