Starburst Greenplum connector#

The Starburst Greenplum connector allows querying and creating tables in an external Greenplum Database.

The Greenplum Database is a massively parallel implementation of the PostgreSQL database, and shares many of its characteristics.

Requirements#

To connect to Greenplum, you need:

  • Greenplum Database or Tanzu Greenplum version 6.0 or higher.

  • Network access from the coordinator and workers to the Greenplum server. Port 5432 is the default port.

  • A valid Starburst Enterprise license.

Configuration#

To access the configured Greenplum database in a catalog named example, create a catalog properties file named example.properties in etc/catalog. Replace example with your database name or another descriptive name for the catalog. Configure the connector by specifying the connector name greenplum, and adjust the connection properties as appropriate for your setup:

connector.name=greenplum
connection-url=jdbc:postgresql://example.net:5432/database
connection-user=root
connection-password=secret

Note that the connection-url follows the syntax of the PostgreSQL JDBC driver, which the Greenplum connector uses to connect to the database.
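The PostgreSQL JDBC driver also accepts additional parameters appended to the URL. As a sketch, the following URL enables SSL for the connection; ssl is a standard PostgreSQL JDBC driver parameter, and the host, port, and database name are placeholders to adjust for your environment:

connection-url=jdbc:postgresql://example.net:5432/database?ssl=true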

General configuration properties#

The following general catalog configuration properties are available for the connector:

  • case-insensitive-name-matching: Support case-insensitive schema and table names. Default: false.

  • case-insensitive-name-matching.cache-ttl: Duration for which case-insensitive name matching results are cached. Default: 1m.

  • case-insensitive-name-matching.config-file: Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. Default: null.

  • case-insensitive-name-matching.config-file.refresh-period: Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. Default: refresh disabled.

  • metadata.cache-ttl: Duration for which metadata, including table and column statistics, is cached. Default: 0s (caching disabled).

  • metadata.cache-missing: Cache the fact that metadata, including table and column statistics, is not available. Default: false.

  • metadata.cache-maximum-size: Maximum number of objects stored in the metadata cache. Default: 10000.

  • write.batch-size: Maximum number of statements in a batched execution. Do not change this setting from the default; non-default values may negatively impact performance. Default: 1000.

  • dynamic-filtering.enabled: Push down dynamic filters into JDBC queries. Default: true.

  • dynamic-filtering.wait-timeout: Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but it can also increase latency for some queries. Default: 20s.
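As a minimal sketch, the following catalog configuration enables case-insensitive name matching and metadata caching; the values shown are illustrative, not recommendations:

case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true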

Multiple databases or master hosts#

The connector can only access a single database managed by a Greenplum system per catalog. Thus, if you have multiple Greenplum databases, or you want to connect to multiple Greenplum master hosts, you must configure multiple catalogs using the connector.
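For example, assuming two databases named sales and marketing (hypothetical names) on the same master host, you can create the catalog files etc/catalog/sales.properties and etc/catalog/marketing.properties, identical except for the database name in the connection-url:

connector.name=greenplum
connection-url=jdbc:postgresql://example.net:5432/sales
connection-user=root
connection-password=secret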

Type mapping#

Because Trino and Greenplum each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.

Greenplum to Trino type mapping#

The connector maps Greenplum types to the corresponding Trino types as follows:

  • BOOLEAN, BIT(1) → BOOLEAN

  • SMALLINT, INT2 → SMALLINT

  • INTEGER, INT, INT4, SERIAL, SERIAL4 → INTEGER

  • BIGINT, INT8, BIGSERIAL, SERIAL8 → BIGINT

  • REAL, FLOAT4 → REAL. The special values Infinity, -Infinity, and NaN are supported.

  • DOUBLE PRECISION, FLOAT, FLOAT8 → DOUBLE

  • DECIMAL(p, s) → DECIMAL(p, s)

  • DECIMAL → DOUBLE

  • MONEY → VARCHAR. Be aware of locale-specific formatting of MONEY set by lc_monetary.

  • VARCHAR(n), CHARACTER VARYING → VARCHAR(n)

  • TEXT → VARCHAR (unbounded)

  • BYTEA → VARBINARY

  • DATE → DATE

  • TIME → TIME(3). TIME WITH TIME ZONE is not supported.

  • TIMESTAMP → TIMESTAMP(6). TIMESTAMP WITH TIME ZONE is supported.

  • UUID → UUID

  • JSON, JSONB → JSON

No other types are supported.

Trino to Greenplum type mapping#

The connector maps Trino types to the corresponding Greenplum types as follows:

  • BOOLEAN → BOOLEAN

  • TINYINT → SMALLINT. Greenplum has no TINYINT type, so written values are stored as SMALLINT.

  • SMALLINT → SMALLINT

  • INTEGER → INTEGER

  • BIGINT → BIGINT

  • REAL → REAL

  • DOUBLE → DOUBLE PRECISION

  • DECIMAL(p, s) → DECIMAL(p, s)

  • CHAR → CHAR

  • VARCHAR → VARCHAR

  • VARBINARY → BYTEA

  • DATE → DATE

  • TIME, TIME(3) → TIME

  • TIMESTAMP(p) → TIMESTAMP(p). With or without time zone; all precisions are supported.

No other types are supported.
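As an illustration of the write mapping, creating a table with a TINYINT column through the connector results in a SMALLINT column in the Greenplum database. The catalog, schema, and table names below are hypothetical:

CREATE TABLE example.public.type_demo (small_value TINYINT);

-- The column is stored as SMALLINT, and SHOW COLUMNS reports it as smallint
SHOW COLUMNS FROM example.public.type_demo;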

SQL support#

The connector provides read and write access to data and metadata in the Greenplum database. In addition to the globally available and read operation statements, the connector supports the features described in the following sections:

ALTER TABLE RENAME TO#

The connector supports renaming a table within a schema, but not across schemas. For example, the following statement is supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two

The following statement attempts to rename a table across schemas, and therefore is not supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two

ALTER TABLE EXECUTE#

The connector supports the following commands for use with ALTER TABLE EXECUTE:

collect_statistics#

The collect_statistics command is used with Managed statistics to collect statistics for a table and its columns.

The following statement collects statistics for the example_table table and all of its columns:

ALTER TABLE example_table EXECUTE collect_statistics;

Collecting statistics for all columns in a table may be unnecessarily performance-intensive, especially for wide tables. To only collect statistics for a subset of columns, you can include the columns parameter with an array of column names. For example:

ALTER TABLE example_table
    EXECUTE collect_statistics(columns => ARRAY['customer','line_item']);

Decimal type handling#

DECIMAL types with precision larger than 38 can be mapped to a SEP DECIMAL by setting the decimal-mapping configuration property, or the decimal_mapping catalog session property, to allow_overflow. The scale of the resulting type is controlled with the decimal-default-scale configuration property or the decimal_default_scale catalog session property. The precision is always 38.

By default, values that require rounding or truncation to fit cause a failure at runtime. This behavior is controlled with the decimal-rounding-mode configuration property or the decimal_rounding_mode session property, which can be set to UNNECESSARY (the default), UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, or HALF_EVEN. (See RoundingMode.)
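As a minimal sketch of this configuration, the following catalog properties allow overflow with a scale of 8 and round half up; the values are illustrative:

decimal-mapping=allow_overflow
decimal-default-scale=8
decimal-rounding-mode=HALF_UP

The rounding mode can also be changed for a single session, assuming a catalog named example:

SET SESSION example.decimal_rounding_mode = 'HALF_UP';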

Array type handling#

The Greenplum array implementation does not support fixed dimensions, whereas SEP supports only arrays with fixed dimensions. You can configure how the Greenplum connector handles arrays with the greenplum.array-mapping configuration property or the array_mapping catalog session property. The following values are accepted for this property, with an example configuration after the list:

  • DISABLED (default): array columns are skipped.

  • AS_ARRAY: array columns are interpreted as the SEP ARRAY type, for array columns with fixed dimensions.

  • AS_JSON: array columns are interpreted as the SEP JSON type, with no constraint on dimensions.
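For example, to interpret array columns as the SEP ARRAY type, set the following catalog property:

greenplum.array-mapping=AS_ARRAY

Alternatively, you can change the mapping for a single session, assuming a catalog named example:

SET SESSION example.array_mapping = 'AS_JSON';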

Type mapping configuration properties#

The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.

  • unsupported-type-handling: Configure how unsupported column data types are handled. With IGNORE, the column is not accessible; with CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. Default: IGNORE.

  • jdbc-types-mapped-to-varchar: Allow forced mapping of a comma-separated list of data types to unbounded VARCHAR.
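For example, to expose unsupported columns as VARCHAR instead of hiding them, set the following catalog property:

unsupported-type-handling=CONVERT_TO_VARCHAR

To force specific types to VARCHAR instead, list them in jdbc-types-mapped-to-varchar; the type names below are hypothetical:

jdbc-types-mapped-to-varchar=mytype,othertype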

Performance#

The connector includes a number of performance improvements, detailed in the following sections.

Parallelism#

You can specify the Greenplum database’s concurrency strategy for reading to take advantage of the parallel processing power of Greenplum and SEP.

The connector supports two types of parallelism. The default is NO_PARALLELISM, where data is read from Greenplum in a single split. The other type, SEGMENTS, creates multiple splits to read data from Greenplum in parallel, using the gp_segment_id column of the table or materialized view. Splits are processed in parallel on workers.

With SEGMENTS, you specify the maximum number of splits to create per scan. Specifying a number larger than the number of segments in Greenplum results in fewer splits than specified. For example, if Greenplum has 16 segments but max-splits-per-scan is set to 20, only 16 splits are created. Ideally, the worker count in SEP equals or exceeds the number of segment servers in Greenplum.

Greenplum parallelism configuration properties#

  • greenplum.parallelism-type: Specify either NO_PARALLELISM or SEGMENTS. Default: NO_PARALLELISM.

  • greenplum.parallel.max-splits-per-scan: Specify an integer from 1 to 100. Default: 10.

For example:

greenplum.parallelism-type=SEGMENTS
greenplum.parallel.max-splits-per-scan=20

If the source stage produces multiple splits, an INSERT operation from SEP issues multiple parallel INSERT queries to the remote data source.

Table statistics#

The Greenplum connector can use table and column statistics for cost based optimizations, to improve query processing performance based on the actual data in the data source.

The statistics are collected by Greenplum and retrieved by the connector.

To collect statistics for a table, execute the following statement in Greenplum.

ANALYZE table_schema.table_name;

Refer to Greenplum documentation for additional ANALYZE options.

Managed statistics#

The connector supports Managed statistics, allowing SEP to collect and store its own table and column statistics that can then be used for performance optimizations in query planning.

Statistics must be collected manually using the built-in collect_statistics command; see collect_statistics for details and examples.

Pushdown#

The connector supports pushdown for a number of operations, including aggregate pushdown for common aggregation functions such as count, sum, avg, min, and max, as well as the cost-based join pushdown described in the following section.

Cost-based join pushdown#

The connector supports cost-based Join pushdown to make intelligent decisions about whether to push down a join operation to the data source.

When cost-based join pushdown is enabled, the connector only pushes down join operations if the available Table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.

The following catalog configuration properties control join pushdown:

  • join-pushdown.enabled: Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled. Default: true.

  • join-pushdown.strategy: Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance; because of this, EAGER is only recommended for testing and troubleshooting purposes. Default: AUTOMATIC.
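As a sketch, the following catalog configuration keeps join pushdown enabled with the cost-based strategy; these are the defaults, shown here for illustration:

join-pushdown.enabled=true
join-pushdown.strategy=AUTOMATIC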

Predicate pushdown support#

The connector does not support pushdown of inequality predicates, such as !=, and range predicates such as >, or BETWEEN, on columns with character string types like CHAR or VARCHAR. Equality predicates, such as IN or =, on columns with character string types are pushed down. This ensures correctness of results since the remote data source may sort strings differently than Trino.

In the following example, the predicates of the first and second queries are not pushed down, because name is a column of type VARCHAR, and > and != are range and inequality predicates, respectively. The predicate of the last query is pushed down.

-- Not pushed down
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name != 'CANADA';
-- Pushed down
SELECT * FROM nation WHERE name = 'CANADA';

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.
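For example, to disable dynamic filtering for the catalog:

dynamic-filtering.enabled=false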

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = '1s';

Compaction#

The maximum size of the dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION example.domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters and the time spent waiting for dynamic filters.
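For example, assuming the JMX catalog is enabled and the Greenplum catalog is named example, you can inspect the dynamic filtering statistics with a query like the following:

SELECT * FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";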

JDBC connection pooling#

When JDBC connection pooling is enabled, each node creates and maintains a connection pool instead of opening and closing separate connections to the data source. Each connection is available to connect to the data source and retrieve data. After completion of an operation, the connection is returned to the pool and can be reused. This improves performance by a small amount, reduces the load on any required authentication system used for establishing the connection, and helps avoid running into connection limits on data sources.

JDBC connection pooling is disabled by default. You can enable JDBC connection pooling by setting the connection-pool.enabled property to true in your catalog configuration file:

connection-pool.enabled=true

The following catalog configuration properties can be used to tune connection pooling:

JDBC connection pooling catalog configuration properties#

  • connection-pool.enabled: Enable connection pooling for the catalog. Default: false.

  • connection-pool.max-size: The maximum number of idle and active connections in the pool. Default: 10.

  • connection-pool.max-connection-lifetime: The maximum lifetime of a connection. When a connection reaches this lifetime, it is removed, regardless of how recently it has been active. Default: 30m.

  • connection-pool.pool-cache-max-size: The maximum size of the JDBC data source cache. Default: 1000.

  • connection-pool.pool-cache-ttl: The expiration time of a cached data source when it is no longer accessed. Default: 30m.
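As a sketch, the following configuration enables pooling and raises the pool size; the values are illustrative, not recommendations:

connection-pool.enabled=true
connection-pool.max-size=20
connection-pool.max-connection-lifetime=30m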

Security#

The connector includes a number of security-related features, detailed in the following sections.

User impersonation#

The connector supports user impersonation. Enable user impersonation in the catalog properties file:

greenplum.impersonation.enabled=true

User impersonation in the Greenplum connector is based on the SET ROLE command supported in PostgreSQL.

Kerberos authentication#

The connector supports Kerberos authentication. Use the following properties in the catalog properties file to configure it.

greenplum.authentication.type=KERBEROS
kerberos.client.principal=example@example.com
kerberos.client.keytab=etc/kerberos/example.keytab
kerberos.config=etc/kerberos/krb5.conf

With this configuration, the user example@example.com, defined in the principal property, is used to connect to the database, and the related Kerberos credentials are located in the example.keytab file.

Kerberos credential pass-through#

The connector can be configured to pass through Kerberos credentials received by SEP to the Greenplum database. Configure Kerberos and SEP, following the instructions in Kerberos credential pass-through.

Next, configure the connector to pass through the credentials from the server to the database in your catalog properties file, and ensure the Kerberos client configuration properties are in place on all nodes.

greenplum.authentication.type=KERBEROS_PASS_THROUGH
http.authentication.krb5.config=/etc/krb5.conf
http-server.authentication.krb5.service-name=exampleServiceName
http-server.authentication.krb5.keytab=/path/to/Keytab/File

Now any database access via SEP is subject to the data access restrictions and permissions of the user supplied via Kerberos.

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

greenplum.authentication.type=PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.