Starburst Redshift connector#

The Starburst Redshift connector is an improved version of the Trino Redshift connector that allows querying and creating tables in an external Amazon Redshift cluster.

Requirements#

To connect to Redshift, you need:

  • Network access from the coordinator and workers to the Redshift cluster. Port 5439 is the default port.

Configuration#

To configure a Redshift catalog, create a catalog properties file in etc/catalog named, for example, example.properties. The following is a minimal configuration for a Redshift catalog properties file:

connector.name=redshift
connection-url=jdbc:redshift://example.net:5439/database
connection-user=redshift_username
connection-password=redshift_password

The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid using actual values in the catalog properties files.
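
For example, a minimal sketch using secrets resolved from environment variables; the REDSHIFT_USER and REDSHIFT_PASSWORD variable names are hypothetical and must be set on every node:

# Values are read from environment variables on the cluster nodes
connection-user=${ENV:REDSHIFT_USER}
connection-password=${ENV:REDSHIFT_PASSWORD}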

Connection security#

If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.

For example, in version 2.1 of the Redshift JDBC driver, TLS/SSL is enabled by default with the SSL parameter. You can disable or further configure TLS by appending parameters to the connection-url configuration property:

connection-url=jdbc:redshift://example.net:5439/database;SSL=TRUE;
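
As a sketch of further configuration, the following appends additional TLS parameters; the SSLMode and SSLRootCert parameter names are taken from the Redshift JDBC driver documentation and should be verified against your driver version:

# Parameter names assumed from the Redshift JDBC driver documentation
connection-url=jdbc:redshift://example.net:5439/database;SSL=TRUE;SSLMode=verify-full;SSLRootCert=/path/to/redshift-ca.pem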

For more information on TLS configuration options, see the Redshift JDBC driver documentation.

Data source authentication#

The connector can provide credentials for the data source connection in multiple ways:

  • inline, in the connector configuration file

  • in a separate properties file

  • in a key store file

  • as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.

The following table describes configuration properties for connection credentials:

| Property name | Description |
| --- | --- |
| credential-provider.type | Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE. |
| connection-user | Connection user name. |
| connection-password | Connection password. |
| user-credential-name | Name of the extra credentials property whose value to use as the user name. See extraCredentials in Parameter reference. |
| password-credential-name | Name of the extra credentials property whose value to use as the password. |
| connection-credential-file | Location of the properties file that contains the credentials. It must contain the connection-user and connection-password properties. |
| keystore-file-path | Location of the Java keystore file from which to read credentials. |
| keystore-type | File format of the keystore file, for example JKS or PEM. |
| keystore-password | Password for the keystore. |
| keystore-user-credential-name | Name of the keystore entity to use as the user name. |
| keystore-user-credential-password | Password for the user name keystore entity. |
| keystore-password-credential-name | Name of the keystore entity to use as the password. |
| keystore-password-credential-password | Password for the password keystore entity. |
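
For example, a minimal sketch of the FILE credential provider, using a hypothetical etc/redshift-credentials.properties file that defines the connection-user and connection-password properties:

credential-provider.type=FILE
# Hypothetical path; the file must contain connection-user and connection-password
connection-credential-file=etc/redshift-credentials.properties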

Multiple Redshift databases or clusters#

By default, the Redshift connector can only access a single database within a Redshift cluster. Enable the redshift.database-prefix-for-schema.enabled catalog configuration property to access multiple databases on a Redshift cluster as described in the following table:

Starburst Redshift connector configuration properties#

| Property name | Description | Default |
| --- | --- | --- |
| redshift.database-prefix-for-schema.enabled | Allow access to other databases in Redshift by including the database name in double quotes with the schema name, for example SELECT * FROM catalog."database.schema".table. When enabled, "database.schema", including the double quotes, is required at all times as part of the fully-qualified name. Enabling this feature also disables write operations, so that the catalog only supports globally available and read operation SQL statements. | false |

To connect to multiple Redshift clusters, you must configure additional catalogs using the Redshift connector for each cluster.
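
For example, a sketch of two catalog files for two separate clusters; the file names, host names, and database names are hypothetical:

# etc/catalog/sales.properties
connector.name=redshift
connection-url=jdbc:redshift://sales.example.net:5439/salesdb

# etc/catalog/analytics.properties
connector.name=redshift
connection-url=jdbc:redshift://analytics.example.net:5439/analyticsdb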

General configuration properties#

The following table describes general catalog configuration properties for the connector:

| Property name | Description | Default value |
| --- | --- | --- |
| case-insensitive-name-matching | Support case insensitive schema and table names. | false |
| case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table names are cached. | 1m |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value must be a duration. | (refresh disabled) |
| metadata.cache-ttl | Duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled) |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default; non-default values may negatively impact performance. | 1000 |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino waits for dynamic filters to be collected from the build side of joins before starting a JDBC query. A large timeout can potentially result in more detailed dynamic filters, but can also increase latency for some queries. | 20s |
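
For example, a sketch that enables metadata caching with illustrative values; tune them for your workload:

# Cache metadata for ten minutes and also cache missing-metadata lookups
metadata.cache-ttl=10m
metadata.cache-missing=true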

Domain compaction threshold#

Pushing down a large list of predicates to the data source can compromise performance. Trino compacts large predicates into a simpler range predicate by default to ensure a balance between performance and predicate pushdown. If necessary, the threshold for this compaction can be increased to improve performance when the data source is capable of taking advantage of large predicates. Increasing this threshold may improve pushdown of large dynamic filters. The domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property can be used to adjust the default value of 32 for this threshold.

Procedures#

  • system.flush_metadata_cache()

    Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

    USE example.example_schema;
    CALL system.flush_metadata_cache();
    

Case insensitive matching#

When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as “customers” and “Customers”) then Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:

{
  "schemas": [
    {
      "remoteSchema": "CaseSensitiveName",
      "mapping": "case_insensitive_1"
    },
    {
      "remoteSchema": "cASEsENSITIVEnAME",
      "mapping": "case_insensitive_2"
    }],
  "tables": [
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "tablex",
      "mapping": "table_1"
    },
    {
      "remoteSchema": "CaseSensitiveName",
      "remoteTable": "TABLEX",
      "mapping": "table_2"
    }]
}

Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.

By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period property to have Trino refresh the mappings without requiring a restart:

case-insensitive-name-matching.config-file.refresh-period=30s

Non-transactional INSERT#

The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. To skip this step and write directly to the target table, which improves performance, set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.

Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
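
For example, to enable non-transactional inserts for a single session in a catalog named example:

SET SESSION example.non_transactional_insert = true;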

Querying Redshift#

The Redshift connector provides a schema for every Redshift schema. See the available Redshift schemas by running SHOW SCHEMAS. The following example shows the Redshift schemas available in a catalog named example:

SHOW SCHEMAS FROM example;

If you have a Redshift schema named web, view the tables in this schema by running SHOW TABLES:

SHOW TABLES FROM example.web;

See a list of the columns in the clicks table in the web database using either DESCRIBE or SHOW COLUMNS:

DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;

Finally, access the clicks table in the web schema:

SELECT * FROM example.web.clicks;

If you used a different name for your catalog properties file, use that catalog name instead of example in the above examples.

Type mapping#

Because SEP and Redshift each support types that the other does not, this connector modifies some types when reading or writing data.

Redshift to SEP type mapping#

This connector supports reading the following Redshift types and converts them to SEP types as detailed in the following table.

| Redshift database type | SEP type | Notes |
| --- | --- | --- |
| BOOLEAN | BOOLEAN | |
| SMALLINT, INT2 | SMALLINT | |
| INTEGER, INT, INT4 | INTEGER | |
| BIGINT, INT8 | BIGINT | |
| DOUBLE PRECISION, FLOAT, FLOAT8 | DOUBLE | |
| REAL, FLOAT4 | REAL | |
| DECIMAL(p, s), NUMERIC(p, s) | DECIMAL(p, s) | |
| CHAR(n), NCHAR(n), BPCHAR | CHAR(n) | Redshift’s BPCHAR is equivalent to CHAR(256). |
| VARCHAR(n), NVARCHAR(n), TEXT | VARCHAR(n) | Redshift’s TEXT is equivalent to VARCHAR(256). |
| DATE | DATE | |
| TIME | TIME(6) | See Mapping datetime types. |
| TIMESTAMP | TIMESTAMP(6) | See Mapping datetime types. |

No other types are supported.

SEP to Redshift type mapping#

This connector supports writing the following SEP types and converts them to Redshift types as detailed in the following table.

| SEP type | Redshift type | Notes |
| --- | --- | --- |
| BOOLEAN | BOOLEAN | |
| TINYINT | SMALLINT | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE PRECISION | |
| DECIMAL(p, s) | DECIMAL(p, s) | |
| CHAR(n) | CHAR(n) | For n up to 4096. |
| CHAR(n) | VARCHAR(n) | For n from 4097 to 65535. |
| CHAR(n) | VARCHAR(MAX) | For n above 65535. |
| VARCHAR(n) | VARCHAR(n) | For n up to 65535. |
| VARCHAR(n) | VARCHAR(MAX) | For n above 65535. |
| VARCHAR | VARCHAR(MAX) | When no bound is given. |
| DATE | DATE | |
| TIME(p) | TIME | See Mapping datetime types. |
| TIMESTAMP(p) | TIMESTAMP | See Mapping datetime types. |

No other types are supported.

Mapping datetime types#

Redshift’s TIME and TIMESTAMP types only support microsecond precision (6 digits). When writing data with higher precision from SEP to Redshift, the time is rounded to the nearest microsecond before being inserted.
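
For example, a sketch with an illustrative nanosecond-precision literal and a hypothetical events table:

-- example.web.events and created_at are hypothetical names
INSERT INTO example.web.events (created_at)
VALUES (TIMESTAMP '2024-01-01 12:00:00.123456789');
-- Redshift stores the value rounded to microseconds: 2024-01-01 12:00:00.123457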

SQL support#

The connector provides read and write access to data and metadata in Redshift, and supports a number of SQL statements beyond the globally available and read operation statements, as detailed in the following sections.

When the redshift.database-prefix-for-schema.enabled catalog configuration property is enabled, the connector only supports globally available and read operation SQL statements.

SQL DELETE#

If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
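
For example, a sketch against the clicks table used elsewhere in this document; user_id is a hypothetical column, and the equality predicate can be fully pushed down:

-- The predicate pushes down, so the DELETE is supported
DELETE FROM example.web.clicks WHERE user_id = 42;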

ALTER TABLE#

The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two

The following statement attempts to rename a table across schemas, and therefore is not supported:

ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two

ALTER SCHEMA#

The connector supports renaming a schema with the ALTER SCHEMA RENAME statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
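
For example, a sketch using a hypothetical target name:

-- Renames the web schema; web_archive is a hypothetical name
ALTER SCHEMA example.web RENAME TO web_archive;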

ALTER TABLE EXECUTE#

The connector supports the following commands for use with ALTER TABLE EXECUTE:

collect_statistics#

The collect_statistics command is used with Managed statistics to collect statistics for a table and its columns.

The following statement collects statistics for the example_table table and all of its columns:

ALTER TABLE example_table EXECUTE collect_statistics;

Collecting statistics for all columns in a table may be unnecessarily performance-intensive, especially for wide tables. To only collect statistics for a subset of columns, you can include the columns parameter with an array of column names. For example:

ALTER TABLE example_table
    EXECUTE collect_statistics(columns => ARRAY['customer','line_item']);

Table functions#

The connector provides specific table functions to access Redshift.

query(varchar) -> table#

The query function allows you to query the underlying database directly. It requires syntax native to Redshift, because the full query is pushed down and processed in Redshift. This can be useful for accessing native features which are not implemented in Trino, or for improving query performance in situations where running a query natively may be faster.

Note

Polymorphic table functions may not preserve the order of the query result. If the table function contains a query with an ORDER BY clause, the function result may not be ordered as expected.

For example, select the top 10 nations by population:

SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT
        TOP 10 *
      FROM
        tpch.nation
      ORDER BY
        population DESC'
    )
  );

Performance#

The connector includes a number of performance improvements, detailed in the following sections.

Pushdown#

The connector supports pushdown for a number of operations, including join pushdown, limit pushdown, top-N pushdown, and aggregate pushdown for common aggregation functions.

Cost-based join pushdown#

The connector supports cost-based Join pushdown to make intelligent decisions about whether to push down a join operation to the data source.

When cost-based join pushdown is enabled, the connector only pushes down join operations if the available Table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.

The following table describes catalog configuration properties for join pushdown:

| Property name | Description | Default value |
| --- | --- | --- |
| join-pushdown.enabled | Enable join pushdown. The equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance; it is therefore only recommended for testing and troubleshooting purposes. | AUTOMATIC |
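
For example, to disable join pushdown for a single session in a catalog named example:

SET SESSION example.join_pushdown_enabled = false;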

Table statistics#

The connector can use table and column statistics for cost-based optimizations, to improve query processing performance based on the actual data in the data source.

The statistics are collected by Redshift and retrieved by the connector.

ANALYZE may be run automatically depending on your Redshift configuration. To manually collect statistics for a table, execute the following statement in Redshift.

ANALYZE table_schema.table_name;

Refer to Redshift documentation for additional ANALYZE options.

Managed statistics#

The connector supports Managed statistics, allowing SEP to collect and store its own table and column statistics that can then be used for performance optimizations in query planning.

Statistics must be collected manually using the built-in collect_statistics command; see collect_statistics for details and examples.

Dynamic filtering#

Dynamic filtering is enabled by default. It causes the connector to wait for dynamic filtering to complete before starting a JDBC query.

You can disable dynamic filtering by setting the dynamic-filtering.enabled property in your catalog configuration file to false.

Wait timeout#

By default, table scans on the connector are delayed up to 20 seconds until dynamic filters are collected from the build side of joins. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries.

You can configure the dynamic-filtering.wait-timeout property in your catalog properties file:

dynamic-filtering.wait-timeout=1m

You can use the dynamic_filtering_wait_timeout catalog session property in a specific session:

SET SESSION example.dynamic_filtering_wait_timeout = 1s;

Compaction#

The maximum size of a dynamic filter predicate that is pushed down to the connector during a table scan for a column is configured using the domain-compaction-threshold property in the catalog properties file:

domain-compaction-threshold=100

You can use the domain_compaction_threshold catalog session property:

SET SESSION domain_compaction_threshold = 10;

By default, domain-compaction-threshold is set to 32. When the dynamic predicate for a column exceeds this threshold, it is compacted into a single range predicate.

For example, if the dynamic filter collected for a date column dt on the fact table selects more than 32 days, the filtering condition is simplified from dt IN ('2020-01-10', '2020-01-12',..., '2020-05-30') to dt BETWEEN '2020-01-10' AND '2020-05-30'. Using a large threshold can result in increased table scan overhead due to a large IN list getting pushed down to the data source.

Metrics#

Metrics about dynamic filtering are reported in a JMX table for each catalog:

jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats"

Metrics include information about the total number of dynamic filters, the number of completed dynamic filters, the number of available dynamic filters and the time spent waiting for dynamic filters.
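
For example, assuming the JMX catalog is configured and available as jmx, the following query reads these metrics for a catalog named example:

SELECT *
FROM jmx.current."io.trino.plugin.jdbc:name=example,type=dynamicfilteringstats";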

Starburst Cached Views#

The connector supports table scan redirection to improve performance and reduce load on the data source.

Security#

The connector includes a number of security-related features, detailed in the following sections.

User impersonation#

The connector supports user impersonation. Enable user impersonation in the catalog properties file:

redshift.impersonation.enabled=true

User impersonation in the Redshift connector is based on the SET SESSION AUTHORIZATION command supported in Redshift.

Note

Running SET SESSION AUTHORIZATION in Redshift requires the initial connection user to be a superuser.

Password credential pass-through#

The connector supports password credential pass-through. To enable it, edit the catalog properties file to include the authentication type:

redshift.authentication.type=PASSWORD_PASS_THROUGH

For more information about configurations and limitations, see Password credential pass-through.

AWS IAM authentication#

The connector supports IAM authentication. This enhancement allows you to manage access control from SEP with IAM policies.

Configuration#

To enable IAM authentication, add the following configuration properties to the catalog configuration file:

redshift.authentication.type=AWS
aws.region-name=<AWS region>

This table describes the configuration properties for IAM authentication:

IAM configuration properties#

| Property name | Description |
| --- | --- |
| aws.region-name | The name of the AWS region in which the Redshift instance is deployed. |
| aws.access-key | The access key of the principal to authenticate with for the token generator service. Used for fixed authentication; setting this property disables automatic authentication. |
| aws.secret-key | The secret key of the principal to authenticate with for the token generator service. Used for fixed authentication; setting this property disables automatic authentication. |
| aws.session-token | (Optional) A session token for temporary credentials, such as credentials obtained from SSO. Used for fixed authentication; setting this property disables automatic authentication. |

Authentication#

By default, the connector attempts to automatically obtain its authentication credentials from the environment. The default credential provider chain attempts to obtain credentials from the following sources, in order:

  1. Environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or AWS_ACCESS_KEY and AWS_SECRET_KEY.

  2. Java system properties: aws.accessKeyId and aws.secretKey.

  3. Web identity token: credentials from the environment or container.

  4. Credential profiles file: a profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI.

  5. EC2 service credentials: credentials delivered through the Amazon EC2 container service, assuming the security manager has permission to access the value of the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable.

  6. Instance profile credentials: credentials delivered through the Amazon EC2 metadata service.

If the SEP cluster is running on an EC2 instance, these credentials most likely come from the metadata service.

Alternatively, you can set fixed credentials for authentication. This option disables the automatic credential discovery described above. To use fixed credentials for authentication, set the following configuration properties:

aws.access-key=<access_key>
aws.secret-key=<secret_key>

# (Optional) You can use temporary credentials. For example, you can use temporary credentials from SSO
aws.session-token=<session_token>

Limitations#

  • The Starburst Redshift connector does not push down queries with a GROUP BY and WHERE clause on the same column for tables using ALL or AUTO(ALL) distribution styles due to a limitation in Redshift. You can work around this by changing the table to use an EVEN or KEY distribution style as described in the Redshift documentation about distribution styles.
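
As a sketch of the workaround, run the following in Redshift itself; example_table is a hypothetical name, and the ALTER DISTSTYLE syntax should be verified against the Redshift documentation:

-- Run in Redshift, not in SEP
ALTER TABLE example_table ALTER DISTSTYLE EVEN;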