PostgreSQL connector#
The PostgreSQL connector allows querying and creating tables in an external PostgreSQL database. This can be used to join data between different systems like PostgreSQL and Hive, or between different PostgreSQL instances.
Requirements#
To connect to PostgreSQL, you need:
- PostgreSQL 10.x or higher.
- Network access from the Trino coordinator and workers to PostgreSQL. Port 5432 is the default port.
Configuration#
The connector can query a database on a PostgreSQL server. Create a catalog properties file that specifies the PostgreSQL connector by setting the connector.name to postgresql.

For example, to access a database as the example catalog, create the file etc/catalog/example.properties. Replace the connection properties as appropriate for your setup:
connector.name=postgresql
connection-url=jdbc:postgresql://example.net:5432/database
connection-user=root
connection-password=secret
The connection-url defines the connection information and parameters to pass to the PostgreSQL JDBC driver. The parameters for the URL are available in the PostgreSQL JDBC driver documentation. Some parameters can have adverse effects on the connector behavior or not work with the connector.
The connection-user and connection-password are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid actual values in the catalog properties files.
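For example, assuming the password is stored in a hypothetical POSTGRES_PASSWORD environment variable on the cluster nodes, the environment variable secrets syntax keeps the actual value out of the file:

connection-password=${ENV:POSTGRES_PASSWORD}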
Connection security#
If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the connection-url catalog configuration property.

For example, with version 42 of the PostgreSQL JDBC driver, enable TLS by appending the ssl=true parameter to the connection-url configuration property:
connection-url=jdbc:postgresql://example.net:5432/database?ssl=true
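As a further sketch, assuming a server CA certificate is available to the cluster at the hypothetical path /etc/postgresql/ca.pem, the driver's sslmode and sslrootcert parameters can additionally verify the server certificate:

connection-url=jdbc:postgresql://example.net:5432/database?ssl=true&sslmode=verify-full&sslrootcert=/etc/postgresql/ca.pem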
For more information on TLS configuration options, see the PostgreSQL JDBC driver documentation.
Data source authentication#
The connector can provide credentials for the data source connection in multiple ways:

- inline, in the connector configuration file
- in a separate properties file
- in a key store file
- as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.
The following table describes configuration properties for connection credentials:
| Property name | Description |
|---|---|
| credential-provider.type | Type of the credential provider. Must be one of INLINE, FILE, or KEYSTORE; defaults to INLINE. |
| connection-user | Connection user name. |
| connection-password | Connection password. |
| user-credential-name | Name of the extra credentials property, whose value to use as the user name. See extraCredentials in the JDBC driver parameter reference. |
| password-credential-name | Name of the extra credentials property, whose value to use as the password. |
| connection-credential-file | Location of the properties file where credentials are present. It must contain the connection-user and connection-password properties. |
| keystore-file-path | The location of the Java Keystore file, from which to read credentials. |
| keystore-type | File format of the keystore file, for example JKS or PEM. |
| keystore-password | Password for the key store. |
| keystore-user-credential-name | Name of the key store entity to use as the user name. |
| keystore-user-credential-password | Password for the user name key store entity. |
| keystore-password-credential-name | Name of the key store entity to use as the password. |
| keystore-password-credential-password | Password for the password key store entity. |
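For example, a minimal sketch that keeps credentials in a separate properties file (the file path is illustrative); the referenced file must contain the connection-user and connection-password properties:

credential-provider.type=FILE
connection-credential-file=etc/catalog/postgresql-credentials.properties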
Multiple PostgreSQL databases or servers#
The PostgreSQL connector can only access a single database within a PostgreSQL server. Thus, if you have multiple PostgreSQL databases, or want to connect to multiple PostgreSQL servers, you must configure multiple instances of the PostgreSQL connector.
To add another catalog, add another properties file to etc/catalog with a different name, making sure it ends in .properties. For example, if you name the property file sales.properties, Trino creates a catalog named sales using the configured connector.
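For example, etc/catalog/sales.properties could point at a second, hypothetical server:

connector.name=postgresql
connection-url=jdbc:postgresql://sales.example.net:5432/salesdb
connection-user=root
connection-password=secret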
General configuration properties#
The following table describes general catalog configuration properties for the connector:
| Property name | Description | Default value |
|---|---|---|
| case-insensitive-name-matching | Support case insensitive schema and table names. | false |
| case-insensitive-name-matching.cache-ttl | Duration for which case insensitive schema and table names are cached. This value should be a duration. | 1m |
| case-insensitive-name-matching.config-file | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | null |
| case-insensitive-name-matching.config-file.refresh-period | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | 0 (refresh disabled) |
| metadata.cache-ttl | The duration for which metadata, including table and column statistics, is cached. | 0s (caching disabled) |
| metadata.cache-missing | Cache the fact that metadata, including table and column statistics, is not available. | false |
| metadata.cache-maximum-size | Maximum number of objects stored in the metadata cache. | 10000 |
| write.batch-size | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | 1000 |
| dynamic-filtering.enabled | Push down dynamic filters into JDBC queries. | true |
| dynamic-filtering.wait-timeout | Maximum duration for which Trino will wait for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. | 20s |
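As an illustrative sketch only (the values are assumptions, not recommendations), a catalog file could enable case insensitive matching and metadata caching like this:

case-insensitive-name-matching=true
metadata.cache-ttl=10m
metadata.cache-missing=true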
Domain compaction threshold#
Pushing down a large list of predicates to the data source can compromise
performance. Trino compacts large predicates into a simpler range predicate
by default to ensure a balance between performance and predicate pushdown.
If necessary, the threshold for this compaction can be increased to improve
performance when the data source is capable of taking advantage of large
predicates. Increasing this threshold may improve pushdown of large
dynamic filters.
The domain-compaction-threshold catalog configuration property or the domain_compaction_threshold catalog session property can be used to adjust the default value of 32 for this threshold.
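For example, to raise the threshold for the current session in the example catalog (the value 1000 is only an illustration):

SET SESSION example.domain_compaction_threshold = 1000;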
Procedures#
system.flush_metadata_cache()

Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the example catalog:

USE example.example_schema;
CALL system.flush_metadata_cache();
Case insensitive matching#
When case-insensitive-name-matching is set to true, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers") then Trino fails to query them due to ambiguity.

In these cases, use the case-insensitive-name-matching.config-file catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:
{
"schemas": [
{
"remoteSchema": "CaseSensitiveName",
"mapping": "case_insensitive_1"
},
{
"remoteSchema": "cASEsENSITIVEnAME",
"mapping": "case_insensitive_2"
}],
"tables": [
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "tablex",
"mapping": "table_1"
},
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "TABLEX",
"mapping": "table_2"
}]
}
Queries against one of the tables or schemas defined in the mapping attributes are run against the corresponding remote entity. For example, a query against tables in the case_insensitive_1 schema is forwarded to the CaseSensitiveName schema and a query against case_insensitive_2 is forwarded to the cASEsENSITIVEnAME schema.

At the table mapping level, a query on case_insensitive_1.table_1 as configured above is forwarded to CaseSensitiveName.tablex, and a query on case_insensitive_1.table_2 is forwarded to CaseSensitiveName.TABLEX.
By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the case-insensitive-name-matching.config-file.refresh-period to have Trino refresh the properties without requiring a restart:

case-insensitive-name-matching.config-file.refresh-period=30s
Non-transactional INSERT#
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the insert.non-transactional-insert.enabled catalog property or the corresponding non_transactional_insert catalog session property to true.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
Type mapping#
Because Trino and PostgreSQL each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.
PostgreSQL type to Trino type mapping#
The connector maps PostgreSQL types to the corresponding Trino types following this table:
| PostgreSQL type | Trino type | Notes |
|---|---|---|
| BIT | BOOLEAN | |
| BOOLEAN | BOOLEAN | |
| SMALLINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE | |
| NUMERIC(p, s) | DECIMAL(p, s) | DECIMAL(p, s) is an alias of NUMERIC(p, s). See Decimal type handling for more information. |
| CHAR(n) | CHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
| ENUM | VARCHAR | |
| BYTEA | VARBINARY | |
| DATE | DATE | |
| TIME(n) | TIME(n) | |
| TIMESTAMP(n) | TIMESTAMP(n) | |
| TIMESTAMPTZ(n) | TIMESTAMP(n) WITH TIME ZONE | |
| MONEY | VARCHAR | |
| UUID | UUID | |
| JSON | JSON | |
| JSONB | JSON | |
| HSTORE | MAP(VARCHAR, VARCHAR) | |
| ARRAY | Disabled, ARRAY, or JSON | See Array type handling for more information. |
No other types are supported.
Trino type to PostgreSQL type mapping#
The connector maps Trino types to the corresponding PostgreSQL types following this table:
| Trino type | PostgreSQL type | Notes |
|---|---|---|
| BOOLEAN | BOOLEAN | |
| SMALLINT | SMALLINT | |
| TINYINT | SMALLINT | |
| INTEGER | INTEGER | |
| BIGINT | BIGINT | |
| REAL | REAL | |
| DOUBLE | DOUBLE | |
| DECIMAL(p, s) | NUMERIC(p, s) | DECIMAL(p, s) is an alias of NUMERIC(p, s). See Decimal type handling for more information. |
| CHAR(n) | CHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
| VARBINARY | BYTEA | |
| DATE | DATE | |
| TIME(n) | TIME(n) | |
| TIMESTAMP(n) | TIMESTAMP(n) | |
| TIMESTAMP(n) WITH TIME ZONE | TIMESTAMPTZ(n) | |
| UUID | UUID | |
| JSON | JSONB | |
| ARRAY | ARRAY | See Array type handling for more information. |
No other types are supported.
Decimal type handling#
DECIMAL types with unspecified precision or scale are mapped to a Trino DECIMAL with a default precision of 38 and default scale of 0. The scale can be changed by setting the decimal-mapping configuration property or the decimal_mapping session property to allow_overflow. The scale of the resulting type is controlled via the decimal-default-scale configuration property or the decimal_default_scale session property. The precision is always 38.

By default, values that require rounding or truncation to fit will cause a failure at runtime. This behavior is controlled via the decimal-rounding-mode configuration property or the decimal_rounding_mode session property, which can be set to UNNECESSARY (the default), UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, or HALF_EVEN (see RoundingMode).
Array type handling#
The PostgreSQL array implementation does not support fixed dimensions, whereas Trino supports only arrays with fixed dimensions. You can configure how the PostgreSQL connector handles arrays with the postgresql.array-mapping configuration property in your catalog file or the array_mapping session property.

The following values are accepted for this property:

- DISABLED (default): array columns are skipped.
- AS_ARRAY: array columns are interpreted as Trino ARRAY type, for array columns with fixed dimensions.
- AS_JSON: array columns are interpreted as Trino JSON type, with no constraint on dimensions.
Type mapping configuration properties#
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| unsupported-type-handling | Configure how unsupported column data types are handled: IGNORE, the column is not accessible; CONVERT_TO_VARCHAR, the column is converted to unbounded VARCHAR. The respective catalog session property is unsupported_type_handling. | IGNORE |
| jdbc-types-mapped-to-varchar | Allow forced mapping of comma separated lists of data types to convert to unbounded VARCHAR. | |
Querying PostgreSQL#
The PostgreSQL connector provides a schema for every PostgreSQL schema. You can see the available PostgreSQL schemas by running SHOW SCHEMAS:

SHOW SCHEMAS FROM example;

If you have a PostgreSQL schema named web, you can view the tables in this schema by running SHOW TABLES:

SHOW TABLES FROM example.web;

You can see a list of the columns in the clicks table in the web schema using either of the following:

DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;

Finally, you can access the clicks table in the web schema:

SELECT * FROM example.web.clicks;

If you used a different name for your catalog properties file, use that catalog name instead of example in the above examples.
SQL support#
The connector provides read access and write access to data and metadata in PostgreSQL. In addition to the globally available and read operation statements, the connector supports the following features:

- INSERT
- DELETE
- TRUNCATE
- CREATE TABLE
- CREATE TABLE AS
- DROP TABLE
- ALTER TABLE
- CREATE SCHEMA
- DROP SCHEMA
SQL DELETE#
If a WHERE clause is specified, the DELETE operation only works if the predicate in the clause can be fully pushed down to the data source.
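For example, the following statement succeeds only if the equality predicate on the hypothetical visitor_id column can be fully pushed down to PostgreSQL:

DELETE FROM example.web.clicks WHERE visitor_id = 42;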
ALTER TABLE#
The connector does not support renaming tables across multiple schemas. For example, the following statement is supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_one.table_two
The following statement attempts to rename a table across schemas, and therefore is not supported:
ALTER TABLE example.schema_one.table_one RENAME TO example.schema_two.table_two
ALTER SCHEMA#
The connector supports renaming a schema with the ALTER SCHEMA RENAME statement. ALTER SCHEMA SET AUTHORIZATION is not supported.
Table functions#
The connector provides specific table functions to access PostgreSQL.
query(varchar) -> table#
The query function allows you to query the underlying database directly. It requires syntax native to PostgreSQL, because the full query is pushed down and processed in PostgreSQL. This can be useful for accessing native features which are not available in Trino or for improving query performance in situations where running a query natively may be faster.
Note
Polymorphic table functions may not preserve the order of the query result.
If the table function contains a query with an ORDER BY clause, the function result may not be ordered as expected.
As a simple example, query the example catalog and select an entire table:
SELECT
*
FROM
TABLE(
example.system.query(
query => 'SELECT
*
FROM
tpch.nation'
)
);
As a practical example, you can leverage frame exclusion from PostgreSQL when using window functions:
SELECT
*
FROM
TABLE(
example.system.query(
query => 'SELECT
*,
array_agg(week) OVER (
ORDER BY
week
ROWS
BETWEEN 2 PRECEDING
AND 2 FOLLOWING
EXCLUDE GROUP
) AS week,
array_agg(week) OVER (
ORDER BY
day
ROWS
BETWEEN 2 PRECEDING
AND 2 FOLLOWING
EXCLUDE GROUP
) AS all
FROM
test.time_data'
)
);
Performance#
The connector includes a number of performance improvements, detailed in the following sections.
Table statistics#
The PostgreSQL connector can use table and column statistics for cost based optimizations, to improve query processing performance based on the actual data in the data source.
The statistics are collected by PostgreSQL and retrieved by the connector.
To collect statistics for a table, execute the following statement in PostgreSQL.
ANALYZE table_schema.table_name;
Refer to PostgreSQL documentation for additional ANALYZE options.
Pushdown#
The connector supports pushdown for a number of operations:

- Join pushdown
- Limit pushdown
- Top-N pushdown
- Aggregate pushdown for the following functions:
  - avg()
  - count()
  - max(), min()
  - sum()
  - stddev(), stddev_pop(), stddev_samp()
  - variance(), var_pop(), var_samp()
  - covar_pop(), covar_samp()
  - corr(), regr_intercept(), regr_slope()
Note
The connector performs pushdown where performance may be improved, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation may result in better performance but risks correctness, the connector prioritizes correctness.
Cost-based join pushdown#
The connector supports cost-based Join pushdown to make intelligent decisions about whether to push down a join operation to the data source.
When cost-based join pushdown is enabled, the connector only pushes down join operations if the available Table statistics suggest that doing so improves performance. Note that if no table statistics are available, join operation pushdown does not occur to avoid a potential decrease in query performance.
The following table describes catalog configuration properties for join pushdown:
| Property name | Description | Default value |
|---|---|---|
| join-pushdown.enabled | Enable join pushdown. Equivalent catalog session property is join_pushdown_enabled. | true |
| join-pushdown.strategy | Strategy used to evaluate whether join operations are pushed down. Set to AUTOMATIC to enable cost-based join pushdown, or EAGER to push down joins whenever possible. Note that EAGER can push down joins even when table statistics are unavailable, which may result in degraded query performance. Because of this, EAGER is only recommended for testing and troubleshooting purposes. | AUTOMATIC |
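For example, to always push down joins regardless of available statistics, for instance while troubleshooting, set the strategy in the catalog properties file:

join-pushdown.strategy=EAGER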
Predicate pushdown support#
Predicates are pushed down for most types, including UUID and temporal types, such as DATE.

The connector does not support pushdown of range predicates, such as >, <, or BETWEEN, on columns with character string types like CHAR or VARCHAR. Equality predicates, such as IN or =, and inequality predicates, such as != on columns with textual types are pushed down. This ensures correctness of results since the remote data source may sort strings differently than Trino.
In the following example, the predicate of the first query is not pushed down since name is a column of type VARCHAR and > is a range predicate. The other queries are pushed down.
-- Not pushed down
SELECT * FROM nation WHERE name > 'CANADA';
-- Pushed down
SELECT * FROM nation WHERE name != 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
There is experimental support to enable pushdown of range predicates on columns with character string types, which can be enabled by setting the postgresql.experimental.enable-string-pushdown-with-collate catalog configuration property or the corresponding enable_string_pushdown_with_collate session property to true. Enabling this configuration allows the predicates of all the queries in the above example to be pushed down.
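For example, to enable string range pushdown for a single session in the example catalog:

SET SESSION example.enable_string_pushdown_with_collate = true;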