ClickHouse connector#
The ClickHouse connector allows querying tables in an external ClickHouse server. This can be used to query data in the databases on that server, or combine it with other data from different catalogs accessing ClickHouse or any other supported data source.
Requirements#
To connect to a ClickHouse server, you need:
- ClickHouse (version 21.8 or higher) or Altinity (version 20.8 or higher).
- Network access from the Trino coordinator and workers to the ClickHouse server. Port 8123 is the default port.
Configuration#
The connector can query a ClickHouse server. Create a catalog properties file that specifies the ClickHouse connector by setting the `connector.name` to `clickhouse`.
For example, create the file `etc/catalog/example.properties`. Replace the connection properties as appropriate for your setup:
connector.name=clickhouse
connection-url=jdbc:clickhouse://host1:8123/
connection-user=exampleuser
connection-password=examplepassword
The `connection-url` defines the connection information and parameters to pass to the ClickHouse JDBC driver. The supported parameters for the URL are available in the ClickHouse JDBC driver configuration.
The `connection-user` and `connection-password` are typically required and determine the user credentials for the connection, often a service user. You can use secrets to avoid actual values in the catalog properties files.
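For example, the following sketch keeps the password out of the catalog file by using an environment variable secret; it assumes a CLICKHOUSE_PASSWORD environment variable is set on the coordinator and all workers:
connection-user=exampleuser
connection-password=${ENV:CLICKHOUSE_PASSWORD}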
Connection security#
If you have TLS configured with a globally-trusted certificate installed on your data source, you can enable TLS between your cluster and the data source by appending a parameter to the JDBC connection string set in the `connection-url` catalog configuration property.
For example, with version 2.6.4 of the ClickHouse JDBC driver, enable TLS by appending the `ssl=true` parameter to the `connection-url` configuration property:
connection-url=jdbc:clickhouse://host1:8443/?ssl=true
For more information on TLS configuration options, see the ClickHouse JDBC driver documentation.
Data source authentication#
The connector can provide credentials for the data source connection in multiple ways:

- inline, in the connector configuration file
- in a separate properties file
- in a key store file
- as extra credentials set when connecting to Trino

You can use secrets to avoid storing sensitive values in the catalog properties files.
The following table describes configuration properties for connection credentials:
| Property name | Description |
|---|---|
| `credential-provider.type` | Type of the credential provider. Must be one of `INLINE`, `FILE`, or `KEYSTORE`; defaults to `INLINE`. |
| `connection-user` | Connection user name. |
| `connection-password` | Connection password. |
| `user-credential-name` | Name of the extra credentials property, whose value to use as the user name. See `extraCredentials` in the parameter reference. |
| `password-credential-name` | Name of the extra credentials property, whose value to use as the password. |
| `connection-credential-file` | Location of the properties file where credentials are present. It must contain the `connection-user` and `connection-password` properties. |
| `keystore-file-path` | The location of the Java Keystore file, from which to read credentials. |
| `keystore-type` | File format of the keystore file, for example `JKS` or `PEM`. |
| `keystore-password` | Password for the key store. |
| `keystore-user-credential-name` | Name of the key store entity to use as the user name. |
| `keystore-user-credential-password` | Password for the user name key store entity. |
| `keystore-password-credential-name` | Name of the key store entity to use as the password. |
| `keystore-password-credential-password` | Password for the password key store entity. |
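For example, a sketch of catalog properties that read the credentials from a separate properties file; the file path is illustrative:
credential-provider.type=FILE
connection-credential-file=etc/clickhouse-credentials.properties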
Multiple ClickHouse servers#
If you have multiple ClickHouse servers you need to configure one catalog for each server. To add another catalog:
- Add another properties file to `etc/catalog`
- Save it with a different name that ends in `.properties`

For example, if you name the property file `sales.properties`, Trino uses the configured connector to create a catalog named `sales`.
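For example, a second catalog file `etc/catalog/sales.properties` can point at a different ClickHouse server; the host name here is illustrative:
connector.name=clickhouse
connection-url=jdbc:clickhouse://host2:8123/
connection-user=exampleuser
connection-password=examplepassword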
General configuration properties#
The following table describes general catalog configuration properties for the connector:
| Property name | Description | Default value |
|---|---|---|
| `case-insensitive-name-matching` | Support case insensitive schema and table names. | `false` |
| `case-insensitive-name-matching.cache-ttl` | Duration for which case insensitive schema and table names are cached. This value should be a duration. | `1m` |
| `case-insensitive-name-matching.config-file` | Path to a name mapping configuration file in JSON format that allows Trino to disambiguate between schemas and tables with similar names in different cases. | `null` |
| `case-insensitive-name-matching.config-file.refresh-period` | Frequency with which Trino checks the name matching configuration file for changes. This value should be a duration. | (refresh disabled) |
| `metadata.cache-ttl` | The duration for which metadata, including table and column statistics, is cached. | `0s` (caching disabled) |
| `metadata.cache-missing` | Cache the fact that metadata, including table and column statistics, is not available. | `false` |
| `metadata.cache-maximum-size` | Maximum number of objects stored in the metadata cache. | `10000` |
| `write.batch-size` | Maximum number of statements in a batched execution. Do not change this setting from the default. Non-default values may negatively impact performance. | `1000` |
| `dynamic-filtering.enabled` | Push down dynamic filters into JDBC queries. | `true` |
| `dynamic-filtering.wait-timeout` | Maximum duration for which Trino will wait for dynamic filters to be collected from the build side of joins before starting a JDBC query. Using a large timeout can potentially result in more detailed dynamic filters. However, it can also increase latency for some queries. | `20s` |
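For example, a sketch of catalog properties that enable metadata caching; the values are illustrative rather than recommendations:
metadata.cache-ttl=10m
metadata.cache-missing=true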
Domain compaction threshold#
Pushing down a large list of predicates to the data source can compromise performance. Trino compacts large predicates into a simpler range predicate by default to ensure a balance between performance and predicate pushdown. If necessary, the threshold for this compaction can be increased to improve performance when the data source is capable of taking advantage of large predicates. Increasing this threshold may improve pushdown of large dynamic filters.
The `domain-compaction-threshold` catalog configuration property or the `domain_compaction_threshold` catalog session property can be used to adjust the default value of `1000` for this threshold.
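For example, a sketch that raises the threshold for the current session only, assuming a catalog named example:
SET SESSION example.domain_compaction_threshold = 2000;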
Procedures#
system.flush_metadata_cache()
Flush JDBC metadata caches. For example, the following system call flushes the metadata caches for all schemas in the `example` catalog:
USE example.example_schema;
CALL system.flush_metadata_cache();
Case insensitive matching#
When `case-insensitive-name-matching` is set to `true`, Trino is able to query non-lowercase schemas and tables by maintaining a mapping of the lowercase name to the actual name in the remote system. However, if two schemas and/or tables have names that differ only in case (such as "customers" and "Customers") then Trino fails to query them due to ambiguity.
In these cases, use the `case-insensitive-name-matching.config-file` catalog configuration property to specify a configuration file that maps these remote schemas/tables to their respective Trino schemas/tables:
{
"schemas": [
{
"remoteSchema": "CaseSensitiveName",
"mapping": "case_insensitive_1"
},
{
"remoteSchema": "cASEsENSITIVEnAME",
"mapping": "case_insensitive_2"
}],
"tables": [
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "tablex",
"mapping": "table_1"
},
{
"remoteSchema": "CaseSensitiveName",
"remoteTable": "TABLEX",
"mapping": "table_2"
}]
}
Queries against one of the tables or schemas defined in the `mapping` attributes are run against the corresponding remote entity. For example, a query against tables in the `case_insensitive_1` schema is forwarded to the `CaseSensitiveName` schema and a query against `case_insensitive_2` is forwarded to the `cASEsENSITIVEnAME` schema.
At the table mapping level, a query on `case_insensitive_1.table_1` as configured above is forwarded to `CaseSensitiveName.tablex`, and a query on `case_insensitive_1.table_2` is forwarded to `CaseSensitiveName.TABLEX`.
By default, when a change is made to the mapping configuration file, Trino must be restarted to load the changes. Optionally, you can set the `case-insensitive-name-mapping.refresh-period` to have Trino refresh the properties without requiring a restart:
case-insensitive-name-mapping.refresh-period=30s
Non-transactional INSERT#
The connector supports adding rows using INSERT statements. By default, data insertion is performed by writing data to a temporary table. You can skip this step to improve performance and write directly to the target table. Set the `insert.non-transactional-insert.enabled` catalog property or the corresponding `non_transactional_insert` catalog session property to `true`.
Note that with this property enabled, data can be corrupted in rare cases where exceptions occur during the insert operation. With transactions disabled, no rollback can be performed.
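For example, a sketch that enables the session property for a catalog named example before running an insert; the table names are illustrative:
SET SESSION example.non_transactional_insert = true;
INSERT INTO example.web.clicks SELECT * FROM example.web.clicks_staging;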
Querying ClickHouse#
The ClickHouse connector provides a schema for every ClickHouse database. Run `SHOW SCHEMAS` to see the available ClickHouse databases:
SHOW SCHEMAS FROM example;
If you have a ClickHouse database named `web`, run `SHOW TABLES` to view the tables in this database:
SHOW TABLES FROM example.web;
Run `DESCRIBE` or `SHOW COLUMNS` to list the columns in the `clicks` table in the `web` database:
DESCRIBE example.web.clicks;
SHOW COLUMNS FROM example.web.clicks;
Run `SELECT` to access the `clicks` table in the `web` database:
SELECT * FROM example.web.clicks;
Note
If you used a different name for your catalog properties file, use that catalog name instead of `example` in the above examples.
Table properties#
Table property usage example:
CREATE TABLE default.trino_ck (
id int NOT NULL,
birthday DATE NOT NULL,
name VARCHAR,
age BIGINT,
logdate DATE NOT NULL
)
WITH (
engine = 'MergeTree',
order_by = ARRAY['id', 'birthday'],
partition_by = ARRAY['toYYYYMM(logdate)'],
primary_key = ARRAY['id'],
sample_by = 'id'
);
The following ClickHouse table properties, documented at https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/, are supported:
| Property name | Default value | Description |
|---|---|---|
| `engine` | `Log` | Name and parameters of the engine. |
| `order_by` | (none) | Array of columns or expressions to concatenate to create the sorting key. Required if `engine` is `MergeTree`. |
| `partition_by` | (none) | Array of columns or expressions to use as nested partition keys. Optional. |
| `primary_key` | (none) | Array of columns or expressions to concatenate to create the primary key. Optional. |
| `sample_by` | (none) | An expression to use for sampling. Optional. |
Currently the connector only supports the `Log` and `MergeTree` table engines in the CREATE TABLE statement. The `ReplicatedMergeTree` engine is not yet supported.
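For example, a minimal sketch of a table using the Log engine, which does not require a sorting key; the table and column names are illustrative:
CREATE TABLE default.trino_log_example (
    id int NOT NULL,
    name VARCHAR
)
WITH (
    engine = 'Log'
);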
Type mapping#
Because Trino and ClickHouse each support types that the other does not, this connector modifies some types when reading or writing data. Data types may not map the same way in both directions between Trino and the data source. Refer to the following sections for type mapping in each direction.
ClickHouse type to Trino type mapping#
The connector maps ClickHouse types to the corresponding Trino types according to the following table:
| ClickHouse type | Trino type | Notes |
|---|---|---|
| `Int8` | `TINYINT` | |
| `Int16` | `SMALLINT` | |
| `Int32` | `INTEGER` | |
| `Int64` | `BIGINT` | |
| `UInt8` | `SMALLINT` | |
| `UInt16` | `INTEGER` | |
| `UInt32` | `BIGINT` | |
| `UInt64` | `DECIMAL(20, 0)` | |
| `Float32` | `REAL` | |
| `Float64` | `DOUBLE` | |
| `Decimal(p, s)` | `DECIMAL(p, s)` | |
| `FixedString` | `VARBINARY` | Enabling the `clickhouse.map-string-as-varchar` config property changes the mapping to `VARCHAR`. |
| `String` | `VARBINARY` | Enabling the `clickhouse.map-string-as-varchar` config property changes the mapping to `VARCHAR`. |
| `Date` | `DATE` | |
| `DateTime[(timezone)]` | `TIMESTAMP(0) [WITH TIME ZONE]` | |
| `IPv4` | `IPADDRESS` | |
| `IPv6` | `IPADDRESS` | |
| `Enum8` | `VARCHAR` | |
| `Enum16` | `VARCHAR` | |
| `UUID` | `UUID` | |
No other types are supported.
Trino type to ClickHouse type mapping#
The connector maps Trino types to the corresponding ClickHouse types according to the following table:
| Trino type | ClickHouse type | Notes |
|---|---|---|
| `BOOLEAN` | `UInt8` | |
| `TINYINT` | `Int8` | |
| `SMALLINT` | `Int16` | |
| `INTEGER` | `Int32` | |
| `BIGINT` | `Int64` | |
| `REAL` | `Float32` | |
| `DOUBLE` | `Float64` | |
| `DECIMAL(p, s)` | `Decimal(p, s)` | |
| `CHAR` | `String` | |
| `VARCHAR` | `String` | |
| `VARBINARY` | `String` | Enabling the `clickhouse.map-string-as-varchar` config property changes the mapping to `VARCHAR`. |
| `DATE` | `Date` | |
| `TIMESTAMP(0)` | `DateTime` | |
| `UUID` | `UUID` | |
No other types are supported.
Type mapping configuration properties#
The following properties can be used to configure how data types from the connected data source are mapped to Trino data types and how the metadata is cached in Trino.
| Property name | Description | Default value |
|---|---|---|
| `unsupported-type-handling` | Configure how unsupported column data types are handled: `IGNORE`, the column is not accessible; `CONVERT_TO_VARCHAR`, the column is converted to unbounded `VARCHAR`. The respective catalog session property is `unsupported_type_handling`. | `IGNORE` |
| `jdbc-types-mapped-to-varchar` | Allow forced mapping of comma separated lists of data types to convert to unbounded `VARCHAR`. | |
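For example, a sketch that exposes unsupported columns as strings instead of hiding them:
unsupported-type-handling=CONVERT_TO_VARCHAR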
SQL support#
The connector provides read and write access to data and metadata in a ClickHouse catalog. In addition to the globally available and read operation statements, the connector supports the following statements:

- INSERT
- TRUNCATE
- CREATE TABLE
- CREATE TABLE AS
- DROP TABLE
- CREATE SCHEMA
- DROP SCHEMA
- ALTER SCHEMA
ALTER SCHEMA#
The connector supports renaming a schema with the `ALTER SCHEMA RENAME` statement. `ALTER SCHEMA SET AUTHORIZATION` is not supported.
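For example, a sketch renaming the `web` schema in the `example` catalog; the new name is illustrative:
ALTER SCHEMA example.web RENAME TO web_archive;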
Performance#
The connector includes a number of performance improvements, detailed in the following sections.
Pushdown#
The connector supports pushdown for a number of operations:

- Aggregate pushdown for the following functions:
  - `avg()`
  - `count()`
  - `max()`
  - `min()`
  - `sum()`
Note
The connector performs pushdown where performance may be improved, but in order to preserve correctness an operation may not be pushed down. When pushdown of an operation may result in better performance but risks correctness, the connector prioritizes correctness.
Predicate pushdown support#
The connector does not support pushdown of any predicates on columns with textual types like `CHAR` or `VARCHAR`. This ensures correctness of results since the data source may compare strings case-insensitively.
In the following example, the predicate is not pushed down for either query since `name` is a column of type `VARCHAR`:
SELECT * FROM nation WHERE name > 'CANADA';
SELECT * FROM nation WHERE name = 'CANADA';
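To verify whether a predicate was pushed down, you can inspect the query plan with EXPLAIN; this is general Trino behavior rather than anything specific to this connector. When pushdown occurs, the predicate appears as part of the connector's table scan instead of as a separate filter above it:
EXPLAIN SELECT * FROM nation WHERE name = 'CANADA';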