
Commit 44fdc57

Remove old references (apache#290)
1 parent 874d866 commit 44fdc57

File tree

7 files changed: +106 −86 lines


docs/access-control.md

Lines changed: 4 additions & 4 deletions
@@ -160,8 +160,8 @@ includes the following users:
     create service principals. She can also create catalogs and
     namespaces and configure access control for Polaris resources.

-- **Bob:** A data engineer who uses Snowpipe Streaming (in Snowflake)
-  and Apache Spark™ connections to interact with Polaris.
+- **Bob:** A data engineer who uses Apache Spark™ to
+  interact with Polaris.

   - Alice has created a service principal for Bob. It has been
     granted the Data_engineer principal role, which in turn has been
@@ -175,8 +175,8 @@ includes the following users:
   - The Data administrator roles grant full administrative rights to
     the Silver zone catalog and Gold zone catalog.

-- **Mark:** A data scientist who uses Snowflake AI services to
-  interact with Polaris.
+- **Mark:** A data scientist who trains models with data managed
+  by Polaris.

   - Alice has created a service principal for Mark. It has been
     granted the Data_scientist principal role, which in turn has

docs/index.html

Lines changed: 31 additions & 29 deletions
Large diffs are not rendered by default.

docs/overview.md

Lines changed: 7 additions & 8 deletions
@@ -68,10 +68,10 @@ nested namespaces. Iceberg tables belong to namespaces.
 In an internal catalog, an Iceberg table is registered in Polaris, but read and written via query engines. The table data and
 metadata is stored in your external cloud storage. The table uses Polaris as the Iceberg catalog.

-If you have tables that use Snowflake as the Iceberg catalog (Snowflake-managed tables), you can sync these tables to an external
-catalog in Polaris. If you sync this catalog to Polaris, it appears as an external catalog in Polaris. The table data and
-metadata is stored in your external cloud storage. The Snowflake query engine can read from or write to these tables. However, the other query
-engines can only read from these tables.
+If you have tables housed in another Iceberg catalog, you can sync these tables to an external catalog in Polaris.
+Once synced, it appears as an external catalog in Polaris. Clients connecting to the external
+catalog can read from or write to these tables. However, clients connecting to Polaris can only
+read from these tables.

 > **Important**
 >
@@ -156,12 +156,11 @@ In the following example workflow, Bob creates an Apache Iceberg™ table na
    service connection with a service principal that has
    the privileges to perform these actions.

-2. Alice uses Snowflake to read data from Table1.
+2. Alice uses Trino to read data from Table1.

    Alice can read data from Table1 because she is using a service
-   connection with a service principal with a catalog integration that
-   has the privileges to perform this action. Alice
-   creates an unmanaged table in Snowflake to read data from Table1.
+   connection with a service principal that has the privileges to
+   perform this action.

 ![Diagram that shows an example workflow for Apache Polaris (Incubating)](img/example-workflow.svg "Example workflow for Apache Polaris (Incubating)")
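In this workflow, engines such as Spark or Trino reach Polaris over the Iceberg REST catalog protocol. A minimal sketch of the client-side Spark configuration, assuming the `http://polaris:8181/api/catalog` endpoint used in this commit's regression tests; the catalog name `polaris` and the credential placeholder are illustrative, not part of this commit:

```
# Hypothetical spark-defaults.conf fragment (names are illustrative)
spark.sql.catalog.polaris             org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.polaris.type        rest
spark.sql.catalog.polaris.uri         http://polaris:8181/api/catalog
spark.sql.catalog.polaris.credential  <client_id>:<client_secret>
```

The `credential` pair would belong to a service principal such as Bob's, so the catalog-level privileges described above are enforced by Polaris, not by the engine.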

polaris-server.yml

Lines changed: 2 additions & 6 deletions
@@ -73,10 +73,6 @@ featureConfiguration:
       - AZURE
       - FILE

-
-# Whether we want to enable Snowflake OAuth locally. Setting this to true requires
-# that you go through the setup outlined in the `README.md` file, specifically the
-# `OAuth + Snowflake: Local Testing And Then Some` section
 callContextResolver:
   type: default

@@ -162,8 +158,8 @@ logging:

   # The file to which statements will be logged.
   currentLogFilename: ./logs/polaris.log
-  # When the log file rolls over, the file will be archived to snowflake-2012-03-15.log.gz,
-  # snowflake.log will be truncated, and new statements written to it.
+  # When the log file rolls over, the file will be archived to polaris-2012-03-15.log.gz,
+  # polaris.log will be truncated, and new statements written to it.
   archivedLogFilenamePattern: ./logs/polaris-%d.log.gz
   # The maximum number of log files to archive.
   archivedFileCount: 14

polaris-service/src/test/resources/polaris-server-integrationtest.yml

Lines changed: 4 additions & 22 deletions
@@ -80,10 +80,6 @@ featureConfiguration:

 metaStoreManager:
   type: in-memory
-  # type: remote
-  # url: http://sdp-devvm-mcollado:8080
-  # type: eclipse-link # uncomment to use eclipse-link as metastore
-  #   persistence-unit: polaris

 io:
   factoryType: default
@@ -93,10 +89,6 @@ oauth2:
   tokenBroker:
     type: symmetric-key
     secret: polaris
-    # type: snowflake
-    # clientId: ${GS_POLARIS_SERVICE_CLIENT_ID}
-    # clientSecret: ${GS_POLARIS_SERVICE_CLIENT_SECRET}
-    # clientSecret2: ${GS_POLARIS_SERVICE_CLIENT_SECRET2}

 authenticator:
   class: org.apache.polaris.service.auth.DefaultPolarisAuthenticator
@@ -107,25 +99,15 @@ authenticator:

 callContextResolver:
   type: default
-  # type: snowflake
-  # account: ${SNOWFLAKE_ACCOUNT:-SNOWFLAKE}
-  # scheme: ${GS_SCHEME:-http}
-  # host: ${GS_HOST:-localhost}
-  # port: ${GS_PORT:-8080}

 realmContextResolver:
   type: default
-  # type: snowflake
-  # account: ${SNOWFLAKE_ACCOUNT:-SNOWFLAKE}
-  # scheme: ${GS_SCHEME:-http}
-  # host: ${GS_HOST:-localhost}
-  # port: ${GS_PORT:-8080}

-defaultRealm: SNOWFLAKE
+defaultRealm: POLARIS

 cors:
   allowed-origins:
-    - snowflake.com
+    - localhost

 # Logging settings.
 logging:
@@ -162,8 +144,8 @@ logging:

   # The file to which statements will be logged.
   currentLogFilename: ./logs/iceberg-rest.log
-  # When the log file rolls over, the file will be archived to snowflake-2012-03-15.log.gz,
-  # snowflake.log will be truncated, and new statements written to it.
+  # When the log file rolls over, the file will be archived to polaris-2012-03-15.log.gz,
+  # polaris.log will be truncated, and new statements written to it.
   archivedLogFilenamePattern: ./logs/iceberg-rest-%d.log.gz
   # The maximum number of log files to archive.
   archivedFileCount: 14

regtests/t_pyspark/src/iceberg_spark.py

Lines changed: 1 addition & 1 deletion
@@ -35,7 +35,7 @@ class IcebergSparkSession:
         polaris_url="http://polaris:8181/api/catalog",
         catalog_name="catalog_name"
     ) as spark:
-        spark.sql(f"USE snowflake.{hybrid_executor.database}.{hybrid_executor.schema}")
+        spark.sql(f"USE catalog.{hybrid_executor.database}.{hybrid_executor.schema}")
         table_list = spark.sql("SHOW TABLES").collect()
     """
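The corrected docstring interpolates the database and schema of an executor object into the `USE` statement. As a plain-Python sketch of that interpolation, with `hybrid_executor` replaced by a stand-in (in the regression tests it comes from the test harness):

```python
# Sketch of the f-string interpolation in the corrected docstring.
# SimpleNamespace is a stand-in for the real hybrid_executor object.
from types import SimpleNamespace

hybrid_executor = SimpleNamespace(database="db1", schema="schema1")
query = f"USE catalog.{hybrid_executor.database}.{hybrid_executor.schema}"
assert query == "USE catalog.db1.schema1"
```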

spec/index.yaml

Lines changed: 57 additions & 16 deletions
@@ -168,16 +168,16 @@ tags:
     as the Iceberg catalog.

-    If you have tables that use Snowflake as the Iceberg catalog
-    (Snowflake-managed tables), you can sync these tables to an external
-    catalog in Polaris. If you sync this catalog to Polaris, it appears as an
-    external catalog in Polaris. The table data and
-    metadata is stored in your external cloud storage. The Snowflake query
-    engine can read from or write to these tables. However, the other query
-    engines can only read from these tables.
+    If you have tables housed in another Iceberg catalog, you can sync these
+    tables to an external catalog in Polaris.
+    Once synced, it appears as an external catalog in Polaris.
+    Clients connecting to the external
+    catalog can read from or write to these tables. However, clients
+    connecting to Polaris can only
+    read from these tables.

     > **Important**
@@ -348,12 +348,11 @@ tags:
        service connection with a service principal that has
        the privileges to perform these actions.

-    2. Alice uses Snowflake to read data from Table1.
+    2. Alice uses Trino to read data from Table1.

        Alice can read data from Table1 because she is using a service
-       connection with a service principal with a catalog integration that
-       has the privileges to perform this action. Alice
-       creates an unmanaged table in Snowflake to read data from Table1.
+       connection with a service principal that has the privileges to
+       perform this action.

     ![Diagram that shows an example workflow for Apache Polaris
     (Incubating)](img/example-workflow.svg "Example workflow for Apache
@@ -918,8 +917,8 @@ tags:
        create service principals. She can also create catalogs and
        namespaces and configure access control for Polaris resources.

-    - **Bob:** A data engineer who uses Snowpipe Streaming (in Snowflake)
-      and Apache Spark™ connections to interact with Polaris.
+    - **Bob:** A data engineer who uses Apache Spark™ to
+      interact with Polaris.

     - Alice has created a service principal for Bob. It has been
       granted the Data_engineer principal role, which in turn has been
@@ -933,8 +932,8 @@ tags:
     - The Data administrator roles grant full administrative rights to
       the Silver zone catalog and Gold zone catalog.

-    - **Mark:** A data scientist who uses Snowflake AI services to
-      interact with Polaris.
+    - **Mark:** A data scientist who trains models with data managed
+      by Polaris.

     - Alice has created a service principal for Mark. It has been
       granted the Data_scientist principal role, which in turn has
@@ -3106,7 +3105,16 @@ paths:
       summary: Sends a notification to the table
       operationId: sendNotification
       requestBody:
-        description: The request containing the notification to be sent
+        description: >-
+          The request containing the notification to be sent.
+
+          For each table, Polaris will reject any notification where the
+          timestamp in the request body is older than or equal to the most
+          recent time Polaris has already processed for the table. The
+          responsibility of ensuring the correct order of timestamps for a
+          sequence of notifications lies with the caller of the API. This
+          includes managing potential clock skew or inconsistencies when
+          notifications are sent from multiple sources.
         content:
           application/json:
             schema:
@@ -3136,6 +3144,28 @@ paths:
             TableToLoadDoesNotExist:
               $ref: >-
                 #/components/examples/Apache_Iceberg_REST_Catalog_API_NoSuchTableError
+        '409':
+          description: >-
+            Conflict - The timestamp of the received notification is older than
+            or equal to the most recent timestamp Polaris has already processed
+            for the specified table.
+          content:
+            application/json:
+              schema:
+                $ref: >-
+                  #/components/schemas/Apache_Iceberg_REST_Catalog_API_IcebergErrorResponse
+              example:
+                summary: >-
+                  The timestamp of the received notification is older than or
+                  equal to the most recent timestamp Polaris has already
+                  processed for the specified table.
+                value:
+                  error:
+                    message: >-
+                      A notification with a newer timestamp has been admitted
+                      for table
+                    type: AlreadyExistsException
+                    code: 409
         '419':
           $ref: >-
             #/components/responses/Apache_Iceberg_REST_Catalog_API_AuthenticationTimeoutResponse
@@ -6044,6 +6074,7 @@ components:
     Apache_Iceberg_REST_Catalog_API_NotificationRequest:
       required:
         - notification-type
+        - payload
       properties:
         notification-type:
           $ref: >-
@@ -7016,6 +7047,16 @@ components:
           - bar
         updates:
           owner: Raoul
+    Apache_Iceberg_REST_Catalog_API_OutOfOrderNotificationError:
+      summary: >-
+        The timestamp of the received notification is older than or equal to the
+        most recent timestamp Polaris has already processed for the specified
+        table.
+      value:
+        error:
+          message: A notification with a newer timestamp has been admitted for table
+          type: AlreadyExistsException
+          code: 409
 x-tagGroups:
   - name: Apache Polaris (Incubating) Documentation
     tags:
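The timestamp-ordering rule spelled out in the new `requestBody` description and `'409'` response can be sketched in a few lines. The `NotificationGate` class below is purely illustrative, not part of the Polaris codebase; it only demonstrates the contract the spec describes:

```python
# Illustrative sketch of the notification-ordering rule: a notification is
# rejected (HTTP 409 Conflict) when its timestamp is older than or equal to
# the most recent timestamp already processed for that table. Class and
# method names are hypothetical.

class NotificationGate:
    def __init__(self):
        # table identifier -> last admitted timestamp
        self._latest = {}

    def admit(self, table, timestamp):
        """Return True if admitted; False means the caller would get a 409."""
        last = self._latest.get(table)
        if last is not None and timestamp <= last:
            return False  # out of order: older or equal timestamp
        self._latest[table] = timestamp
        return True

gate = NotificationGate()
assert gate.admit("ns.table1", 100) is True
assert gate.admit("ns.table1", 100) is False  # equal timestamp rejected
assert gate.admit("ns.table1", 99) is False   # older timestamp rejected
assert gate.admit("ns.table1", 101) is True
assert gate.admit("ns.table2", 50) is True    # tracking is per table
```

Note that the state is tracked per table, which is why the spec places the burden of ordering (including clock skew across multiple senders) on the caller rather than on Polaris.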
