An Architect entered the following commands in sequence:

USER1 cannot find the table.
Which of the following commands does the Architect need to run for USER1 to find the tables using the Principle of Least Privilege? (Choose two.)
A. GRANT ROLE PUBLIC TO ROLE INTERN;
B. GRANT USAGE ON DATABASE SANDBOX TO ROLE INTERN;
C. GRANT USAGE ON SCHEMA SANDBOX.PUBLIC TO ROLE INTERN;
D. GRANT OWNERSHIP ON DATABASE SANDBOX TO USER INTERN;
E. GRANT ALL PRIVILEGES ON DATABASE SANDBOX TO ROLE INTERN;
According to the Principle of Least Privilege, the Architect should grant only the minimum privileges necessary for USER1 to find the tables in the SANDBOX database.
USER1 needs the USAGE privilege on both the SANDBOX database and the SANDBOX.PUBLIC schema to access the tables in the PUBLIC schema. Therefore, commands B and C are the correct ones to run.
Command A is incorrect because the PUBLIC role is automatically granted to every user and role in the account, and it has no privileges on the SANDBOX database by default.
Command D is incorrect because it would transfer ownership of the SANDBOX database to USER1, which is unnecessary and violates the Principle of Least Privilege.
Command E is incorrect because it would grant every possible privilege on the SANDBOX database, which is also unnecessary and violates the Principle of Least Privilege.
References: Snowflake - Principle of Least Privilege; Snowflake - Access Control Privileges; Snowflake - Public Role; Snowflake - Ownership and Grants
A company’s table, employees, was accidentally replaced with a new version.

How can the original table be recovered with the LEAST operational overhead?
A. Use Time Travel to recover the data using this command:
SELECT *
FROM employees
BEFORE (STATEMENT => '01a5c8b3-0601-ad2b-0067-a503000a1312');
B. Use Time Travel with a timestamp to recover the data using this command:
SELECT *
FROM employees
AT (TIMESTAMP => '2022-07-22 16:35:00.000 -0700'::TIMESTAMP_TZ);
C. Revert to the original employees table using this command:
UNDROP TABLE employees;
D. Rename the new employees table and undrop the original table using these commands:
ALTER TABLE employees RENAME TO employees_bad;
UNDROP TABLE employees;
This scenario tests understanding of Snowflake Time Travel and operational efficiency, both key topics in the SnowPro Architect exam. When a table is replaced using CREATE OR REPLACE TABLE, Snowflake treats the original table version as dropped but still recoverable within the Time Travel retention period. The goal is to recover the original data with the least effort and risk.
Option A is the most efficient solution. Using Time Travel with the BEFORE (STATEMENT => …) clause allows the Architect to query the exact state of the table immediately before the replacement occurred. This approach precisely targets the moment before the destructive operation, avoids guessing timestamps, and minimizes operational steps. It is also highly accurate, which is critical in production recovery scenarios.
Option B relies on a timestamp, which introduces risk if the timestamp is incorrect or ambiguous. Option C is invalid because UNDROP TABLE only works when a table has been explicitly dropped, not replaced. Option D introduces unnecessary steps and operational overhead, increasing the risk of mistakes.
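To actually restore the table (rather than just query its prior state), the BEFORE clause can be combined with zero-copy cloning. A minimal sketch, assuming the query ID of the destructive CREATE OR REPLACE statement is known (the ID below is the one from the example; the restored table name is hypothetical):

```sql
-- Clone the table as it existed immediately before the destructive statement.
-- Substitute the actual query ID of the CREATE OR REPLACE TABLE statement
-- (for example, retrieved from QUERY_HISTORY).
CREATE TABLE employees_restored CLONE employees
  BEFORE (STATEMENT => '01a5c8b3-0601-ad2b-0067-a503000a1312');
```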
A healthcare company wants to share data with a medical institute. The institute is running a Standard edition of Snowflake; the healthcare company is running a Business Critical edition.
How can this data be shared?
The healthcare company will need to change the institute’s Snowflake edition in the accounts panel.
By default, sharing is supported from a Business Critical Snowflake edition to a Standard edition.
Contact Snowflake and they will execute the share request for the healthcare company.
Set the share_restriction parameter on the shared object to false.
By default, Snowflake does not allow sharing data from a Business Critical edition account to a non-Business Critical edition account. This is because Business Critical edition provides enhanced security and data protection features that are not available in lower editions. However, this restriction can be overridden by setting the SHARE_RESTRICTIONS parameter to false when adding the consumer account to the share, which allows the data provider to explicitly permit sharing data with lower-edition accounts. Note that this parameter can only be set by the data provider, not the data consumer. Also, setting this parameter to false may reduce the level of security and data protection for the shared data.
Enable Data Share:Business Critical Account to Lower Edition
Sharing Is Not Allowed From An Account on BUSINESS CRITICAL Edition to an Account On A Lower Edition
Snowflake Editions | Snowflake Documentation
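As an illustration, the override is applied when the lower-edition consumer account is added to the share. A sketch with hypothetical share and account names:

```sql
-- Hypothetical share and consumer account identifiers.
ALTER SHARE healthcare_share
  ADD ACCOUNTS = medical_institute_account
  SHARE_RESTRICTIONS = false;
```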
Which feature provides the capability to define an alternate cluster key for a table with an existing cluster key?
External table
Materialized view
Search optimization
Result cache
A materialized view is a feature that provides the capability to define an alternate cluster key for a table with an existing cluster key. A materialized view is a pre-computed result set that is stored in Snowflake and can be queried like a regular table. A materialized view can have a different cluster key than the base table, which can improve the performance and efficiency of queries that filter on that key. A materialized view can also include aggregations and filters on the base table data (joins are not supported). Snowflake maintains materialized views automatically in the background as the underlying data in the base table changes; no manual refresh is required.
Materialized Views | Snowflake Documentation
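A minimal sketch of this pattern, using hypothetical table and column names: the base table is clustered on one key, and a materialized view over it is clustered on another:

```sql
-- Base table clustered on one key.
CREATE TABLE orders (order_id NUMBER, customer_id NUMBER, order_date DATE)
  CLUSTER BY (order_id);

-- Materialized view over the same data, clustered on an alternate key.
CREATE MATERIALIZED VIEW orders_by_date
  CLUSTER BY (order_date)
AS
  SELECT order_id, customer_id, order_date
  FROM orders;
```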
What are characteristics of the use of transactions in Snowflake? (Select TWO).
A. Explicit transactions can contain DDL, DML, and query statements.
B. The autocommit setting can be changed inside a stored procedure.
C. A transaction can be started explicitly by executing a BEGIN WORK statement and ended explicitly by executing a COMMIT WORK statement.
D. A transaction can be started explicitly by executing a BEGIN TRANSACTION statement and ended explicitly by executing an END TRANSACTION statement.
E. Explicit transactions should contain only DML statements and query statements. All DDL statements implicitly commit active transactions.
Comprehensive and Detailed Explanation From Exact Extract:
Snowflake supports both implicit and explicit transactions. However, only specific statement types are allowed within transactions.
Option C:
This is correct. In Snowflake, transactions can be started with any of the following: BEGIN, BEGIN WORK, or START TRANSACTION. Transactions can be ended using COMMIT, COMMIT WORK, or ROLLBACK.
Official Extract:
"You can explicitly start a transaction using the BEGIN, BEGIN WORK, or START TRANSACTION statements and end it using the COMMIT, COMMIT WORK, or ROLLBACK statements."
Source:Snowflake SQL Transactions
Option E:
This is correct. Transactions should only include DML statements (INSERT, UPDATE, DELETE, MERGE) and queries. DDL statements (CREATE, ALTER, DROP) automatically commit and cannot be part of an explicit transaction block.
Official Extract:
"A transaction can contain only DML statements and queries. Any DDL statement implicitly commits the current transaction."
Source:Snowflake SQL Transactions
Option A:
Incorrect. DDL statements are not allowed inside explicit transactions. If used, they trigger an implicit commit.
Option B:
Incorrect. The autocommit setting cannot be modified within a stored procedure. Autocommit is session-level and not dynamically changeable within procedural logic.
Option D:
Incorrect. Snowflake does not support END TRANSACTION as a valid SQL command. The correct ending statement for a transaction is COMMIT or ROLLBACK.
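The correct patterns above can be sketched as follows, with a hypothetical table name; DML belongs inside the explicit block, while DDL would implicitly commit it:

```sql
-- Explicit transaction containing only DML and queries.
BEGIN WORK;
INSERT INTO accounts (id, balance) VALUES (3, 0);
UPDATE accounts SET balance = balance + 100 WHERE id = 3;
COMMIT WORK;

-- A DDL statement such as the following would implicitly commit
-- any open transaction, so it should be run outside the block:
-- ALTER TABLE accounts ADD COLUMN last_updated TIMESTAMP;
```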
Which SQL ALTER command will MAXIMIZE memory and compute resources for a Snowpark stored procedure when executed on the snowpark_opt_wh warehouse?
ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 1;
ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 2;
ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 8;
ALTER WAREHOUSE snowpark_opt_wh SET MAX_CONCURRENCY_LEVEL = 16;
Snowpark workloads are often memory- and compute-intensive, especially when executing complex transformations, large joins, or machine learning logic inside stored procedures. In Snowflake, the MAX_CONCURRENCY_LEVEL warehouse parameter controls how many concurrent queries can run on a single cluster of a virtual warehouse. Lowering concurrency increases the amount of compute and memory available to each individual query.
Setting MAX_CONCURRENCY_LEVEL = 1 ensures that only one query can execute at a time on the warehouse cluster, allowing that query to consume the maximum possible share of CPU, memory, and I/O resources. This is the recommended configuration when the goal is to optimize performance for a single Snowpark job rather than maximizing throughput for many users. Higher concurrency levels would divide resources across multiple queries, reducing per-query performance and potentially causing spilling to remote storage.
For SnowPro Architect candidates, this question reinforces an important cost and performance tradeoff: concurrency tuning is a powerful lever. When running batch-oriented or compute-heavy Snowpark workloads, architects should favor lower concurrency to maximize per-query resources, even if that means fewer concurrent workloads.
=========
QUESTION NO: 12 [Cost Control and Resource Management]
An Architect executes the following query:
SELECT query_hash,
COUNT(*) AS query_count,
SUM(QH.EXECUTION_TIME) AS total_execution_time,
SUM((QH.EXECUTION_TIME / (1000 * 60 * 60)) * 8) AS c
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY QH
WHERE warehouse_name = 'WH_L'
AND DATE_TRUNC('day', start_time) >= CURRENT_DATE() - 3
GROUP BY query_hash
ORDER BY c DESC
LIMIT 10;
What information does this query provide? (Select TWO).
A. It shows the total execution time and credit estimates for the 10 most expensive individual queries executed on WH_L over the last 3 days.
B. It shows the total execution time and credit estimates for the 10 most expensive query groups (identical or similar queries) executed on WH_L over the last 3 days.
C. It shows the total execution time and credit estimates for the 10 most frequently run query groups executed on WH_L over the last 3 days.
D. It calculates relative cost by converting execution time to minutes and multiplying by credits used.
E. It calculates relative cost by converting execution time to hours and multiplying by credits used.
Answer: B, E
This query groups results by QUERY_HASH, which represents logically identical SQL statements. As a result, the aggregation is performed at the query group level, not at the individual execution level. This allows architects to identify patterns where the same query (or same logical SQL) repeatedly consumes a large amount of compute (Answer B).
The cost calculation converts execution time from milliseconds to hours by dividing by (1000 * 60 * 60) and then multiplies the result by 8, which represents the hourly credit consumption of the WH_L warehouse size. This provides a relative estimate of credit usage per query group, not an exact billing value but a useful approximation for cost analysis (Answer E).
The query does not identify the most frequently executed queries; although COUNT(*) is included, the ordering is done by calculated cost (c), not by frequency. This type of analysis is directly aligned with SnowPro Architect responsibilities, helping architects optimize workloads, refactor expensive query patterns, and right-size warehouses to control costs.
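The unit conversion in the cost column can be checked in isolation. For example, 7,200,000 ms of execution time on an 8-credit-per-hour warehouse works out to roughly 16 credits:

```sql
-- 7,200,000 ms / (1000 * 60 * 60) = 2 hours; 2 hours * 8 credits/hour = 16.
SELECT (7200000 / (1000 * 60 * 60)) * 8 AS estimated_credits;
```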
=========
QUESTION NO: 13 [Architecting Snowflake Solutions]
An Architect is designing a disaster recovery plan for a global fraud reporting system. The plan must support near real-time systems using Snowflake data, operate near regional centers with fully redundant failover, and must not be publicly accessible.
Which steps must the Architect take? (Select THREE).
A. Create multiple replicating Snowflake Standard edition accounts.
B. Establish one Snowflake account using a Business Critical edition or higher.
C. Establish multiple Snowflake accounts in each required region with independent data sets.
D. Set up Secure Data Sharing among all Snowflake accounts in the organization.
E. Create a Snowflake connection object.
F. Create a failover group for the fraud data for each regional account.
Answer: B, C, F
Mission-critical, near real-time systems with strict availability and security requirements require advanced Snowflake features. Business Critical edition (or higher) is required to support failover groups and cross-region replication with higher SLA guarantees and compliance capabilities (Answer B). To meet regional proximity and redundancy requirements, multiple Snowflake accounts must be deployed in each required region, ensuring independence and isolation between regional environments (Answer C).
Failover groups are the core Snowflake mechanism for disaster recovery. They replicate selected databases, schemas, and roles across accounts and allow controlled promotion of secondary accounts to primary during failover events (Answer F). Secure Data Sharing alone does not provide DR or replication, and connection objects are unrelated to availability or redundancy.
This design aligns with SnowPro Architect best practices for multi-region disaster recovery, enabling low-latency regional access, controlled failover, and strong isolation without exposing systems to the public internet.
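A minimal sketch of a failover group definition, with hypothetical database, organization, and account names:

```sql
-- On the source (primary) account: replicate the fraud database and roles
-- to a secondary account, refreshing every 10 minutes.
CREATE FAILOVER GROUP fraud_fg
  OBJECT_TYPES = (DATABASES, ROLES)
  ALLOWED_DATABASES = (fraud_db)
  ALLOWED_ACCOUNTS = (myorg.dr_account_east)
  REPLICATION_SCHEDULE = '10 MINUTE';
```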
=========
QUESTION NO: 14 [Snowflake Data Engineering]
What transformations are supported in the following SQL statement? (Select THREE).
CREATE PIPE … AS
COPY INTO …
FROM ( … )
A. Data can be filtered by an optional WHERE clause.
B. Columns can be reordered.
C. Columns can be omitted.
D. Type casts are supported.
E. Incoming data can be joined with other tables.
F. The ON_ERROR = ABORT_STATEMENT command can be used.
Answer: A, B, D
Snowflake’s COPY INTO statement (including when used with Snowpipe) supports a limited but useful set of transformations. Data can be filtered using a WHERE clause when loading from a staged SELECT statement, enabling simple row-level filtering (Answer A). Columns can also be reordered by explicitly selecting fields in a different order than they appear in the source (Answer B). Additionally, type casting is supported, allowing raw data to be cast into target column data types during ingestion (Answer D).
However, COPY INTO does not support joins with other tables; it is designed for ingestion, not complex transformations. Columns can be omitted implicitly by not selecting them, but this is not considered a transformation feature in the context of Snowpipe exam questions. The ON_ERROR option is an error-handling configuration, not a transformation.
SnowPro Architect candidates are expected to recognize that COPY INTO and Snowpipe are ingestion-focused tools. More complex transformations should be handled downstream using streams and tasks, dynamic tables, or transformation frameworks like dbt.
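A sketch of such transformations in a pipe definition, with hypothetical stage, table, and column names, showing column reordering and type casts:

```sql
-- Hypothetical pipe: reorders columns and casts types while loading.
CREATE PIPE sales_pipe AS
  COPY INTO sales_tbl (sale_id, amount, sale_date)
  FROM (
    SELECT $1::NUMBER,          -- cast the first file column
           $3::NUMBER(10,2),    -- reorder: third file column loads second
           $2::DATE
    FROM @sales_stage
  );
```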
=========
QUESTION NO: 15 [Security and Access Management]
A company wants to share selected product and sales tables with global partners. The partners are not Snowflake customers but do have access to AWS.
Requirements:
Data access must be governed.
Each partner should only have access to data from its respective region.
What is the MOST secure and cost-effective solution?
A. Create reader accounts and share custom secure views.
B. Create an outbound share and share custom secure views.
C. Export secure views to each partner’s Amazon S3 bucket.
D. Publish secure views on the Snowflake Marketplace.
Answer: A
When sharing data with partners who are not Snowflake customers, Snowflake reader accounts provide the most secure and cost-effective solution. Reader accounts allow data providers to host and govern access within their own Snowflake environment while allowing consumers to query shared data without owning a Snowflake account (Answer A). This ensures strong governance, centralized billing, and no data movement.
By sharing custom secure views, the company can enforce row-level and column-level security so that each partner only sees data from its authorized region. Outbound shares require the consumer to have their own Snowflake account, which is not the case here. Exporting data to S3 introduces unnecessary data duplication, security risk, and operational overhead. Snowflake Marketplace is designed for broad distribution, not partner-specific regional restrictions.
For the SnowPro Architect exam, this question highlights best practices in secure data sharing, governance, and cost control when collaborating with external, non-Snowflake partners.
An Architect has a table called leader_follower that contains a single column named JSON. The table has one row with the following structure:
{
"activities": [
{ "activityNumber": 1, "winner": 5 },
{ "activityNumber": 2, "winner": 4 }
],
"follower": {
"name": { "default": "Matt" },
"number": 4
},
"leader": {
"name": { "default": "Adam" },
"number": 5
}
}
Which query will produce the following results?
ACTIVITY_NUMBER | WINNER_NAME
1               | Adam
2               | Matt
A.
SELECT lf.json:activities.activityNumber AS activity_number,
IFF(
lf.json:activities.activityNumber = lf.json:leader.number,
lf.json:leader.name.default,
lf.json:follower.name.default
)::VARCHAR
FROM leader_follower lf;
B.
SELECT
value:activityNumber AS activity_number,
IFF(
value:winner = lf.json:leader.number,
lf.json:leader.name.default,
lf.json:follower.name.default
)::VARCHAR AS winner_name
FROM leader_follower lf,
LATERAL FLATTEN(input => json:activities) p;
C.
SELECT
value:activityNumber AS activity_number,
IFF(
value:winner = lf.json:leader.number,
lf.json:leader,
lf.json:follower
)::VARCHAR AS winner_name
FROM leader_follower lf,
LATERAL FLATTEN(input => json:activities) p;
This question tests several core Snowflake semi-structured data concepts that are explicitly part of the SnowPro Architect exam scope: working with VARIANT data, array handling, and the use of LATERAL FLATTEN. The activities element in the JSON structure is an array, meaning it must be flattened before individual attributes such as activityNumber and winner can be accessed. Option A is invalid because it attempts to directly reference fields inside an array without flattening it.
Option B correctly uses LATERAL FLATTEN on json:activities, which produces one row per activity. Each flattened array element is exposed through the value column (the flatten output is aliased p), allowing access to activityNumber and winner. The IFF expression then compares the activity’s winner value with leader.number. When they match, the query returns leader.name.default; otherwise, it returns follower.name.default. Casting the result to VARCHAR ensures a proper scalar output.
Option C is incorrect because it attempts to return full JSON objects (leader or follower) rather than the nested name.default value. Option D uses OUTER => TRUE, which is unnecessary in this case because the activities array is guaranteed to exist; while it would still work, Snowflake exam questions typically expect the most precise and minimal correct solution.
A company needs to have the following features available in its Snowflake account:
1. Support for Multi-Factor Authentication (MFA)
2. A minimum of 2 months of Time Travel availability
3. Database replication in between different regions
4. Native support for JDBC and ODBC
5. Customer-managed encryption keys using Tri-Secret Secure
6. Support for Payment Card Industry Data Security Standards (PCI DSS)
In order to provide all the listed services, what is the MINIMUM Snowflake edition that should be selected during account creation?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
According to the Snowflake documentation, the Business Critical edition offers the following features that are relevant to the question:
Support for Multi-Factor Authentication (MFA): a standard feature available in all Snowflake editions.
A minimum of 2 months of Time Travel availability: an Enterprise edition feature that allows users to access historical data for up to 90 days.
Database replication between different regions: an Enterprise edition feature that enables users to replicate databases across different regions or cloud platforms.
Native support for JDBC and ODBC: a standard feature available in all Snowflake editions.
Customer-managed encryption keys using Tri-Secret Secure: a Business Critical feature that provides enhanced security and data protection by allowing customers to manage their own encryption keys.
Support for Payment Card Industry Data Security Standards (PCI DSS): a Business Critical feature that ensures compliance with PCI DSS regulations for handling sensitive cardholder data.
Therefore, the minimum Snowflake edition that should be selected during account creation to provide all the listed services is the Business Critical edition.
An Architect wants to stream website logs near real time to Snowflake using the Snowflake Connector for Kafka.
What characteristics should the Architect consider regarding the different ingestion methods? (Select TWO).
A. Snowpipe Streaming is the default ingestion method.
B. Snowpipe Streaming supports schema detection.
C. Snowpipe has lower latency than Snowpipe Streaming.
D. Snowpipe Streaming automatically flushes data every one second.
E. Snowflake can handle jumps or resetting offsets by default.
When using the Snowflake Connector for Kafka, architects must understand the behavior differences between Snowpipe (file-based) and Snowpipe Streaming. Snowpipe Streaming is optimized for low-latency ingestion and works by continuously sending records directly into Snowflake-managed channels rather than staging files. One important characteristic is that Snowpipe Streaming automatically flushes buffered records at short, fixed intervals (approximately every second), ensuring near real-time data availability (Answer D).
Another key consideration is offset handling. The Snowflake Connector for Kafka is designed to tolerate Kafka offset jumps or resets, such as those caused by topic reprocessing or consumer group changes. Snowflake can safely ingest records without corrupting state, relying on Kafka semantics and connector metadata to maintain consistency (Answer E).
Snowpipe Streaming is not always the default ingestion method; configuration determines whether file-based Snowpipe or Streaming is used. Schema detection is not supported in Snowpipe Streaming. Traditional Snowpipe does not offer lower latency than Snowpipe Streaming. For the SnowPro Architect exam, understanding ingestion latency, buffering behavior, and fault tolerance is essential when designing streaming architectures.
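As an illustration, the ingestion method and flush interval are controlled by connector properties. A fragment with assumed values (property names per the Snowflake Connector for Kafka):

```properties
# Select the streaming ingestion path instead of file-based Snowpipe.
snowflake.ingestion.method=SNOWPIPE_STREAMING
# Buffer flush interval in seconds.
buffer.flush.time=1
```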
=========
QUESTION NO: 57 [Snowflake Data Engineering]
An Architect wants to create an externally managed Iceberg table in Snowflake.
What parameters are required? (Select THREE).
A. External volume
B. Storage integration
C. External stage
D. Data file path
E. Catalog integration
F. Metadata file path
Answer: A, E, F
Externally managed Iceberg tables in Snowflake rely on external systems for metadata and storage management. An external volume is required to define and manage access to the underlying cloud storage where the Iceberg data files reside (Answer A). A catalog integration is required so Snowflake can interact with the external Iceberg catalog (such as AWS Glue or other supported catalogs) that manages table metadata (Answer E).
Additionally, Snowflake must know the location of the Iceberg metadata files (the Iceberg metadata JSON), which is provided via the metadata file path parameter (Answer F). This allows Snowflake to read schema and snapshot information maintained externally.
An external stage is not required for Iceberg tables, as Snowflake accesses the data directly through the external volume. A storage integration is used for stages, not for Iceberg tables. The data file path is derived from metadata and does not need to be specified explicitly. This question tests SnowPro Architect understanding of modern open table formats and Snowflake’s Iceberg integration model.
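A minimal sketch of the three required parameters, with hypothetical object names (the exact catalog-related options depend on the catalog integration type):

```sql
-- Externally managed Iceberg table: Snowflake reads metadata via the
-- catalog integration and data via the external volume.
CREATE ICEBERG TABLE customer_iceberg
  EXTERNAL_VOLUME = 'iceberg_ext_vol'
  CATALOG = 'my_catalog_int'
  METADATA_FILE_PATH = 'path/to/v2.metadata.json';
```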
=========
QUESTION NO: 58 [Security and Access Management]
A company stores customer data in Snowflake and must protect Personally Identifiable Information (PII) to meet strict regulatory requirements.
What should an Architect do?
A. Use row-level security to mask PII data.
B. Use tag-based masking policies for columns containing PII.
C. Create secure views for PII data and grant access as needed.
D. Separate PII into different tables and grant access as needed.
Answer: B
Tag-based masking policies provide a scalable and centralized way to protect PII across many tables and schemas (Answer B). By tagging columns that contain PII and associating masking policies with those tags, Snowflake automatically enforces masking rules wherever the tagged columns appear. This approach reduces administrative overhead and ensures consistent enforcement as schemas evolve.
Row access policies control row visibility, not column masking. Secure views and table separation can protect data but introduce significant maintenance complexity and do not scale well across large environments. Snowflake best practices—and the SnowPro Architect exam—emphasize tag-based governance for sensitive data.
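A compact sketch of the pattern, with hypothetical tag, policy, and table names: tag the PII columns once, and the policy follows the tag:

```sql
-- Hypothetical tag, policy, and table names.
CREATE TAG pii;

CREATE MASKING POLICY mask_pii AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
       ELSE '***MASKED***' END;

-- Associate the policy with the tag; any STRING column carrying the tag
-- is masked automatically.
ALTER TAG pii SET MASKING POLICY mask_pii;

ALTER TABLE customers MODIFY COLUMN email SET TAG pii = 'email';
```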
=========
QUESTION NO: 59 [Security and Access Management]
An Architect created a data share and wants to verify that only specific records in secure views are visible to consumers.
What is the recommended validation method?
A. Create reader accounts and log in as consumers.
B. Create a row access policy and assign it to the share.
C. Set the SIMULATED_DATA_SHARING_CONSUMER session parameter.
D. Alter the share to impersonate a consumer account.
Answer: C
Snowflake provides the SIMULATED_DATA_SHARING_CONSUMER session parameter to allow providers to test how shared data appears to specific consumer accounts without logging in as those consumers (Answer C). This feature enables secure, efficient validation of row-level and column-level filtering logic implemented through secure views.
Creating reader accounts is unnecessary and operationally heavy. Row access policies are part of access control design, not validation. Altering a share does not provide impersonation capabilities. This question tests SnowPro Architect familiarity with governance validation tools in Secure Data Sharing scenarios.
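A sketch of the validation flow, with hypothetical consumer account, database, and view names:

```sql
-- Query secure views as the specified consumer would see them.
ALTER SESSION SET SIMULATED_DATA_SHARING_CONSUMER = 'partner_account';
SELECT * FROM shared_db.public.regional_sales_v;
-- Return to the normal provider view.
ALTER SESSION UNSET SIMULATED_DATA_SHARING_CONSUMER;
```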
=========
QUESTION NO: 60 [Architecting Snowflake Solutions]
Which requirements indicate that a multi-account Snowflake strategy should be used? (Select TWO).
A. A requirement to use different Snowflake editions.
B. A requirement for easy object promotion using zero-copy cloning.
C. A requirement to use Snowflake in a single cloud or region.
D. A requirement to minimize complexity of changing database names across environments.
E. A requirement to use RBAC to govern DevOps processes across environments.
Answer: A, B
A multi-account Snowflake strategy is appropriate when environments have fundamentally different requirements. Using different Snowflake editions (for example, Business Critical for production and Enterprise for non-production) requires separate accounts because edition is an account-level property (Answer A).
Zero-copy cloning is frequently used for fast environment refresh and object promotion, but cloning only works within a single account. To promote data between environments cleanly, many organizations use separate accounts combined with replication or sharing strategies, making multi-account design relevant when environment isolation and promotion workflows are required (Answer B).
Single-region usage, minimizing database name changes, and RBAC governance can all be handled within a single account. This question reinforces SnowPro Architect principles around environment isolation, governance, and account-level design decisions.
Which security, governance, and data protection features require, at a MINIMUM, the Business Critical edition of Snowflake? (Choose two.)
A. Extended Time Travel (up to 90 days)
B. Customer-managed encryption keys through Tri-Secret Secure
C. Periodic rekeying of encrypted data
D. AWS, Azure, or Google Cloud private connectivity to Snowflake
E. Federated authentication and SSO
According to the SnowPro Advanced: Architect documents and learning resources, the security, governance, and data protection features that require, at a minimum, the Business Critical edition of Snowflake are:
Customer-managed encryption keys through Tri-Secret Secure. This feature combines a customer-managed key (held in the customer's cloud key management service) with a Snowflake-managed key to create a composite master key that protects data at rest. This provides an additional layer of security and control over the data encryption and decryption process.
Periodic rekeying of encrypted data. This feature allows the encryption keys for data at rest in Snowflake to be rotated periodically, using either Snowflake-managed keys or customer-managed keys. This enhances security by reducing the risk of key compromise or exposure.
The other options do not require the Business Critical edition of Snowflake. Option A is incorrect because extended Time Travel (up to 90 days) is available with the Enterprise edition of Snowflake. Option D is incorrect because AWS, Azure, or Google Cloud private connectivity to Snowflake is available with the Standard edition of Snowflake. Option E is incorrect because federated authentication and SSO are available with the Standard edition of Snowflake.
References: Tri-Secret Secure | Snowflake Documentation; Periodic Rekeying of Encrypted Data | Snowflake Documentation; Snowflake Editions | Snowflake Documentation; Snowflake Network Policies | Snowflake Documentation; Configuring Federated Authentication and SSO | Snowflake Documentation
How do Snowflake databases that are created from shares differ from standard databases that are not created from shares? (Choose three.)
A. Shared databases are read-only.
B. Shared databases must be refreshed in order for new data to be visible.
C. Shared databases cannot be cloned.
D. Shared databases are not supported by Time Travel.
E. Shared databases will have the PUBLIC or INFORMATION_SCHEMA schemas without explicitly granting these schemas to the share.
F. Shared databases can also be created as transient databases.
According to the SnowPro Advanced: Architect documents and learning resources, Snowflake databases created from shares differ from standard databases in the following ways:
Shared databases are read-only. The data consumers who access a shared database cannot modify or delete the data or the objects in it. The data provider who shares the database retains full control over the data and the objects, and can grant or revoke privileges on them.
Shared databases cannot be cloned. Consumers cannot create a copy of a shared database or its objects. The provider can clone the source database or its objects, but the clones are not automatically shared.
Shared databases are not supported by Time Travel. Consumers cannot use the AT or BEFORE clauses to query historical data or restore deleted data. The provider can use Time Travel on the source database or its objects, but that historical data is not visible to consumers.
The other options are incorrect. Option B is incorrect because shared databases do not need to be refreshed for new data to be visible; consumers see the latest data as soon as the provider updates it. Option E is incorrect because shared databases only contain the objects that the provider grants to the share; the PUBLIC and INFORMATION_SCHEMA schemas are not granted by default. Option F is incorrect because shared databases cannot be created as transient databases; they are always created as permanent databases, regardless of the type of the source database. (Transient databases do not support Fail-safe and have only limited Time Travel.)
References: Introduction to Secure Data Sharing | Snowflake Documentation; Cloning Objects | Snowflake Documentation; Time Travel | Snowflake Documentation; Working with Shares | Snowflake Documentation; CREATE DATABASE | Snowflake Documentation
The following statements have been executed successfully:
USE ROLE SYSADMIN;
CREATE OR REPLACE DATABASE DEV_TEST_DB;
CREATE OR REPLACE SCHEMA DEV_TEST_DB.SCHTEST WITH MANAGED ACCESS;
GRANT USAGE ON DATABASE DEV_TEST_DB TO ROLE DEV_PROJ_OWN;
GRANT USAGE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE DEV_PROJ_OWN;
GRANT USAGE ON DATABASE DEV_TEST_DB TO ROLE ANALYST_PROJ;
GRANT USAGE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE ANALYST_PROJ;
GRANT CREATE TABLE ON SCHEMA DEV_TEST_DB.SCHTEST TO ROLE DEV_PROJ_OWN;
USE ROLE DEV_PROJ_OWN;
CREATE OR REPLACE TABLE DEV_TEST_DB.SCHTEST.CURRENCY (
COUNTRY VARCHAR(255),
CURRENCY_NAME VARCHAR(255),
ISO_CURRENCY_CODE VARCHAR(15),
CURRENCY_CD NUMBER(38,0),
MINOR_UNIT VARCHAR(255),
WITHDRAWAL_DATE VARCHAR(255)
);
The role hierarchy is as follows (simplified from the diagram):
ACCOUNTADMIN
└─ DEV_SYSADMIN
   └─ DEV_PROJ_OWN
      └─ ANALYST_PROJ
Separately:
ACCOUNTADMIN
└─ SYSADMIN
   └─ MAPPING_ROLE
Which statements will return the records from the table
DEV_TEST_DB.SCHTEST.CURRENCY? (Select TWO)

USE ROLE DEV_PROJ_OWN;
GRANT SELECT ON DEV_TEST_DB.SCHTEST.CURRENCY TO ROLE ANALYST_PROJ;
USE ROLE ANALYST_PROJ;
SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;
USE ROLE DEV_PROJ_OWN;
SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;
USE ROLE SYSADMIN;
SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;
USE ROLE MAPPING_ROLE;
SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;
USE ROLE ACCOUNTADMIN;
SELECT * FROM DEV_TEST_DB.SCHTEST.CURRENCY;
This question evaluates deep understanding of Snowflake RBAC, managed access schemas, and privilege inheritance, all of which are core SnowPro Architect topics. The schema DEV_TEST_DB.SCHTEST is created WITH MANAGED ACCESS, meaning that only the schema owner (or a higher role in the hierarchy) can grant object privileges such as SELECT on tables within the schema.
The table CURRENCY is created by the role DEV_PROJ_OWN, making it the table owner. As the owner, DEV_PROJ_OWN implicitly has full privileges on the table, including SELECT. Therefore, option B succeeds because the role selecting the data owns the table.
In option A, DEV_PROJ_OWN explicitly grants SELECT on the table to ANALYST_PROJ. Because the grant is performed by the table owner and ANALYST_PROJ already has USAGE on both the database and the schema, the subsequent SELECT succeeds. This makes A valid.
Option C fails because SYSADMIN does not inherit privileges from DEV_SYSADMIN or DEV_PROJ_OWN; Snowflake role inheritance is directional and not lateral. Option D fails for the same reason—MAPPING_ROLE has no privileges on the database or schema. Option E fails because ACCOUNTADMIN does not automatically bypass RBAC; without explicit USAGE and SELECT, access is denied.
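The ownership and grants underlying this reasoning can be inspected directly (a sketch; the output depends on the account state):

```sql
-- Confirm that DEV_PROJ_OWN owns CURRENCY (it ran the CREATE TABLE)
SHOW GRANTS ON TABLE DEV_TEST_DB.SCHTEST.CURRENCY;

-- Confirm what ANALYST_PROJ holds after the grant in option A
SHOW GRANTS TO ROLE ANALYST_PROJ;
```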
An Architect is using SnowCD to investigate a connectivity issue.
Which system function will provide a list of endpoints that the network must be able to access to use a specific Snowflake account, leveraging private connectivity?
SYSTEM$ALLOWLIST()
SYSTEM$GET_PRIVATELINK
SYSTEM$AUTHORIZE_PRIVATELINK
SYSTEM$ALLOWLIST_PRIVATELINK()
The SYSTEM$GET_PRIVATELINK function is used to retrieve the list of Snowflake service endpoints that need to be accessible when configuring private connectivity (such as AWS PrivateLink or Azure Private Link) for a Snowflake account. The function returns the information necessary for setting up the networking infrastructure that allows secure, private access to Snowflake without traversing the public internet. SnowCD can then be used to verify connectivity to these endpoints.
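A minimal sketch of the calls involved (run with a sufficiently privileged role such as ACCOUNTADMIN; the output varies by account):

```sql
-- Returns the private connectivity configuration for the current account
SELECT SYSTEM$GET_PRIVATELINK();

-- Returns the hostnames and ports to allowlist over private connectivity;
-- save the JSON output to a file and pass it to SnowCD for verification
SELECT SYSTEM$ALLOWLIST_PRIVATELINK();
```

On the client side, SnowCD is then typically invoked against the saved allowlist JSON file to test each endpoint.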
Data is being imported and stored as JSON in a VARIANT column. Query performance was fine, but most recently, poor query performance has been reported.
What could be causing this?
There were JSON nulls in the recent data imports.
The order of the keys in the JSON was changed.
The recent data imports contained fewer fields than usual.
There were variations in string lengths for the JSON values in the recent data imports.
Poor query performance after importing JSON into a VARIANT column could be caused by the following factors:
The order of the keys in the JSON was changed. Snowflake stores semi-structured data internally in a columnar structure for the most common elements, with the remainder kept in a single "leftovers" column. The order of the keys in the JSON affects how Snowflake determines the common elements and how it optimizes query performance. If the key order changed, Snowflake might have to re-parse the data and reorganize its internal storage, which could result in slower queries.
There were variations in string lengths for the JSON values in the recent data imports. Non-native values, such as dates and timestamps, are stored as strings when loaded into a VARIANT column. Operations on these values could be slower and also consume more space than when stored in a relational column with the corresponding data type. If there were variations in string lengths for the JSON values in the recent data imports, Snowflake might have to allocate more space and perform more conversions, which could also result in slower query performance.
The other options are not valid causes for poor query performance:
There were JSON nulls in the recent data imports. Snowflake supports two types of null values in semi-structured data: SQL NULL and JSON null. SQL NULL means the value is missing or unknown, while JSON null means the value is explicitly set to null. Snowflake can distinguish between these two types of null values and handle them accordingly. Having JSON nulls in the recent data imports should not affect the query performance significantly.
The recent data imports contained fewer fields than usual. Snowflake can handle semi-structured data with varying schemas and fields. Having fewer fields than usual in the recent data imports should not affect the query performance significantly, as Snowflake can still optimize the data ingestion and query execution based on the existing fields.
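The non-native-value point can be sketched as follows; materializing frequently queried elements into typed columns avoids the repeated string casts (table and key names here are hypothetical):

```sql
-- Raw landing table: everything arrives in a VARIANT
CREATE OR REPLACE TABLE EVENTS_RAW (PAYLOAD VARIANT);

-- Timestamps stored as JSON strings must be cast on every query:
SELECT PAYLOAD:event_ts::TIMESTAMP_NTZ AS EVENT_TS
FROM EVENTS_RAW;

-- Extracting common elements into typed relational columns once
-- avoids the per-query conversion cost:
CREATE OR REPLACE TABLE EVENTS AS
SELECT
    PAYLOAD:event_ts::TIMESTAMP_NTZ AS EVENT_TS,
    PAYLOAD
FROM EVENTS_RAW;
```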
Considerations for Semi-structured Data Stored in VARIANT
Snowflake Architect Training
Snowflake query performance on unique element in variant column
Snowflake variant performance
When loading data from stage using COPY INTO, what options can you specify for the ON_ERROR clause?
CONTINUE
SKIP_FILE
ABORT_STATEMENT
FAIL
The ON_ERROR clause is an optional parameter for the COPY INTO command that specifies the behavior of the command when it encounters errors in the files. The ON_ERROR clause can have one of the following values:
CONTINUE: This value instructs the command to continue loading the file and return an error message for a maximum of one error encountered per data file. The difference between the ROWS_PARSED and ROWS_LOADED column values represents the number of rows that include detected errors. To view all errors in the data files, use the VALIDATION_MODE parameter or query the VALIDATE function.
SKIP_FILE: This value instructs the command to skip the file when it encounters a data error on any of the records in the file. The command moves on to the next file in the stage and continues loading. The skipped file is not loaded and no error message is returned for the file.
ABORT_STATEMENT: This value instructs the command to stop loading data when the first error is encountered. The command returns an error message for the file and aborts the load operation. This is the default value for the ON_ERROR clause.
Therefore, options A, B, and C are correct.
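A sketch of one of the valid values in context (the stage, table, and file format names here are hypothetical):

```sql
-- Skip any file containing a bad record and move on to the next file
COPY INTO MY_TABLE
FROM @MY_STAGE/data/
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
ON_ERROR = 'SKIP_FILE';

-- Other valid settings: ON_ERROR = 'CONTINUE' (load good rows, report errors)
-- and ON_ERROR = 'ABORT_STATEMENT' (the default: stop at the first error)
```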
COPY INTO