Which command is used to upload data files from a local directory or folder on a client machine to an internal stage, for a specified table?
GET
PUT
CREATE STREAM
COPY INTO
To upload data files from a local directory or folder on a client machine to an internal stage in Snowflake, the PUT command is used. The PUT command takes files from the local file system and uploads them to an internal Snowflake stage (or a specified stage) for the purpose of preparing the data to be loaded into Snowflake tables.
Syntax example:
PUT file://<path_to_file>/<filename> @<stage_name>;
This command is crucial for data ingestion workflows in Snowflake, especially when preparing to load data using the COPY INTO command.
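As a rough sketch of this workflow (the file path, table name, and format options are illustrative):

```sql
-- Upload a local CSV file to the table's internal stage
PUT file:///tmp/sales.csv @%sales;

-- Load the staged file into the table
COPY INTO sales
  FROM @%sales
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```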
What are the main differences between the account usage views and the information schema views? (Select TWO).
No active warehouse is needed to query account usage views, but one is needed to query information schema views.
Account usage views do not contain data about tables but information schema views do.
Account usage views contain dropped objects but information schema views do not.
Data retention for account usage views is 1 year but is 7 days to 6 months for information schema views, depending on the view.
Information schema views are read-only but account usage views are not.
The account usage views in Snowflake provide historical usage data about the Snowflake account, and they retain this data for a period of up to 1 year. These views include information about dropped objects, enabling audit and tracking activities. On the other hand, information schema views provide metadata about database objects currently in use, such as tables and views, but do not include dropped objects. The retention of data in information schema views varies, but it is generally shorter than the retention for account usage views, ranging from 7 days to a maximum of 6 months, depending on the specific view.References: Snowflake Documentation on Account Usage and Information Schema
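To illustrate the difference, similar metadata can be queried from both sources (the database and schema names are illustrative):

```sql
-- Account usage: up to 1 year of history, includes dropped objects (non-NULL DELETED column)
SELECT table_name, deleted
FROM snowflake.account_usage.tables
WHERE table_schema = 'PUBLIC';

-- Information schema: live metadata for the current database only, no dropped objects
SELECT table_name
FROM my_db.information_schema.tables
WHERE table_schema = 'PUBLIC';
```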
Which table function should be used to view details on a Directed Acyclic Graph (DAG) run that is presently scheduled or is executing?
TASK_HISTORY
TASK_DEPENDENTS
CURRENT_TASK_GRAPHS
COMPLETE_TASK_GRAPHS
The CURRENT_TASK_GRAPHS table function is designed to provide information on Directed Acyclic Graphs (DAGs) that are currently scheduled or executing within Snowflake. This function offers insights into the structure and status of task chains, enabling users to monitor and troubleshoot task executions. DAGs in Snowflake represent sequences of tasks with dependencies, and understanding their current state is crucial for managing complex workflows.References: Snowflake Documentation on Task Management
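A minimal example of querying this function (the root task name is illustrative):

```sql
SELECT *
FROM TABLE(INFORMATION_SCHEMA.CURRENT_TASK_GRAPHS(ROOT_TASK_NAME => 'MY_ROOT_TASK'));
```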
How does the Access_History view enhance overall data governance pertaining to read and write operations? (Select TWO).
Shows how the accessed data was moved from the source to the target objects
Provides a unified picture of what data was accessed and when it was accessed
Protects sensitive data from unauthorized access while allowing authorized users to access it at query runtime
Identifies columns with personal information and tags them so masking policies can be applied to protect sensitive data
Determines whether a given row in a table can be accessed by the user by filtering the data based on a given policy
The ACCESS_HISTORY view in Snowflake is a powerful tool for enhancing data governance, especially for monitoring and auditing data access patterns covering both read and write operations. It shows how accessed data was moved from the source to the target objects, and it provides a unified picture of what data was accessed and when it was accessed.
ACCESS_HISTORY does not automatically apply data masking or tag columns with personal information. However, the insights derived from analyzing ACCESS_HISTORY can be used to identify sensitive data and inform the application of masking policies or other security measures.
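A sketch of a typical audit query against this view (the flattened key names follow the documented JSON structure of the view's columns):

```sql
-- List recently accessed base objects per query
SELECT query_id,
       query_start_time,
       user_name,
       obj.value:objectName::STRING AS object_name
FROM snowflake.account_usage.access_history,
     LATERAL FLATTEN(input => base_objects_accessed) obj
ORDER BY query_start_time DESC;
```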
Which function returns the URL of a stage using the stage name as the input?
BUILD_STAGE_FILE_URL
BUILD_SCOPED_FILE_URL
GET_PRESIGNED_URL
GET STAGE LOCATION
The function in Snowflake that returns the URL of a staged file using the stage name as an input is C. GET_PRESIGNED_URL. This function generates a pre-signed URL for a specific file in a stage, enabling secure, temporary access to that file without requiring Snowflake credentials. It takes the stage name, the relative path of the file within the stage, and an optional expiration time in seconds. It is used with files in internal or external stages (such as Amazon S3 buckets) in scenarios requiring direct, secure file access for a limited time.
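A sketch of a call (the stage and file names are illustrative):

```sql
-- Generate a URL valid for one hour for a file in the stage
SELECT GET_PRESIGNED_URL(@my_stage, 'data/sales.csv', 3600);
```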
Authorization to execute CREATE <object> statements comes from which role?
Primary role
Secondary role
Application role
Database role
In Snowflake, the authorization to execute CREATE <object> statements, such as creating tables, views, databases, etc., is determined by the role currently set as the user's primary role. The primary role of a user or session specifies the set of privileges (including creation privileges) that the user has. While users can have multiple roles, only the primary role is used to determine what objects the user can create unless explicitly specified in the session.
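For example (the role and object names are illustrative):

```sql
USE ROLE analyst_role;                      -- sets the primary role for the session
CREATE TABLE my_db.public.t1 (id INTEGER);  -- authorized by the primary role's privileges
```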
Which Snowflake data governance feature can support auditing when a user query reads column data?
Access History
Data classification
Column-level security
Object dependencies
Access History in Snowflake is a feature designed to support auditing by tracking access to data within Snowflake, including when a user's query reads column data. It provides detailed information on queries executed, including the user who ran the query, the query text, and the objects (e.g., tables, views) accessed by the query. This feature is instrumental for auditing purposes, helping organizations to monitor and audit data access for security and compliance.
What type of account can be used to share data with a consumer who does not have a Snowflake account?
Data provider
Data consumer
Reader
Organization
A Reader account in Snowflake can be used to share data with a consumer who does not have a Snowflake account. Reader accounts are a type of shared account provided by data providers to external data consumers, allowing them to access and query shared data using Snowflake's web interface without needing their own Snowflake account.
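A provider creates a reader account with a statement along these lines (the account name, admin name, and password are placeholders):

```sql
CREATE MANAGED ACCOUNT reader_acct
  ADMIN_NAME = 'reader_admin',
  ADMIN_PASSWORD = '<a strong password>',
  TYPE = READER;
```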
Which types of subqueries does Snowflake support? (Select TWO).
Uncorrelated scalar subqueries in WHERE clauses
Uncorrelated scalar subqueries in any place that a value expression can be used
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be uncorrelated only
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated only
EXISTS, ANY / ALL, and IN subqueries in WHERE clauses: these subqueries can be correlated or uncorrelated
Snowflake supports a variety of subquery types, including both correlated and uncorrelated subqueries. The correct answers are B and E, which highlight Snowflake's flexibility in handling subqueries within SQL queries.
An uncorrelated scalar subquery used where a value expression can appear:
SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);
A correlated EXISTS subquery in a WHERE clause:
SELECT * FROM orders o WHERE EXISTS (SELECT 1 FROM customer c WHERE c.id = o.customer_id AND c.region = 'North America');
Which Snowflake database object can be shared with other accounts?
Tasks
Pipes
Secure User-Defined Functions (UDFs)
Stored Procedures
In Snowflake, Secure User-Defined Functions (UDFs) can be shared with other accounts using Snowflake's data sharing feature. This allows different Snowflake accounts to securely execute the UDFs without having direct access to the underlying data the functions operate on, ensuring privacy and security. The sharing is facilitated through shares created in Snowflake, which can contain Secure UDFs along with other database objects like tables and views.References: Snowflake Documentation on Data Sharing and Secure UDFs
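A sketch of sharing a secure UDF (all object names are illustrative):

```sql
CREATE SECURE FUNCTION my_db.public.discount(price NUMBER)
  RETURNS NUMBER
  AS 'price * 0.9';

GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;
GRANT USAGE ON FUNCTION my_db.public.discount(NUMBER) TO SHARE my_share;
```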
When should a stored procedure be created with caller's rights?
When the caller needs to be prevented from viewing the source code of the stored procedure
When the caller needs to run a statement that could not execute outside of the stored procedure
When the stored procedure needs to run with the privileges of the role that called the stored procedure
When the stored procedure needs to operate on objects that the caller does not have privileges on
Stored procedures in Snowflake can be created with either 'owner's rights' or 'caller's rights'. A stored procedure created with caller's rights executes with the privileges of the role that calls the procedure, not the privileges of the role that owns the procedure. This is particularly useful in scenarios where the procedure needs to perform operations that depend on the caller's access permissions, ensuring that the procedure can only access objects that the caller is authorized to access.
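A caller's rights procedure is declared with EXECUTE AS CALLER, as in this illustrative Snowflake Scripting sketch (procedure and table names are assumptions):

```sql
CREATE OR REPLACE PROCEDURE clean_up()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS CALLER   -- runs with the privileges of the calling role
AS
$$
BEGIN
  DELETE FROM my_table WHERE status = 'stale';
  RETURN 'done';
END;
$$;
```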
The property mins_to_bypass_network_policy is set at which level?
User
Role
Account
Organization
The property MINS_TO_BYPASS_NETWORK_POLICY is set at the user level in Snowflake. It specifies the number of minutes during which a specific user can bypass the network policy in effect, which is useful when temporary access needs to be granted from IP addresses not covered by the existing network policies. The property appears in the output of DESCRIBE USER and, per Snowflake's documentation, is set by Snowflake Support upon request.
Which activities are managed by Snowflake's Cloud Services layer? (Select TWO).
Authorisation
Access delegation
Data pruning
Data compression
Query parsing and optimization
Snowflake's Cloud Services layer is responsible for managing various aspects of the platform that are not directly related to computing or storage. Specifically, it handles authorisation, ensuring that users have appropriate access rights to perform actions or access data. Additionally, it takes care of query parsing and optimization, interpreting SQL queries and optimizing their execution plans for better performance. This layer abstracts much of the platform's complexity, allowing users to focus on their data and queries without managing the underlying infrastructure.References: Snowflake Architecture Documentation
Which statements reflect valid commands when using secondary roles? (Select TWO).
USE SECONDARY ROLES RESUME
USE SECONDARY ROLES SUSPEND
USE SECONDARY ROLES ALL
USE SECONDARY ROLES ADD
USE SECONDARY ROLES NONE
The valid commands are USE SECONDARY ROLES ALL and USE SECONDARY ROLES NONE. ALL authorizes the session with all roles granted to the user in addition to the primary role, while NONE restricts authorization to the primary role only. RESUME, SUSPEND, and ADD are not valid keywords for this command.
Which Snowflake table type persists until it is explicitly dropped, is available for all users with relevant privileges (across sessions), and has no Fail-safe period?
External
Permanent
Temporary
Transient
The type of Snowflake table that persists until it is explicitly dropped, is available for all users with relevant privileges across sessions, and does not have a Fail-safe period, is a Transient table. Transient tables are designed to provide temporary storage similar to permanent tables but with some reduced storage costs and without the Fail-safe feature, which provides additional data protection for a period beyond the retention time. Transient tables are useful in scenarios where data needs to be temporarily stored for longer than a session but does not require the robust durability guarantees of permanent tables.
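For example (the table and column names are illustrative):

```sql
-- Transient: persists until dropped, available across sessions, no Fail-safe period
CREATE TRANSIENT TABLE staging_data (
  id      INTEGER,
  payload VARIANT
);
```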
Which roles can make grant decisions to objects within a managed access schema? (Select TWO)
ACCOUNTADMIN
SECURITYADMIN
SYSTEMADMIN
ORGADMIN
USERADMIN
Which Snowflake privilege is required on a pipe object to pause or resume pipes?
OPERATE
READ
SELECT
USAGE
OPERATE. In Snowflake, to pause or resume a pipe, the OPERATE privilege is required on the pipe object. The OPERATE privilege allows users to perform operational tasks on specific objects such as pipes, tasks, and streams. Specifically, for a pipe, the OPERATE privilege enables the user to execute the ALTER PIPE ... SET PIPE_EXECUTION_PAUSED=TRUE or ALTER PIPE ... SET PIPE_EXECUTION_PAUSED=FALSE commands, which are used to pause or resume the pipe, respectively.
Here's a step-by-step example (the object names are illustrative):
GRANT OPERATE ON PIPE my_pipe TO ROLE my_role;
ALTER PIPE my_pipe SET PIPE_EXECUTION_PAUSED = TRUE;
To resume the pipe, they use:
ALTER PIPE my_pipe SET PIPE_EXECUTION_PAUSED = FALSE;
Which object type is granted permissions for reading a table?
User
Role
Attribute
Schema
In Snowflake, permissions for accessing database objects, including tables, are not granted directly to users but rather to roles. A role encapsulates a collection of privileges on various Snowflake objects. Users are then granted roles, and through those roles, they inherit the permissions necessary to read a table or perform other actions. This approach adheres to the principle of least privilege, allowing for granular control over database access and simplifying the management of user permissions.
How many credits does a size 3X-Large virtual warehouse consume if it runs continuously for 2 hours?
32
64
128
256
In Snowflake, the consumption of credits by a virtual warehouse is determined by its size and the duration for which it runs. A size 3X-Large virtual warehouse consumes 64 credits per hour, so running continuously for 2 hours consumes 64 × 2 = 128 credits. This rate reflects the principle that larger warehouses, capable of providing greater computational resources and throughput, consume more credits per hour of operation. The specific rate of consumption is defined by Snowflake's pricing model and the scale of the virtual warehouse. References: Snowflake Pricing Documentation
A stream can be created on which Snowflake objects to record data manipulation language (DML) changes? (Select TWO).
Database
Standard tables
Standard views
Schemas
Pipes
Snowflake streams are objects that enable users to track changes (inserts, updates, and deletes) to data, facilitating incremental data processing and integration workflows. Streams can be created on standard tables and on views, capturing DML changes so that downstream processes can consume only the data that has changed. This supports efficient ETL, replication, and near-real-time analytics. Streams cannot be created on databases, schemas, or pipes. References: Snowflake Documentation on Streams
Which key access control concept does Snowflake describe as a defined level of access to an object?
Grant
Privilege
Role
Session
In Snowflake, the term "privilege" refers to a defined level of access to an object. Privileges are specific actions that roles can perform on securable objects in Snowflake, such as tables, views, warehouses, databases, and schemas. These privileges are granted to roles and can be further granted to users through their roles, forming the basis of Snowflake’s access control framework.References: Snowflake Documentation on Access Control Privileges
What does the Activity area of Snowsight allow users to do? (Select TWO).
Schedule automated data backups.
Explore each step of an executed query.
Monitor queries executed by users in an account.
Create and manage user roles and permissions.
Access Snowflake Marketplace to find and integrate datasets.
The Activity area of Snowsight, Snowflake's web interface, allows users to monitor queries executed by users in the account (through Query History) and to explore each step of an executed query (through the Query Profile). These features are crucial for effective query performance tuning and ensuring efficient use of Snowflake's resources.
Given the statement template below, which database objects can be added to a share? (Select TWO).
GRANT <privilege> ON <object_type> <object_name> TO SHARE <share_name>
Secure functions
Stored procedures
Streams
Tables
Tasks
In Snowflake, shares are used to share data across different Snowflake accounts securely. When you create a share, you can include various database objects that you want to share with consumers. According to Snowflake's documentation, the types of objects that can be shared include tables, secure views, secure materialized views, and streams. Secure functions and stored procedures are not shareable objects. Tasks also cannot be shared directly. Therefore, the correct answers are streams (C) and tables (D).
To share a stream or a table, you use the GRANT statement to grant privileges on these objects to a share. The syntax for sharing a table or stream involves specifying the type of object, the object name, and the share to which you are granting access. For example:
GRANT SELECT ON TABLE my_table TO SHARE my_share;
GRANT SELECT ON STREAM my_stream TO SHARE my_share;
These commands grant the SELECT privilege on a table named my_table and a stream named my_stream to a share named my_share. This enables the consumer of the share to access these objects according to the granted privileges.
What happens to the privileges granted to Snowflake system-defined roles?
The privileges cannot be revoked.
The privileges can be revoked by an ACCOUNTADMIN.
The privileges can be revoked by an orgadmin.
The privileges can be revoked by any user-defined role with appropriate privileges.
The privileges granted to Snowflake's system-defined roles cannot be revoked. System-defined roles, such as SYSADMIN, ACCOUNTADMIN, SECURITYADMIN, and others, come with a set of predefined privileges that are essential for the roles to function correctly within the Snowflake environment. These privileges are intrinsic to the roles and ensure that users assigned these roles can perform the necessary tasks and operations relevant to their responsibilities.
The design of Snowflake's role-based access control (RBAC) model ensures that system-defined roles have a specific set of non-revocable privileges to maintain the security, integrity, and operational efficiency of the Snowflake environment. This approach prevents accidental or intentional modification of privileges that could disrupt the essential functions or compromise the security of the Snowflake account.
Which system-defined, read-only view displays information on column lineage, specifying how data flows from source to target in a SQL write operation?
ACCESS_HISTORY
LOAD_HISTORY
QUERY_HISTORY
COPY_HISTORY
In Snowflake, the system-defined, read-only view that displays information on column lineage, which specifies how data flows from source to target in a SQL write operation, is ACCESS_HISTORY. This view is instrumental in auditing and analyzing data access patterns, as it provides detailed insights into how and from where the data is being accessed and manipulated within Snowflake.
Which semi-structured file format is a compressed, efficient, columnar data representation?
Avro
JSON
TSV
Parquet
Parquet is a columnar storage file format that is optimized for efficiency in both storage and processing. It supports compression and encoding schemes that significantly reduce the storage space needed and speed up data retrieval operations, making it ideal for handling large volumes of data. Unlike JSON or TSV, which are row-oriented and typically uncompressed, Parquet is designed specifically for use with big data frameworks, offering advantages in terms of performance and cost when storing and querying semi-structured data.References: Apache Parquet Documentation
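When loading Parquet files into Snowflake, a named file format is typically defined first (the format name is illustrative):

```sql
CREATE FILE FORMAT my_parquet_format
  TYPE = PARQUET
  COMPRESSION = SNAPPY;
```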
Which views are included in the data_sharing_usage schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
https://docs.snowflake.com/en/sql-reference/data-sharing-usage
Which categories are included in the execution time summary in a Query Profile? (Select TWO).
Pruning
Spilling
Initialization
Local Disk I/O
Percentage of data read from cache
In the execution time summary of a Query Profile, Snowflake breaks down where time was spent during query execution into categories such as Processing, Local Disk I/O, Remote Disk I/O, Network Communication, Synchronization, and Initialization. Of the options listed, Initialization (the time spent setting up the query before execution begins) and Local Disk I/O (the time spent waiting on local disk access) belong to this summary. Pruning statistics and the percentage of data read from cache appear in the profile's statistics section rather than in the execution time breakdown.
What causes objects in a data share to become unavailable to a consumer account?
The DATA_RETENTION_TIME_IN_DAYS parameter in the consumer account is set to 0.
The consumer account runs the GRANT IMPORTED PRIVILEGES command on the data share every 24 hours.
The objects in the data share are being deleted and the grant pattern is not re-applied systematically.
The consumer account acquires the data share through a private data exchange.
Objects in a data share become unavailable to a consumer account if the objects in the data share are deleted or if the permissions on these objects are altered without re-applying the grant permissions systematically. This is because the sharing mechanism in Snowflake relies on explicit grants of permissions on specific objects (like tables, views, or secure views) to the share. If these objects are deleted or if their permissions change without updating the share accordingly, consumers can lose access.
The DATA_RETENTION_TIME_IN_DAYS parameter does not directly affect the availability of shared objects, as it controls how long Snowflake retains historical data for time travel and does not impact data sharing permissions.
Running the GRANT IMPORTED PRIVILEGES command in the consumer account is not related to the availability of shared objects; this command is used to grant privileges on imported objects within the consumer's account and is not a routine maintenance command that would need to be run regularly.
Acquiring a data share through a private data exchange does not inherently make objects unavailable; issues would only arise if there were problems with the share configuration or if the shared objects were deleted or had their permissions altered without re-granting access to the share.
When sharing data in Snowflake, what privileges does a Provider need to grant along with a share? (Select TWO).
SELECT on the specific tables in the database.
USAGE on the specific tables in the database.
MODIFY on the specific tables in the database.
USAGE on the database and the schema containing the tables to share.
OPERATE on the database and the schema containing the tables to share.
When sharing data in Snowflake, the provider needs to grant USAGE on the database and on the schema containing the tables to share, along with SELECT on the specific tables being shared. These privileges are crucial for setting up secure and controlled access to the shared data, ensuring that only authorized consumers can access the specified resources.
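A sketch of the grants (the object names are illustrative):

```sql
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;
GRANT SELECT ON TABLE my_db.public.sales TO SHARE my_share;
```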
What is the MINIMUM size of a table for which Snowflake recommends considering adding a clustering key?
1 Kilobyte (KB)
1 Megabyte (MB)
1 Gigabyte (GB)
1 Terabyte (TB)
Snowflake recommends considering adding a clustering key to a table when its size reaches 1 Terabyte (TB) or larger. Clustering keys help optimize the storage and query performance by organizing the data in a table based on the specified columns. This is particularly beneficial for large tables where data retrieval can become inefficient without proper clustering.
CREATE TABLE my_table (... ) CLUSTER BY (column1, column2);
ALTER TABLE my_table CLUSTER BY (column1, column2);
How can a user get the MOST detailed information about individual table storage details in Snowflake?
SHOW TABLES command
SHOW EXTERNAL TABLES command
TABLES view
TABLE_STORAGE_METRICS view
To obtain the most detailed information about individual table storage in Snowflake, the TABLE_STORAGE_METRICS view is the recommended option. This view provides comprehensive metrics on storage usage, including active data size, Time Travel size, Fail-safe size, and other relevant storage metrics for each table. This level of detail is invaluable for monitoring, managing, and optimizing storage costs and performance.
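For example (the database and schema names are illustrative):

```sql
SELECT table_name,
       active_bytes,
       time_travel_bytes,
       failsafe_bytes
FROM my_db.information_schema.table_storage_metrics
WHERE table_schema = 'PUBLIC';
```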
Which Snowflake layer is associated with virtual warehouses?
Cloud services
Query processing
Elastic memory
Database storage
The layer of Snowflake's architecture associated with virtual warehouses is the Query Processing layer. Virtual warehouses in Snowflake are dedicated compute clusters that execute SQL queries against the stored data. This layer is responsible for the entire query execution process, including parsing, optimization, and the actual computation. It operates independently of the storage layer, enabling Snowflake to scale compute and storage resources separately for efficiency and cost-effectiveness.
The VALIDATE table function has which parameter as an input argument for a Snowflake user?
LAST_QUERY_ID
CURRENT_STATEMENT
UUID_STRING
JOB_ID
The VALIDATE table function in Snowflake checks the result of a previous COPY INTO <table> execution and returns any errors encountered during that load. It takes the table name and a JOB_ID argument, where JOB_ID is the query ID of the COPY INTO statement to validate, or the literal '_last' to reference the most recent load in the current session.
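For example (the table name is illustrative):

```sql
-- Return errors from the most recent COPY INTO execution in this session
SELECT * FROM TABLE(VALIDATE(my_table, JOB_ID => '_last'));
```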
What does the worksheet and database explorer feature in Snowsight allow users to do?
Add or remove users from a worksheet.
Move a worksheet to a folder or a dashboard.
Combine multiple worksheets into a single worksheet.
Tag frequently accessed worksheets for ease of access.
The worksheet and database explorer feature in Snowsight allows users to move a worksheet to a folder or a dashboard. This organizational capability helps users group related worksheets and promote them into dashboards, streamlining the data exploration and analysis process within Snowsight, Snowflake's web-based query and visualization interface.
What does the TableScan operator represent in the Query Profile?
The access to a single table
The access to data stored in stage objects
The list of values provided with the VALUES clause
The records generated using the TABLE (GENERATOR (...)) construct
In the Query Profile of Snowflake, the TableScan operator represents the access to a single table. This operator indicates that the query execution involved reading data from a table stored in Snowflake. TableScan is a fundamental operation in query execution plans, showing how the database engine retrieves data directly from tables as part of processing a query.
How are network policies defined in Snowflake?
They are a set of rules that define the network routes within Snowflake.
They are a set of rules that dictate how Snowflake accounts can be used between multiple users.
They are a set of rules that define how data can be transferred between different Snowflake accounts within an organization.
They are a set of rules that control access to Snowflake accounts by specifying the IP addresses or ranges of IP addresses that are allowed to connect to Snowflake.
Network policies in Snowflake are defined as a set of rules that manage the network-level access to Snowflake accounts. These rules specify which IP addresses or IP ranges are permitted to connect to Snowflake, enhancing the security of Snowflake accounts by preventing unauthorized access. Network policies are an essential aspect of Snowflake's security model, allowing administrators to enforce access controls based on network locations.
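A sketch of defining and activating a network policy (the policy name and IP ranges are illustrative):

```sql
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

ALTER ACCOUNT SET NETWORK_POLICY = 'CORP_POLICY';
```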
What is the only supported character set for loading and unloading data from all supported file formats?
UTF-8
UTF-16
ISO-8859-1
WINDOWS-1253
UTF-8 is the only supported character set for loading and unloading data from all supported file formats in Snowflake. UTF-8 is a widely used encoding that supports a large range of characters from various languages, making it suitable for internationalization and ensuring data compatibility across different systems and platforms.
Which function will provide the proxy information needed to protect Snowsight?
SYSTEMADMIN_TAG
SYSTEM$GET_PRIVATELINK
SYSTEM$ALLOWLIST
SYSTEMAUTHORIZE
The SYSTEM$GET_PRIVATELINK function in Snowflake provides proxy information necessary for configuring PrivateLink connections, which can protect Snowsight as well as other Snowflake services. PrivateLink enhances security by allowing Snowflake to be accessed via a private connection within a cloud provider’s network, reducing exposure to the public internet.
Which URL provides access to files in Snowflake without authorization?
File URL
Scoped URL
Pre-signed URL
Scoped file URL
A Pre-signed URL provides access to files stored in Snowflake without requiring authorization at the time of access. This feature allows users to generate a URL with a limited validity period that grants temporary access to a file in a secure manner. It's particularly useful for sharing data with external parties or applications without the need for them to authenticate directly with Snowflake.
By default, how long is the standard retention period for Time Travel across all Snowflake accounts?
0 days
1 day
7 days
14 days
By default, the standard retention period for Time Travel in Snowflake is 1 day across all Snowflake accounts. Time Travel enables users to access historical data within this retention window, allowing for point-in-time data analysis and recovery. This feature is a significant aspect of Snowflake's data management capabilities, offering flexibility in handling data changes and accidental deletions.
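For example, historical data within the retention window can be queried with an AT clause (the table name and offset are illustrative):

```sql
-- Query the table as it existed 10 minutes ago (offset is in seconds)
SELECT * FROM my_table AT (OFFSET => -600);
```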
What are the benefits of the replication feature in Snowflake? (Select TWO).
Disaster recovery
Time Travel
Fail-safe
Database failover and fallback
Data security
The replication feature in Snowflake provides several benefits, with disaster recovery and database failover and fallback being two of the primary advantages. Replication allows for the continuous copying of data from one Snowflake account to another, ensuring that a secondary copy of the data is available in case of outages or disasters. This capability supports disaster recovery strategies by allowing operations to quickly switch to the replicated data in a different account or region. Additionally, it facilitates database failover and fallback procedures, ensuring business continuity and minimizing downtime.
What information does the Query Profile provide?
Graphical representation of the data model
Statistics for each component of the processing plan
Detailed information about the database schema
Real-time monitoring of the database operations
The Query Profile in Snowflake provides a graphical representation and statistics for each component of the query's execution plan. This includes details such as the execution time, the number of rows processed, and the amount of data scanned for each operation within the query. The Query Profile is a crucial tool for understanding and optimizing the performance of queries, as it helps identify potential bottlenecks and inefficiencies.
What is the default value in the Snowflake Web Interface (UI) for auto-suspending a Virtual Warehouse?
1 minute
5 minutes
10 minutes
15 minutes
The default value for auto-suspending a Virtual Warehouse in the Snowflake Web Interface (UI) is 10 minutes. This setting helps manage compute costs by automatically suspending warehouses that are not in use, ensuring that compute resources are efficiently allocated and not wasted on idle warehouses.
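When creating a warehouse in SQL, the equivalent setting is expressed in seconds (the warehouse name is illustrative):

```sql
CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 600   -- 600 seconds = 10 minutes
  AUTO_RESUME = TRUE;
```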
What type of function returns one value for each invocation?
Aggregate
Scalar
Table
Window
Scalar functions in Snowflake (and SQL in general) are designed to return a single value for each invocation. They operate on a single value and return a single result, making them suitable for a wide range of data transformations and calculations within queries.
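A minimal sketch of a scalar UDF (name and logic are illustrative); each invocation returns exactly one value:

    CREATE OR REPLACE FUNCTION area_of_circle(radius FLOAT)
      RETURNS FLOAT
      AS 'PI() * radius * radius';

    SELECT area_of_circle(2.0);  -- one result value per invocation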
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake coordinates activities across the platform, including user authentication, infrastructure management, metadata management, query parsing and optimization, and access control. Data storage and partition scanning, by contrast, belong to the storage and compute layers.
These services are part of Snowflake's fully managed, cloud-based architecture, which abstracts and automates many of the complexities associated with data warehousing.
What is it called when a customer managed key is combined with a Snowflake managed key to create a composite key for encryption?
Hierarchical key model
Client-side encryption
Tri-secret secure encryption
Key pair authentication
Tri-secret secure encryption is a security model employed by Snowflake that involves combining a customer-managed key with a Snowflake-managed key to create a composite key for encrypting data. This model enhances data security by requiring both the customer-managed key and the Snowflake-managed key to decrypt data, thus ensuring that neither party can access the data independently. It represents a balanced approach to key management, leveraging both customer control and Snowflake's managed services for robust data encryption.
How does a Snowflake stored procedure compare to a User-Defined Function (UDF)?
A single executable statement can call only two stored procedures. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call only one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
A single executable statement can call multiple stored procedures. In contrast, multiple SQL statements can call the same UDFs.
Multiple executable statements can call more than one stored procedure. In contrast, a single SQL statement can call multiple UDFs.
In Snowflake, stored procedures and User-Defined Functions (UDFs) have different invocation patterns within SQL: a stored procedure is invoked as an independent statement using CALL, so a single executable statement can call only one stored procedure. A UDF, in contrast, returns a value and can be embedded in expressions, so a single SQL statement can call multiple UDFs.
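The difference can be sketched as follows (procedure and function names are hypothetical):

    -- A stored procedure is invoked as its own statement:
    CALL update_inventory();

    -- UDFs can be embedded in expressions, so one statement may call several:
    SELECT tax(price), discount(price), tax(discount(price))
    FROM orders;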
The effects of query pruning can be observed by evaluating which statistics? (Select TWO).
Partitions scanned
Partitions total
Bytes scanned
Bytes read from result
Bytes written
Query pruning in Snowflake refers to the optimization technique where the system skips micro-partitions that cannot contribute to the query result, based on the query's filter conditions. Its effectiveness is observed in the Query Profile by comparing the "Partitions scanned" statistic against "Partitions total": effective pruning means the number of partitions scanned is only a small fraction of the total. "Bytes scanned", "Bytes read from result", and "Bytes written" describe data volumes and query output rather than how many partitions were skipped, so they do not directly measure pruning. References: Snowflake documentation on the Query Profile and micro-partition pruning.
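These statistics are also exposed per query in the ACCOUNT_USAGE.QUERY_HISTORY view, for example:

    SELECT query_id,
           partitions_scanned,
           partitions_total
    FROM snowflake.account_usage.query_history
    ORDER BY start_time DESC
    LIMIT 10;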
What happens when a network policy includes values that appear in both the allowed and blocked IP address list?
Those IP addresses are allowed access to the Snowflake account as Snowflake applies the allowed IP address list first.
Those IP addresses are denied access to the Snowflake account as Snowflake applies the blocked IP address list first.
Snowflake issues an alert message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
Snowflake issues an error message and adds the duplicate IP address values to both the allowed and blocked IP address lists.
In Snowflake, when setting up a network policy that specifies both allowed and blocked IP address lists, if an IP address appears in both lists, access from that IP address will be denied. The reason is that Snowflake prioritizes security, and the presence of an IP address in the blocked list indicates it should not be allowed regardless of its presence in the allowed list. This ensures that access controls remain stringent and that any potentially unsafe IP addresses are not inadvertently permitted access.
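A sketch of such a policy (policy name and IP ranges are illustrative):

    CREATE NETWORK POLICY corp_policy
      ALLOWED_IP_LIST = ('192.168.1.0/24')
      BLOCKED_IP_LIST = ('192.168.1.99');
    -- 192.168.1.99 falls inside the allowed range, but access from it is
    -- still denied because the blocked list is applied first.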
Which common query problems are identified by the Query Profile? (Select TWO.)
Syntax error
Inefficient pruning
Ambiguous column names
Queries too large to fit in memory
Object does not exist or not authorized
The Query Profile in Snowflake can identify common query problems, including inefficient pruning (visible when the number of partitions scanned is close to the total number of partitions) and queries too large to fit in memory (visible as spilling to local or remote storage). Syntax errors, ambiguous column names, and references to missing or unauthorized objects are caught at compile time, before a profile exists.
The Query Profile helps diagnose these issues by providing detailed execution statistics and visualizations, aiding in query optimization and troubleshooting.
Which Snowflake mechanism is used to limit the number of micro-partitions scanned by a query?
Caching
Cluster depth
Query pruning
Retrieval optimization
Query pruning in Snowflake is the mechanism used to limit the number of micro-partitions scanned by a query. By analyzing the filters and conditions applied in a query, Snowflake can skip over micro-partitions that do not contain relevant data, thereby reducing the amount of data processed and improving query performance. This technique is particularly effective for large datasets and is a key component of Snowflake's performance optimization features.
User1, who has the SYSADMIN role, executed a query on Snowsight. User2, who is in the same Snowflake account, wants to view the result set of the query executed by User1 using the Snowsight query history.
What will happen if User2 tries to access the query history?
If User2 has the sysadmin role they will be able to see the results.
If User2 has the securityadmin role they will be able to see the results.
If User2 has the ACCOUNTADMIN role they will be able to see the results.
User2 will be unable to view the result set of the query executed by User1.
In Snowflake, query results are available only to the user who executed the query. Other users, even those holding the ACCOUNTADMIN role, can see the query text and execution metadata in the query history, but they cannot view another user's result set. Therefore, User2 will be unable to view the result set of the query executed by User1.
When unloading data, which file format preserves the data values for floating-point number columns?
Avro
CSV
JSON
Parquet
When unloading data, the Parquet file format is known for its efficiency in preserving the data values for floating-point number columns. Parquet is a columnar storage file format that offers high compression ratios and efficient data encoding schemes. It is especially effective for floating-point data, as it maintains high precision and supports efficient querying and analysis.
A user has semi-structured data to load into Snowflake but is not sure what types of operations will need to be performed on the data. Based on this situation, what type of column does Snowflake recommend be used?
ARRAY
OBJECT
TEXT
VARIANT
When dealing with semi-structured data in Snowflake, and the specific types of operations to be performed on the data are not yet determined, Snowflake recommends using the VARIANT data type. The VARIANT type is highly flexible and capable of storing data in multiple formats, including JSON, AVRO, BSON, and more, within a single column. This flexibility allows users to perform various operations on the data, including querying and manipulation of nested data structures without predefined schemas.
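A minimal sketch (table name and JSON content are illustrative):

    CREATE TABLE raw_events (payload VARIANT);

    INSERT INTO raw_events
      SELECT PARSE_JSON('{"user": {"id": 42, "name": "Ada"}, "tags": ["a", "b"]}');

    -- Nested elements are queried with path notation and cast as needed:
    SELECT payload:user.name::STRING AS user_name,
           payload:tags[0]::STRING  AS first_tag
    FROM raw_events;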
What are characteristics of reader accounts in Snowflake? (Select TWO).
Reader account users cannot add new data to the account.
Reader account users can share data to other reader accounts.
A single reader account can consume data from multiple provider accounts.
Data consumers are responsible for reader account setup and data usage costs.
Reader accounts enable data consumers to access and query data shared by the provider.
Characteristics of reader accounts in Snowflake include: reader account users cannot add new data to the account (it is read-only), and reader accounts enable data consumers to access and query the data shared by the provider. Reader accounts are created, owned, and paid for by the data provider, not the consumer, and each reader account can consume data only from the single provider account that created it.
What is the MINIMUM permission needed to access a file URL from an external stage?
MODIFY
READ
SELECT
USAGE
To access a file URL from an external stage in Snowflake, the minimum permission required is USAGE on the stage object. USAGE permission allows a user to reference the stage in SQL commands, necessary for actions like listing files or loading data from the stage, but does not permit the user to alter or drop the stage.
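For example (stage and role names are hypothetical):

    GRANT USAGE ON STAGE my_ext_stage TO ROLE analyst_role;

    -- The grantee can now reference the stage, e.g. list its files:
    LIST @my_ext_stage;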
When referring to User-Defined Function (UDF) names in Snowflake, what does the term overloading mean?
There are multiple SQL UDFs with the same names and the same number of arguments.
There are multiple SQL UDFs with the same names and the same number of argument types.
There are multiple SQL UDFs with the same names but with a different number of arguments or argument types.
There are multiple SQL UDFs with different names but the same number of arguments or argument types.
In Snowflake, overloading refers to the creation of multiple User-Defined Functions (UDFs) with the same name but differing in the number or types of their arguments. This feature allows for more flexible function usage, as Snowflake can differentiate between functions based on the context of their invocation, such as the types or the number of arguments passed. Overloading helps to create more adaptable and readable code, as the same function name can be used for similar operations on different types of data.
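An illustrative example of overloading (function name is hypothetical):

    CREATE OR REPLACE FUNCTION add_nums(a NUMBER, b NUMBER)
      RETURNS NUMBER AS 'a + b';

    CREATE OR REPLACE FUNCTION add_nums(a NUMBER, b NUMBER, c NUMBER)
      RETURNS NUMBER AS 'a + b + c';

    -- Snowflake resolves each call by the number and types of its arguments:
    SELECT add_nums(1, 2), add_nums(1, 2, 3);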
Which Snowflake edition offers the highest level of security for organizations that have the strictest requirements?
Standard
Enterprise
Business Critical
Virtual Private Snowflake (VPS)
The Virtual Private Snowflake (VPS) edition offers the highest level of security for organizations with the strictest security requirements. This edition provides a dedicated and isolated instance of Snowflake, including enhanced security features and compliance certifications to meet the needs of highly regulated industries or any organization requiring the utmost in data protection and privacy.
Which command can be used to list all the file formats for which a user has access privileges?
LIST
ALTER FILE FORMAT
DESCRIBE FILE FORMAT
SHOW FILE FORMATS
The command to list all the file formats for which a user has access privileges in Snowflake is SHOW FILE FORMATS. This command provides a list of all file formats defined in the user's current session or specified database/schema, along with details such as the name, type, and creation time of each file format. It is a valuable tool for users to understand and manage the file formats available for data loading and unloading operations.
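Usage examples (database and schema names are illustrative):

    SHOW FILE FORMATS;                        -- current session context
    SHOW FILE FORMATS IN SCHEMA mydb.public;  -- a specific schema
    SHOW FILE FORMATS LIKE 'csv%';            -- filter by name pattern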
Which data types optimally store semi-structured data? (Select TWO).
ARRAY
CHARACTER
STRING
VARCHAR
VARIANT
In Snowflake, semi-structured data is optimally stored using specific data types that are designed to handle the flexibility and complexity of such data. The VARIANT data type can store structured and semi-structured data types, including JSON, Avro, ORC, Parquet, or XML, in a single column. The ARRAY data type, on the other hand, is suitable for storing ordered sequences of elements, which can be particularly useful for semi-structured data types like JSON arrays. These data types provide the necessary flexibility to store and query semi-structured data efficiently in Snowflake.
By default, which role has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function?
ACCOUNTADMIN
SECURITYADMIN
SYSADMIN
ORGADMIN
By default, the ACCOUNTADMIN role in Snowflake has access to the SYSTEM$GLOBAL_ACCOUNT_SET_PARAMETER function. This function is used to set global account parameters, impacting the entire Snowflake account's configuration and behavior. The ACCOUNTADMIN role is the highest-level administrative role in Snowflake, granting the necessary privileges to manage account settings and security features, including the use of global account parameters.
For which use cases is running a virtual warehouse required? (Select TWO).
When creating a table
When loading data into a table
When unloading data from a table
When executing a show command
When executing a list command
Running a virtual warehouse is required when loading data into a table and when unloading data from a table, because both operations consume compute resources. Creating a table and executing SHOW or LIST commands are metadata operations served by the Cloud Services layer and do not require a running warehouse.
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes select * from TBL1 ;?
The temporary table will take precedence over the permanent table.
The permanent table will take precedence over the temporary table.
An error will say there cannot be two tables with the same name in a schema.
The table that was created most recently will take precedence over the older table.
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence within the session where it was created. Unqualified references to TBL1 in that session resolve to the temporary table; the permanent table is effectively hidden until the temporary table is dropped or the session ends.
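This behavior can be sketched as follows (session-scoped; column and values are illustrative):

    CREATE TABLE tbl1 (v STRING);            -- permanent table
    INSERT INTO tbl1 VALUES ('permanent');

    CREATE TEMPORARY TABLE tbl1 (v STRING);  -- same name, same schema
    INSERT INTO tbl1 VALUES ('temporary');

    SELECT * FROM tbl1;  -- within this session, returns 'temporary'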
What does a masking policy consist of in Snowflake?
A single data type, with one or more conditions, and one or more masking functions
A single data type, with only one condition, and only one masking function
Multiple data types, with only one condition, and one or more masking functions
Multiple data types, with one or more conditions, and one or more masking functions
A masking policy in Snowflake consists of a single data type, with one or more conditions, and one or more masking functions. These components define how the data is masked based on the specified conditions.
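A sketch of a masking policy and its application (policy, table, and role names are hypothetical):

    CREATE MASKING POLICY email_mask AS (val STRING)
      RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('ANALYST') THEN val
        ELSE '*** MASKED ***'
      END;

    ALTER TABLE users MODIFY COLUMN email SET MASKING POLICY email_mask;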
What type of query will benefit from the query acceleration service?
Queries without filters or aggregation
Queries with large scans and selective filters
Queries where the GROUP BY has high cardinality
Queries of tables that have search optimization service enabled
The query acceleration service in Snowflake is designed to benefit queries that involve large scans and selective filters. This service can offload portions of the query processing work to shared compute resources, which can handle these types of workloads more efficiently by performing more work in parallel and reducing the wall-clock time spent in scanning and filtering. References: [COF-C02] SnowPro Core Certification Exam Study Guide
At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?
Account
Database
Schema
Table
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level. It establishes a minimum number of days that Snowflake retains historical data for Time Travel; the DATA_RETENTION_TIME_IN_DAYS values set on individual databases, schemas, and tables cannot reduce the effective retention below this floor.
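For example (the value is illustrative; requires an administrator role):

    ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;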
Which type of loop requires a BREAK statement to stop executing?
FOR
LOOP
REPEAT
WHILE
The LOOP type of loop in Snowflake Scripting does not have a built-in termination condition and requires a BREAK statement to stop executing.
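A minimal Snowflake Scripting sketch:

    EXECUTE IMMEDIATE $$
    DECLARE
      counter INTEGER DEFAULT 0;
    BEGIN
      LOOP
        counter := counter + 1;
        IF (counter >= 5) THEN
          BREAK;  -- without this, the LOOP would never terminate
        END IF;
      END LOOP;
      RETURN counter;
    END;
    $$;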
Which Snowflake function is maintained separately from the data and helps to support features such as Time Travel, Secure Data Sharing, and pruning?
Column compression
Data clustering
Micro-partitioning
Metadata management
Micro-partitioning is the Snowflake mechanism that organizes table data into contiguous units of storage, with metadata about each micro-partition maintained separately from the data itself. This metadata supports features such as Time Travel, Secure Data Sharing, and pruning, and allows Snowflake to manage and query large datasets efficiently.
What is a directory table in Snowflake?
A separate database object that is used to store file-level metadata
An object layered on a stage that is used to store file-level metadata
A database object with grantable privileges for unstructured data tasks
A Snowflake table specifically designed for storing unstructured files
A directory table in Snowflake is an object layered on a stage that is used to store file-level metadata. It is not a separate database object but is conceptually similar to an external table because it stores metadata about the data files in the stage.
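For example (stage name is hypothetical):

    CREATE STAGE docs_stage
      DIRECTORY = (ENABLE = TRUE);

    -- Query file-level metadata through the directory table:
    SELECT relative_path, size, last_modified
    FROM DIRECTORY(@docs_stage);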
Which data types can be used in Snowflake to store semi-structured data? (Select TWO)
ARRAY
BLOB
CLOB
JSON
VARIANT
Snowflake supports the storage of semi-structured data using the ARRAY and VARIANT data types. The ARRAY data type can directly contain VARIANT, and thus indirectly contain any other data type, including itself. The VARIANT data type can store a value of any other type, including OBJECT and ARRAY, and is often used to represent semi-structured data formats like JSON, Avro, ORC, Parquet, or XML.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which function unloads data from a relational table to JSON?
TO_OBJECT
TO_JSON
TO_VARIANT
OBJECT_CONSTRUCT
The TO_JSON function is used to convert a VARIANT value into a string containing the JSON representation of the value. This function is suitable for unloading data from a relational table to JSON format. References: [COF-C02] SnowPro Core Certification Exam Study Guide
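For example:

    -- Build an OBJECT and serialize it to a JSON string:
    SELECT TO_JSON(OBJECT_CONSTRUCT('id', 1, 'name', 'Ada'));
    -- returns a string such as {"id":1,"name":"Ada"}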
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
CREATE STAGE
COPY INTO