Which protection group cannot be ratcheted for SafeMode?
A. A default protection group
B. Protection groups with hosts or hostgroups
C. A protection group without a local snapshot schedule
What is SafeMode Ratcheting?: SafeMode is Purity's "immutability" feature that prevents snapshots from being deleted, eradicated, or modified, even by an administrator with compromised credentials. Ratcheting is the process of increasing the protection levels (like extending the retention period) for a protection group (pgroup) to ensure even stricter data safety.
The Dependency on Local Snapshots: SafeMode's primary function is to protect point-in-time copies of data residing on the array. For a protection group to be "ratcheted" into a SafeMode-protected state, it must have an active Local Snapshot Schedule.
Why Option C is the Constraint: If a protection group does not have a local snapshot schedule, there are no local snapshots being generated for SafeMode to "lock." SafeMode cannot protect what doesn't exist locally. While a pgroup might be used for replication only, SafeMode requires the local scheduling component to be active and configured to apply its immutable retention policies.
Why Option B is incorrect: Protection groups are designed to contain hosts, host groups, or volumes. This is the standard way to group related data for snapshot consistency and has no negative impact on SafeMode eligibility.
Operational Note: When you enable SafeMode on a protection group with a local schedule, the "Eradicate" button for those snapshots is disabled. To "ratchet" the protection, you typically work with Pure Storage Support to ensure the retention settings meet your compliance needs.
Pure Protect //DRaaS is configured with a Business Policy to back up data to AWS. An administrator, with DRaaS Global Admin access, is trying to delete the policy but is unable to do so.
What is restricting the administrator from deleting the policy?
A. Application groups are currently protected under that policy.
B. The Business Policy is marked as the Primary Policy.
C. The administrator also needs DRaaS Cloud Admin access.
In policy-driven data protection and disaster recovery architectures like Pure Protect //DRaaS, a "Business Policy" dictates the critical Service Level Agreements (SLAs) for your environment, such as your Recovery Point Objective (RPO), replication frequency, and retention schedules. These policies are then assigned to "Application Groups," which act as logical containers for the specific virtual machines being protected and replicated to AWS.
As a fundamental safety mechanism built into the platform to prevent accidental exposure and SLA breaches, the system places a hard dependency lock on actively used policies. An administrator cannot delete a Business Policy if there are still Application Groups actively relying on it for their DR scheduling. To successfully delete the policy, the administrator must first modify all associated Application Groups and assign them to a different Business Policy, or completely remove the protection from those groups.
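The dependency lock described above can be sketched as a small model. This is purely illustrative (the class and method names are invented, not the Pure Protect //DRaaS API): deleting a policy fails while any application group still references it, and succeeds once those groups are reassigned.

```python
class PolicyInUseError(Exception):
    """Raised when a Business Policy still has Application Groups assigned."""

class DrPlatform:
    """Toy model of the delete-time dependency check (names are hypothetical)."""
    def __init__(self):
        self.policies = set()
        self.assignments = {}  # application group -> business policy

    def add_policy(self, name):
        self.policies.add(name)

    def assign(self, group, policy):
        self.assignments[group] = policy

    def delete_policy(self, name):
        # Hard dependency lock: refuse deletion while any group uses the policy.
        dependents = sorted(g for g, p in self.assignments.items() if p == name)
        if dependents:
            raise PolicyInUseError(
                f"policy '{name}' is still used by: {', '.join(dependents)}")
        self.policies.discard(name)

dr = DrPlatform()
dr.add_policy("gold-aws")
dr.add_policy("silver-aws")
dr.assign("erp-vms", "gold-aws")

try:
    dr.delete_policy("gold-aws")       # blocked: erp-vms still depends on it
    blocked = False
except PolicyInUseError:
    blocked = True

dr.assign("erp-vms", "silver-aws")     # reassign the application group first
dr.delete_policy("gold-aws")           # now the delete succeeds
```

The point of the sketch is the ordering: reassign (or unprotect) every dependent group, then delete.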
Here is why the other options are incorrect:
The administrator also needs DRaaS Cloud Admin access (C): The scenario explicitly states the user already has "DRaaS Global Admin access." In the Pure Protect //DRaaS Role-Based Access Control (RBAC) model, Global Admin is the highest tier of privilege and has full rights to manage and delete policies. A lack of permissions is not the issue here.
The Business Policy is marked as the Primary Policy (B): While a policy might be a default or primary template, the actual hard restriction that prevents deletion in the software is active resource assignment (the Application Groups), not just a "Primary" label.
How are in-progress asynchronous snapshot transfers monitored from the UI?
A. From the replication target
B. From either the replication source or target
C. From the replication source
According to official Pure Storage documentation regarding Asynchronous Replication management, while replication throughput (bandwidth) can be viewed globally on the Analysis tab, the actual replication status for in-progress snapshot transfers is tracked and monitored on the replication target.
To monitor an in-progress asynchronous transfer from the GUI, a storage administrator must log into the target FlashArray, navigate to Storage > Protection Groups, and look at the Transfers section within the Protection Group Snapshots panel. This view explicitly details the time the replicated snapshot was created on the source, the time the transfer started, and the current progress of the snapshot being received. If a transfer is currently in progress, the "Completed" column will remain blank until the snapshot is fully written to the target array.
Here is why the other options are incorrect:
From the replication source (C): While the source orchestrates the creation of the snapshot and initiates the data push, the granular transfer completion status and historical transfer logs of the incoming snapshots are tracked on the target's Protection Group interface.
From either the replication source or target (B): Because the specific "Transfers" tracking panel for asynchronous protection group snapshots is located on the receiving end (target), monitoring the granular completion status cannot be done symmetrically from either side in the UI.
An administrator is running commands to verify NVMe/TCP connectivity from the hosts to the FlashArray. They use the command ping -M do -s 8972 <ip_addr> from the initiator and it fails.
What should the administrator do to resolve the issue?
A. Engage support to enable NVMe/TCP services.
B. Check the MTU of 9000 is set on each hop to the FA.
C. Run the command from the target.
When configuring NVMe/TCP (or iSCSI) for optimal performance on a Pure Storage FlashArray, configuring Jumbo Frames (an MTU of 9000) end-to-end is a standard best practice.
The command ping -M do -s 8972 <ip_addr> is specifically used to verify Jumbo Frame configuration across the network.
The -M do flag sets the "Do Not Fragment" (DF) bit, meaning the network is not allowed to break the packet into smaller pieces.
The -s 8972 flag sets the ICMP data payload to 8972 bytes. When you add the standard 8-byte ICMP header and the 20-byte IP header, the total packet size equals exactly 9000 bytes.
If this ping command fails, it indicates that somewhere along the network path between the host (initiator) and the FlashArray (target), a switch port, router, or network interface is not configured to support an MTU of 9000. The packet is being dropped because it is too large and cannot be fragmented. The administrator must verify the MTU settings on every network hop (switches, routers, and host NICs) to resolve the issue.
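The arithmetic behind the -s value can be checked directly. A short sketch using the standard IPv4 and ICMP header sizes (no Pure-specific assumptions):

```python
ICMP_HEADER = 8    # bytes: ICMP echo header
IPV4_HEADER = 20   # bytes: IPv4 header without options

def jumbo_ping_payload(mtu=9000):
    """ICMP payload size (the ping -s value) that makes the on-wire
    packet exactly `mtu` bytes."""
    return mtu - IPV4_HEADER - ICMP_HEADER

payload = jumbo_ping_payload()        # 8972, matching ping -M do -s 8972
standard = jumbo_ping_payload(1500)   # 1472 for a standard-MTU path check
```

The same helper gives 1472 for a 1500-byte MTU, which is a useful control test: if `-s 1472` succeeds but `-s 8972` fails, the path is up but a hop lacks Jumbo Frame support.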
Here is why the other options are incorrect:
Engage support to enable NVMe/TCP services (A): The failure of a Jumbo Frame ping test is a Layer 2/Layer 3 network configuration issue, not an indicator that the NVMe/TCP storage protocol service is disabled on the array.
Run the command from the target (C): While pinging from the FlashArray back to the host is a valid secondary troubleshooting step, it will likely also fail if the network path doesn't support Jumbo Frames. The actual resolution is to fix the MTU on the network hops.
A storage administrator is troubleshooting a FlashArray that is critically low on space. They have successfully deleted and eradicated a large volume, but used space keeps increasing.
What is a possible cause?
A. Space decrease won't be seen for 1 day, as the volume is kept for 24 hours as a safeguard.
B. FlashArray workload is too high, and the reclamation process is not keeping up.
C. A host is still connected to the volume so space was not released.
Logical vs. Physical Reclamation: When an administrator "Eradicates" a volume, the FlashArray immediately removes the logical reference to that data. However, the physical blocks are not "wiped" instantly. Instead, those blocks are marked as "eligible for reclamation" by Purity's background Garbage Collection (GC) process.
Workload Prioritization: Purity is designed to prioritize Host I/O (production performance) over background system tasks. If the array is under an extremely high workload (high Load Meter percentage), Purity will automatically throttle the Garbage Collection process to ensure application latency remains as low as possible.
The "Reclamation Lag": If the incoming write rate from the hosts (new data being written) exceeds the speed at which the throttled GC process can reclaim space from the eradicated volume, the "Used Space" metric will continue to trend upward. This is a common scenario when arrays are pushed to their performance or capacity limits simultaneously.
Why Option A is incorrect: The 24-hour safeguard applies to Destroyed volumes (the "Pending Eradication" bucket). Once an administrator manually clicks Eradicate, that safeguard is bypassed, and the space should logically be freed. If the space is still not reflecting as "Free," it is a back-end processing delay, not a timer delay.
Why Option C is incorrect: In the Purity Operating Environment, the array does not require the host to "unmount" or "disconnect" before it can reclaim space. Once the volume is destroyed and eradicated on the array side, those blocks are gone from the array's perspective, regardless of the host's state (though the host will likely experience I/O errors).
A storage administrator is configuring a new volume and wants to provision 500GB. If the administrator accidentally selects PB, what will happen?
A. The volume will be created and space will immediately be used.
B. The volume will be created but a warning will be displayed.
C. The volume will not be created and a warning will be displayed.
Pure Storage FlashArrays utilize Thin Provisioning as a core, always-on architectural principle. When a volume is created, the "size" assigned to it is merely a logical limit (a quota) presented to the host; no physical back-end flash capacity is allocated or "pinned" at the time of creation.
Because of this architecture, Purity allows administrators to create volumes that are significantly larger than the actual physical capacity of the array (this is known as over-provisioning). If an administrator accidentally selects PB (Petabytes) instead of GB, the Purity GUI will allow the volume to be created because it is a logical operation that doesn't immediately consume 1PB of physical flash. However, Purity includes a built-in safety check: if the requested logical size is exceptionally large or exceeds the current physical capacity of the array, the GUI will present a warning or confirmation prompt to ensure the administrator is aware of the massive logical size being provisioned before finalizing the change.
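To see the scale of the mistake, a quick calculation shows how much larger the accidental volume is than the intended one (binary units, 1 GB = 2^30 bytes):

```python
def to_bytes(value, unit):
    """Convert a size in binary units to bytes (1 GB here = 2**30 bytes)."""
    exponents = {"KB": 10, "MB": 20, "GB": 30, "TB": 40, "PB": 50}
    return value * 2 ** exponents[unit]

intended = to_bytes(500, "GB")           # what the administrator meant
mistaken = to_bytes(500, "PB")           # what was actually requested
oversize_factor = mistaken // intended   # PB is 1024**2 times larger than GB
```

The requested volume is over a million times (1024^2) larger than intended, which is exactly the kind of logical size that trips Purity's warning prompt while still being allowed under thin provisioning.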
Here is why the other options are incorrect:
The volume will be created and space will immediately be used (A): This describes "Thick Provisioning," which Pure Storage does not use. Space is only consumed on a FlashArray when unique data is actually written by the host and processed by the deduplication and compression engines.
The volume will not be created and a warning will be displayed (C): Purity does not strictly forbid over-provisioning. While it warns the user to prevent human error, it does not block the creation of the volume, as over-provisioning is a standard practice in thin-provisioned environments.
What is the purpose of a Protocol Endpoint volume?
A. It allows for volumes of the same name within host groups.
B. It serves as a mount point for vVols.
C. It is required to set Host Protocol.
In a VMware vSphere environment utilizing Virtual Volumes (vVols), a Protocol Endpoint (PE) acts as a crucial logical proxy or I/O access point between the ESXi hosts and the storage array.
Unlike traditional VMFS datastores where the host mounts a massive LUN and places all VM files inside it, vVols map individual virtual machine disks directly to native volumes on the FlashArray. Because a single ESXi host could potentially need to communicate with thousands of individual vVol volumes, it would be extremely inefficient to map every single one directly to the host. Instead, the ESXi host mounts the Protocol Endpoint, and the storage array uses this PE to dynamically route the I/O to the correct underlying vVol. On a Pure Storage FlashArray, creating and connecting a PE volume to your ESXi host groups is a mandatory prerequisite for setting up a vVol datastore.
Here is why the other options are incorrect:
It allows for volumes of the same name within host groups (A): Purity OS requires all volume names across the entire FlashArray to be completely unique, regardless of which host group they are connected to or whether a Protocol Endpoint is in use.
It is required to set Host Protocol (C): The host communication protocol (such as iSCSI, Fibre Channel, or NVMe-oF) is determined by the physical host bus adapters (HBAs), network interface cards (NICs), and the configuration of the Host object in Purity, not by the creation of a volume type like a PE.
How is SAN Time measured?
A. Average time, measured in milliseconds, required to transfer data between the initiator and the array.
B. Average time, measured in milliseconds, that an IO request spends waiting to synchronize to the peer array.
C. Average time, measured in milliseconds, that an I/O request spends in the array waiting to be served.
Understanding Total Latency: In a FlashArray environment, total latency as seen by the host application is the sum of several components. Pure Storage breaks this down into Array Time and SAN Time to help administrators pinpoint where performance bottlenecks exist.
SAN Time Definition: SAN Time represents the latency introduced by the network infrastructure between the host (initiator) and the FlashArray (target). This includes the time spent traveling across Fibre Channel or Ethernet switches, cables, and host bus adapters (HBAs). It is calculated by taking the total round-trip time measured by the host and subtracting the time the FlashArray spent processing the I/O.
Metric Breakdown:
Array Time: The time the FlashArray takes to process the I/O once it hits the front-end ports (Option C describes internal array time).
SAN Time: The transit time for the request to reach the array and the response to return to the host (Option A).
Wait Time: In ActiveCluster environments, there is also "Mirror Latency," which is the time spent synchronizing data to a peer array (Option B).
Troubleshooting Value: If a user reports high latency but the FlashArray GUI shows very low Array Time, the administrator can look at the SAN Time metric. A high SAN Time indicates an issue with the fabric, such as a failing SFP, a congested switch port, or oversubscribed ISLs (Inter-Switch Links).
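The decomposition above reduces to a subtraction, which can be sketched as follows. The helper names are illustrative, not Purity metric APIs:

```python
def san_time_ms(host_round_trip_ms, array_time_ms):
    """SAN time is what remains of host-observed latency after
    subtracting the time the array spent processing the I/O."""
    return host_round_trip_ms - array_time_ms

def likely_bottleneck(host_round_trip_ms, array_time_ms):
    """Rough triage rule: if transit time dominates array time,
    look at the fabric first."""
    san = san_time_ms(host_round_trip_ms, array_time_ms)
    return "fabric" if san > array_time_ms else "array"

# Host application sees 5.0 ms, array reports 0.4 ms of Array Time:
# 4.6 ms is being spent in the SAN, so the fabric is the suspect.
san = san_time_ms(5.0, 0.4)
suspect = likely_bottleneck(5.0, 0.4)
```

This mirrors the troubleshooting pattern in the text: low Array Time plus high host-side latency points the investigation at switches, SFPs, and ISLs rather than the array.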
An on-premises mediator has been deployed. When a pod is stretched and replicating, it is observed that the pod is utilizing the Pure1 Cloud Mediator and not the on-premises mediator.
Why is the on-premises mediator NOT being used by this new pod?
A. The pod needs to be configured to use the on-premises mediator.
B. The on-premises mediator is NOT available for the ActiveCluster FlashArrays.
C. No A record exists for an on-premises mediator.
ActiveCluster Mediator Logic: In an ActiveCluster setup, the Mediator is a lightweight component (an "arbiter") that resides at a third site (failure domain). Its job is to break the tie in a "split-brain" scenario (when arrays lose connectivity with each other) to determine which array stays online.
Default Behavior: By default, every FlashArray is configured to use the Pure1 Cloud Mediator. This is a managed service provided by Pure Storage that requires no local infrastructure other than internet access from the array controllers.
On-Premises Mediator Deployment: Organizations that cannot use the Cloud Mediator (due to "dark site" security requirements or lack of reliable internet) can deploy the Pure Storage On-Premises Mediator as a small OVF/VM template.
Configuration at the Pod Level: Simply deploying the On-Premises Mediator VM and connecting the arrays to it at the array level is not enough for existing or new pods to switch automatically. In Purity, the mediator preference is a per-pod attribute.
When a pod is created or stretched, it inherits the default (Cloud).
To use the local mediator, the administrator must explicitly configure the pod to point to the On-Premises Mediator's IP or DNS name. This is done via the CLI using purepod setattr --mediator <address> <pod-name> or through the Pod settings in the GUI.
Why Option C is incorrect: While an A record is necessary for DNS resolution, the prompt implies the mediator is already "deployed" and available. The most common reason it isn't being used is simply that the pod hasn't been told to look there instead of the default Cloud option.
During a test failover using ActiveDR, what content will be presented to the target pod?
A. The content from the last periodic refresh
B. The content from the last real fail-over
C. The content from the undo pod
ActiveDR is Pure Storage's continuous, near-sync replication solution. It differs fundamentally from standard asynchronous replication because it uses a continuous stream of data rather than snapshot-based "periodic refreshes" (which eliminates Option A).
When you perform a test failover in ActiveDR, you do so by promoting the target pod. The target pod becomes writable, allowing your hosts and applications to run against the replicated data without disrupting the ongoing continuous replication from the source array in the background.
When the test is completed, you demote the target pod. To ensure that the data generated during your test failover isn't accidentally lost forever, ActiveDR automatically creates an undo pod at the exact moment of demotion.
If you need to resume that exact test failover scenario or recover the test data, you can re-promote the target pod and instruct ActiveDR to present the content from the undo pod. This unique mechanism allows storage administrators to non-disruptively test, pause, and resume DR environments without affecting production protection.
Volume space has increased on a FlashArray and shared space decreased by the same amount.
What does this indicate?
A. Space reclamation fell behind.
B. Hosts were removed.
C. Volumes were deleted.
Understanding Space Reporting: To understand this behavior, you have to look at how Purity calculates capacity. Pure Storage uses a data reduction engine where data is deduplicated and compressed.
Volume Space vs. Shared Space:
Volume Space: This represents the unique data belonging to a specific volume that is not shared with any other volume via deduplication or snapshots.
Shared Space: This represents the data that is common across multiple volumes or snapshots. If you have two volumes that are clones of each other, most of that data is "Shared."
The "Shift" Mechanism: When a volume is deleted (and potentially eradicated), the data it once shared with other volumes no longer needs to be "shared."
Imagine Volume A and Volume B share 100GB of data. That 100GB is accounted for in Shared Space.
If you delete Volume B, that 100GB of data is now only referenced by Volume A.
Consequently, that 100GB is moved from the Shared Space bucket into Volume A's Volume Space bucket.
Net Result: The total physical space used on the array remains the same initially, but the accounting shifts. You see a decrease in Shared Space and an identical increase in the Volume Space of the remaining volumes that held those deduplication references.
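The accounting shift in the 100GB example can be modeled in a few lines. This is a toy illustration of the bucket math only, not how Purity implements space reporting (the per-volume unique sizes below are invented):

```python
# Before the delete: volumes A and B each own 20 GB of unique data and
# share 100 GB of deduplicated data, which the array reports as Shared Space.
volume_space = {"A": 20, "B": 20}   # unique (unshared) GB per volume
shared_space = 100                  # GB referenced by more than one volume
total_before = sum(volume_space.values()) + shared_space

# Delete/eradicate B: the 100 GB it shared with A is now referenced only
# by A, so the accounting moves it from Shared Space into A's Volume Space.
volume_space.pop("B")               # B's 20 GB of unique data becomes reclaimable
volume_space["A"] += 100
shared_space -= 100
total_after = sum(volume_space.values()) + shared_space
```

Shared Space drops by 100 GB and Volume A's Volume Space rises by the same 100 GB; only B's formerly unique 20 GB actually becomes free, and only after garbage collection reclaims it.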
What is unified storage for Pure?
A. FlashArray runs both NFS and SMB protocols.
B. FlashArray runs both Block and File level protocols.
C. FlashArray runs both iSCSI and Fibre Channel (FC) protocols.
Definition of Unified Storage: In the storage industry, "Unified Storage" refers to a platform that can natively serve both Block-level storage (accessed via protocols like Fibre Channel, iSCSI, or NVMe-oF) and File-level storage (accessed via protocols like NFS or SMB) from a single pool of capacity and under a single management interface.
Pure Storage Implementation (FA File): Pure Storage achieved unified storage on the FlashArray through the introduction of Purity//FA File Services. Unlike traditional unified storage that often required a "gateway" or separate hardware "heads," Pure's implementation runs natively on the FlashArray controllers.
Shared Resources: On a unified FlashArray, the global storage pool is shared between volumes (Block) and file systems (File). All of Pure's core data services—such as deduplication, compression, and SafeMode snapshots—apply globally across both block and file data.
Protocol Diversity: While Option A mentions NFS and SMB, those are strictly File protocols. Option C mentions iSCSI and FC, which are strictly Block protocols. Only Option B correctly identifies the combination of Block and File, which defines the "Unified" architecture of the FlashArray.
RFC2307 enables cross-protocol support for which two protocols?
A. NFS, S3
B. S3, SMB
C. NFS, SMB
Understanding RFC2307: RFC2307 is an extension to the LDAP (Lightweight Directory Access Protocol) schema that allows for the storage of Unix-style information (POSIX attributes) within a directory service, most commonly Microsoft Active Directory. These attributes include things like uidNumber (User ID), gidNumber (Group ID), and login shells.
The Cross-Protocol Challenge: In a Unified Storage environment where the same data needs to be accessed by both Windows clients (using the SMB protocol) and Linux/Unix clients (using the NFS protocol), the storage array must be able to map a Windows Security Identifier (SID) to a Unix UID/GID.
How Pure Uses It: When an administrator enables RFC2307 support in Purity//FA File Services, the FlashArray can query Active Directory to retrieve these POSIX attributes. This creates a 1:1 mapping between the Windows user and the Unix identity.
The Benefit: This mapping ensures that a user can create a file via SMB and another user (or the same user on a different system) can access or modify that same file via NFS while maintaining consistent permission enforcement and ownership records. Without this (or a similar mapping service like NIS or local files), cross-protocol access often results in permission "mapping" errors or files being owned by "nobody."
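Conceptually, the SID-to-UID/GID lookup behaves like a table query with a "nobody" fallback. A minimal sketch (the SID and attribute values are hypothetical, and a real array queries Active Directory over LDAP rather than a local dict):

```python
NOBODY = 65534  # conventional uid/gid used when no identity mapping exists

# Hypothetical uidNumber/gidNumber attributes as they might be read from
# Active Directory via the RFC2307 schema extension.
rfc2307_attrs = {
    "S-1-5-21-1111111111-2222222222-3333333333-1104": {
        "uidNumber": 10001,
        "gidNumber": 5000,
    },
}

def sid_to_unix(sid):
    """Map a Windows SID to a Unix (uid, gid) pair; unmapped users
    fall back to 'nobody', matching the failure mode described above."""
    attrs = rfc2307_attrs.get(sid)
    if attrs is None:
        return (NOBODY, NOBODY)
    return (attrs["uidNumber"], attrs["gidNumber"])
```

A mapped user keeps consistent ownership across SMB and NFS; an unmapped user lands on the "nobody" identity, which is exactly the cross-protocol symptom the explanation warns about.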
Which command provides the negotiated port speed of an ethernet port?
A. pureport list
B. purenetwork eth list --all
C. purehw list --all --type eth
On a Pure Storage FlashArray, Ethernet ports operate at both a physical hardware layer and a logical network configuration layer. If you need to verify the actual physical negotiated port speed of an Ethernet port (for example, verifying if a 25GbE port negotiated down to 10GbE due to switch configurations or cable limitations), you must query the hardware layer directly.
The command purehw list --all --type eth interacts directly with the physical NIC hardware components to report their true link status, health, and dynamically negotiated hardware link speed.
Here is why the other options are incorrect:
purenetwork eth list --all (B): The purenetwork command suite is primarily focused on the logical Layer 2/Layer 3 networking stack. It is used to configure and list IP addresses, subnet masks, MTU sizes (Jumbo Frames), and routing, rather than focusing on the physical hardware negotiation details of the NIC itself.
pureport list (A): The pureport command suite is specifically used for managing and viewing storage protocol target ports. An administrator would use this to list the array's Fibre Channel WWNs or iSCSI IQNs to configure host zoning or initiator connections, not to verify Ethernet link negotiation speeds.
FlashArray sent Alert 51 - Protection Group Replication Delayed.
What steps should be taken?
A. Verify if there are any other open alerts, check if any other replication jobs are in progress, and verify current replication bandwidth.
B. Check for other open alerts, verify if the replication cabling is correct, and check if Protection Group was disabled.
C. Temporarily disconnect replication for troubleshooting, verify the size of the snapshot, and check for port errors in the FlashArray GUI.
Understanding Alert 51: On a Pure Storage FlashArray, Alert 51 signifies that a Protection Group's replication is lagging behind its scheduled completion time. This does not necessarily mean the connection is "down," but rather that the volume of data being sent is exceeding the available throughput or is being queued behind other tasks.
The Triage Process:
Open Alerts: You must check for related alerts (like Alert 20 for "Replication Connection Down") to determine if the delay is caused by a total link failure or just congestion.
Replication Jobs in Progress: Because FlashArray uses a specialized engine to manage replication, having multiple large snapshots from different Protection Groups replicating simultaneously can saturate the "replication pipe." Checking active jobs helps determine if there is a scheduling "traffic jam."
Replication Bandwidth: Comparing the current outgoing replication throughput against the historical average or the physical limit of the replication ports helps identify if the delay is due to a sudden increase in Data Change Rate (churn) or a reduction in network performance.
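The bandwidth check matters because a replication backlog grows linearly whenever the data change rate exceeds link throughput. A rough back-of-the-envelope model (the function name and figures are illustrative, not a Purity calculation):

```python
def replication_backlog_gib(hours, churn_gib_per_hr, link_gib_per_hr, initial_gib=0.0):
    """Backlog after `hours` when new churn arrives at churn_gib_per_hr
    and the replication link drains at link_gib_per_hr.
    Backlog can never go negative."""
    delta = churn_gib_per_hr - link_gib_per_hr
    return max(0.0, initial_gib + delta * hours)

# Churn of 120 GiB/hr against a 100 GiB/hr link builds an 80 GiB backlog
# in 4 hours -- the "delayed" condition behind Alert 51.
lagging = replication_backlog_gib(4, churn_gib_per_hr=120, link_gib_per_hr=100)

# With headroom (80 GiB/hr churn on the same link), no backlog accumulates.
healthy = replication_backlog_gib(4, churn_gib_per_hr=80, link_gib_per_hr=100)
```

This is why the triage compares current throughput against churn: the sign of that difference determines whether the delay will keep growing or eventually catch up.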
Why Option B is incorrect: If a Protection Group were disabled, replication wouldn't be "delayed"—it would be stopped, which triggers a different alert state. Cabling issues usually result in "Connection Down" alerts rather than just "Delayed" alerts.
Why Option C is incorrect: Disconnecting replication is a destructive troubleshooting step that will only increase the lag and RPO. You should always analyze the existing data flow before breaking the connection.
An X20R4 array containing 10 x 4.5TB DirectFlash Modules is running out of capacity. The customer found a data pack scheduled for a FlashArray//C array and has inserted it into the array. The customer is unable to admit the new capacity.
What is a possible reason for this?
A. The new capacity is SAS, which is NOT compatible with an X20R4.
B. The new capacity is QLC, which is NOT compatible with an X20R4.
C. The new capacity is TLC, which is NOT compatible with an X20R4.
Hardware Architecture (X vs. C): Pure Storage maintains two primary FlashArray lines: the FlashArray//X (performance-oriented) and the FlashArray//C (capacity-oriented).
Flash Types (TLC vs. QLC):
FlashArray//X (like the X20R4 mentioned in the question) uses TLC (Triple-Level Cell) DirectFlash Modules (DFMs). TLC provides high performance and high endurance, which is necessary for latency-sensitive mission-critical workloads.
FlashArray//C uses QLC (Quad-Level Cell) DirectFlash Modules. QLC provides significantly higher density at a lower cost per GB, but it has different performance and endurance profiles compared to TLC.
Compatibility Constraints: Purity//FA is designed to manage specific flash geometries. QLC modules are not compatible with the //X series arrays. The controller logic and software-defined flash management in an X20R4 are tuned for the voltage and timing characteristics of TLC flash.
The Admission Process: When a new data pack is inserted, the array performs a "handshake." If the controller detects a module type that it is not hardware-qualified to support (in this case, QLC in an //X chassis), it will refuse to admit the capacity to prevent system instability or data integrity issues.
Why Option A is incorrect: Modern FlashArrays (since the //M series) use NVMe over a PCIe backplane for DirectFlash Modules. Pure moved away from SAS (Serial Attached SCSI) for its primary data drives years ago to eliminate the performance bottlenecks associated with the SAS protocol.
Why Option C is incorrect: An X20R4 uses TLC flash. If the data pack were TLC, it would likely be compatible (provided it met the minimum module count and Purity version requirements).
How should an administrator configure initiator-to-target connections for zones with multiple initiators?
A. Multiple initiator to multiple target zone sets
B. Single initiator to multiple target zone sets
C. Single target to multiple initiator zone sets
Zoning Best Practices: In a Fibre Channel SAN environment, zoning is used to partition the fabric to ensure that initiators (hosts) can only see the targets (FlashArray ports) they are intended to communicate with.
The "Single Initiator" Rule: Pure Storage, following industry-standard SAN best practices (and Cisco/Brocade recommendations), strongly advises using Single Initiator Zoning. This means each zone should contain exactly one initiator (HBA port) and one or more targets (FlashArray ports).
Why Single Initiator to Multiple Targets (Option B)?:
Isolation: This prevents "Initiator-to-Initiator" communication. If multiple initiators are in the same zone, they may attempt to communicate with or probe each other (Registered State Change Notifications - RSCNs), which can cause host instability, driver timeouts, or discovery issues.
Troubleshooting: It simplifies troubleshooting. If a port is experiencing CRC errors or flapping, the impact is isolated to that specific host's zone rather than affecting a broad "group" zone.
Efficiency: When a target port changes state (e.g., during a controller reboot), the switch sends an RSCN. In a single-initiator zone, only that specific host is notified. In a multi-initiator zone, every host in the zone is interrupted to process the notification, even if they aren't using the port that changed.
Target Selection: While the zone should have only one initiator, it can (and should) include multiple target ports from the FlashArray (usually one port from CT0 and one from CT1 for the same fabric) to provide path redundancy and allow the host's MPIO software to manage failover.
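Generating single-initiator zones for one fabric reduces to a simple mapping: one zone per HBA port, each containing that initiator plus all of the fabric's array target ports. A sketch with invented port aliases:

```python
def single_initiator_zones(initiators, targets):
    """Build one zone per initiator HBA port; each zone carries exactly
    one initiator plus every array target port on this fabric."""
    return {f"zone_{ini}": [ini] + list(targets) for ini in initiators}

zones = single_initiator_zones(
    ["esx01_hba0", "esx02_hba0"],   # one HBA port per host on this fabric
    ["ct0_fc0", "ct1_fc0"],         # one port from each controller, same fabric
)
```

Each resulting zone isolates its host from every other initiator while still seeing both controllers, which is the redundancy/isolation trade-off the best practice above describes.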
A storage administrator needs to determine what actions were taken on the array by the previous shift and is only able to access the FlashArray via CLI.
Which command provides that information?
A. pureaudit list --puremessage
B. pureaudit list
C. puremessage list
Understanding the Audit Log: In Purity, accountability and security are maintained through the Audit Log. This log captures every administrative action taken on the array, whether through the GUI, CLI, or REST API. It records who performed the action, what the action was (e.g., volume creation, host deletion), and when it occurred.
The CLI Command: The command pureaudit list is the specific CLI tool used to display these logs. By default, it lists events in chronological order, making it the perfect tool for an administrator to review "shift change" activities.
Command Options: pureaudit list can be filtered with flags like --user to see actions by a specific admin, or --start-time and --end-time to narrow down the "previous shift" window.
Why Option C is incorrect: puremessage (accessed via puremessage list) is used to view Alerts and Notifications generated by the system (e.g., a failed drive or a high-temperature warning). While it tells you what the array did, it does not track what users did.
Why Option A is incorrect: This is not a valid Purity command syntax. Purity does not use double-dashes to "pipe" or combine independent commands like pureaudit and puremessage in that manner.
What is the proper configuration method to connect a volume to multiple hosts?
A. Connect the volume to a host group.
B. Connect a volume group to the host.
C. Connect the volume to each individual host.
In Pure Storage Purity OS, the absolute best practice and proper configuration method for sharing a single volume across multiple hosts—such as a VMware ESXi cluster or a Microsoft Windows Server Failover Cluster (WSFC)—is to connect the volume to a Host Group .
When you create a Host Group, you add the individual Host objects (which contain the WWPNs, IQNs, or NQNs) into that group. When a volume is then connected to the Host Group, Purity automatically ensures that the volume is presented to every host in that group using the exact same LUN ID. Consistent LUN IDs across all nodes in a cluster are a strict requirement for clustered file systems like VMFS and Cluster Shared Volumes (CSV) to function correctly and prevent data corruption.
Here is why the other options are incorrect:
Connect the volume to each individual host (C): This is known as creating "private connections." If you manually connect a shared volume to multiple hosts individually, Purity might assign a different LUN ID to the volume for each host. Inconsistent LUN IDs will cause clustered operating systems to fail to recognize the disk as a shared resource. Private connections should only be used for boot LUNs or standalone servers.
Connect a volume group to the host (B): In Purity, a "Volume Group" is a logical container used for applying consistent snapshot policies, replication schedules, or ActiveCluster configurations to a set of related volumes (like a database and its log files). Volume groups are not used for host presentation or access control.
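A minimal Purity CLI sketch of the host-group workflow described above (host, group, volume names, and WWNs are all placeholders; confirm the flags against your Purity release's CLI reference):

```shell
# Create a host object for each cluster node (example WWPNs)
purehost create --wwnlist 21:00:00:24:ff:00:00:01 esxi-01
purehost create --wwnlist 21:00:00:24:ff:00:00:02 esxi-02

# Group the hosts, then connect the shared volume to the group;
# Purity presents the volume to every member with the same LUN ID
purehgroup create --hostlist esxi-01,esxi-02 esxi-cluster
purevol connect --hgroup esxi-cluster shared-datastore
```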
Which storage protocol is best suited for a Hyper-V cluster in an ethernet-switched environment?
NVMe/FC
NVMe/RoCE
iSCSI
Protocol Compatibility: While NVMe-over-Fabrics (NVMe-oF) is the future of high-performance storage, support for it within specific hypervisor ecosystems varies. In the context of Microsoft Hyper-V, especially in older or standard Ethernet-switched environments, iSCSI remains the most mature, widely supported, and native protocol for block storage.
Ethernet Constraints: The question specifies an Ethernet-switched environment. This immediately rules out NVMe/FC (NVMe over Fibre Channel), as that requires dedicated Fibre Channel switching infrastructure and HBAs, not standard Ethernet.
NVMe/RoCE vs. iSCSI: While NVMe/RoCE (RDMA over Converged Ethernet) runs on Ethernet, it requires specialized RDMA-capable NICs and specific switch configuration (Priority Flow Control or link-level flow control) to ensure a lossless fabric. Hyper-V has specific requirements for RoCE (often tied to SMB Direct for file services), but for general block storage volumes in a standard Ethernet environment, iSCSI is "best suited" due to its ease of deployment, native Windows initiator support, and lack of requirement for specialized lossless hardware.
Pure Storage Best Practices: Pure Storage FlashArrays provide industry-leading iSCSI performance. When using iSCSI with Hyper-V, Pure recommends using the native Windows MPIO (Multi-Path I/O) with the "Least Queue Depth" policy to ensure optimal load balancing across the Ethernet fabric.
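On the Hyper-V host side, the MPIO settings mentioned above can be applied with Windows's built-in mpclaim tool; a sketch, run from an elevated prompt (the device string follows the common FlashArray vendor/product format, but verify it against Pure's current Windows best-practices guide):

```shell
# Claim FlashArray iSCSI disks for Microsoft MPIO (vendor field is 8 chars, space-padded)
mpclaim -r -i -d "PURE    FlashArray"

# Set the global default load-balance policy to Least Queue Depth (policy 4)
mpclaim -L -M 4
```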
What should an administrator configure when setting up device-level access control in an NVMe/TCP network?
VLANs
NQN
LACP
In any NVMe-based storage fabric (including NVMe/TCP, NVMe/FC, and NVMe/RoCE), the standard method for identifying endpoints and enforcing device-level access control is the NQN (NVMe Qualified Name).
The NQN serves the exact same purpose in the NVMe protocol as an IQN (iSCSI Qualified Name) does in an iSCSI environment, or a WWPN (World Wide Port Name) does in a Fibre Channel environment. It is a unique identifier assigned to both the host (initiator) and the storage array (target subsystem). When setting up access control on a Pure Storage FlashArray, the storage administrator must capture the Host NQN from the operating system and configure a Host object on the array with that specific NQN. This ensures that only the authorized host can discover, connect to, and access its provisioned NVMe namespaces (volumes).
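A short sketch of the capture-and-configure workflow described above, using the Linux nvme-cli tools on the host and the Purity CLI on the array (the host name and NQN value are placeholders; confirm the --nqnlist flag against your Purity CLI reference):

```shell
# On the Linux host: read the host NQN used for access control
cat /etc/nvme/hostnqn

# If the file does not exist yet, nvme-cli can generate a host NQN
nvme gen-hostnqn

# On the FlashArray: create a host object bound to that NQN (example value)
purehost create --nqnlist nqn.2014-08.org.nvmexpress:uuid:0123abcd-0000-0000-0000-000000000001 nvme-host-01
```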
Here is why the other options are incorrect:
VLANs (A): Virtual LANs are used for network-level isolation and segmentation at Layer 2 of the OSI model. While you might use a VLAN to separate your storage traffic from your management traffic, it is a network security measure, not a device-level access control mechanism for the storage protocol itself.
LACP (C): Link Aggregation Control Protocol (LACP) is a network protocol used to bundle multiple physical network links into a single logical link for redundancy and increased bandwidth. It has nothing to do with storage access control or mapping volumes to hosts.
What command must an administrator run to use newly installed DirectFlash Modules (DFM)?
pureadmin --admit-drive
purearray admit drive
puredrive admit
When new DirectFlash Modules (DFMs) or data packs are physically inserted into a Pure Storage FlashArray, the Purity operating environment detects the new hardware but places the drives in an "unadmitted" state. This safety mechanism prevents the accidental incorporation of drives and allows the system to verify the firmware and health of the modules before they are actively used to store data.
To formally accept these drives into the system's storage pool so their capacity can be utilized, the administrator must execute the CLI command puredrive admit. Once this command is run, the drive status transitions from "unadmitted" to "healthy," and the array's usable capacity expands accordingly.
Here is why the other options are incorrect:
pureadmin --admit-drive (A): This is syntactically incorrect. The pureadmin command suite is used for managing administrator accounts, API tokens, and directory services, not for hardware or drive management.
purearray admit drive (B): This is also incorrect syntax. While purearray is used for array-wide settings and status (like renaming the array or checking space), specific drive-level operations are exclusively handled by the puredrive command structure.
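The admit workflow above can be sketched as the following CLI sequence (run puredrive admit --help on your Purity version to confirm whether specific drive locations must be named):

```shell
# Verify the new DFMs are detected; freshly inserted modules report "unadmitted"
puredrive list

# Admit the drives into the storage pool
puredrive admit

# Confirm the status has transitioned to "healthy" and capacity has expanded
puredrive list
```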
Copyright © 2014-2026 Certensure. All Rights Reserved