A prompt begins: “You are a senior test manager responsible for risk-based test planning on a payments platform.” Which component is this?
How are tester responsibilities MOST LIKELY to evolve when GenAI is integrated into test processes?
What BEST protects sensitive test data at rest and in transit?
An LLM prioritizes tests using likelihood × impact but ranks a trivial tooltip change above a payment failure. What defect does this MOST LIKELY show?
What is a hallucination in LLM outputs?
Which statement BEST describes vision-language models (VLMs)?
A prompt section states: “Web checkout module v3.2; focus on coupon application; existing regression suite IDs T-112–T-150; recent defect ID BUG-431.” Which component is this?
In the context of software testing, which statements (i–v) about foundation, instruction-tuned, and reasoning LLMs are CORRECT?
i. Foundation LLMs are best suited for broad exploratory ideation when test requirements are underspecified.
ii. Instruction-tuned LLMs are strongest at adhering to fixed test case formats (e.g., Gherkin) from clear prompts.
iii. Reasoning LLMs are strongest at multi-step root-cause analysis across logs, defects, and requirements.
iv. Foundation LLMs are optimal for strict policy compliance and template conformance.
v. Instruction-tuned LLMs can follow stepwise reasoning without any additional training or prompting.
Which option BEST differentiates the three prompting techniques?
Which setting can reduce variability by narrowing the sampling distribution during inference?
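To make the idea behind this question concrete: a minimal, illustrative sketch of how a decoding setting such as temperature narrows the sampling distribution. This is a pure softmax toy over hypothetical logits, not any particular model's API; real inference stacks apply the same scaling to token logits.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.
    Lower temperatures concentrate probability mass on the
    highest-scoring tokens, reducing output variability."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # hypothetical token scores
peaked = softmax_with_temperature(logits, 0.2) # sharply peaked distribution
flat = softmax_with_temperature(logits, 1.5)   # flatter distribution
```

With the low temperature, nearly all probability mass lands on the top token, so repeated sampling yields near-deterministic output; the higher temperature spreads mass across alternatives.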
A tester uploads crafted images that steer the LLM into validating non-existent acceptance criteria. Which attack vector is this?
Your team needs to generate 500 API test cases for a REST API with 50 endpoints. You have 10 documented exemplar test cases that follow your organization's standard format, and you want the LLM to generate test cases that follow the pattern your examples demonstrate. Which prompting technique is BEST suited to this scenario?
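The scenario above can be sketched as prompt assembly: exemplars in the house format are prepended to the request so the model imitates their structure. The helper name, exemplar text, and target endpoint below are all hypothetical, chosen only to illustrate the pattern.

```python
def build_few_shot_prompt(instruction, exemplars, target):
    """Assemble a prompt from an instruction, worked examples in the
    organization's standard format, and the new item to cover."""
    parts = [instruction]
    for i, ex in enumerate(exemplars, 1):
        parts.append(f"Example {i}:\n{ex}")
    parts.append(f"Now generate a test case for:\n{target}")
    return "\n\n".join(parts)

# Hypothetical exemplar test cases in a house format.
exemplars = [
    "ID: TC-001 | GET /orders | Expect 200 and a JSON array",
    "ID: TC-002 | POST /orders with empty body | Expect 400",
]
prompt = build_few_shot_prompt(
    "Generate API test cases matching the format of the examples.",
    exemplars,
    "DELETE /orders/{id} when the order does not exist",
)
```

The resulting string would be sent as the user message to whichever LLM API the team uses; the examples anchor the output format without any fine-tuning.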