A development team is building a serverless application where AWS Lambda functions need to invoke Amazon Bedrock foundation models to generate product descriptions. The security team requires that the Lambda functions follow the principle of least privilege when accessing AWS services.

Which approach should the developer implement to grant the Lambda function access to Amazon Bedrock?
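For context on the scenario's moving parts, here is a minimal sketch of a Lambda handler invoking a Bedrock model with boto3, alongside the kind of narrowly scoped IAM statement its execution role would need. The model ID, region, ARN, and event fields are illustrative assumptions, not a prescribed answer.

```python
import json
import boto3

# Illustrative least-privilege statement for the Lambda execution role:
# only bedrock:InvokeModel, only on the one model ARN (placeholder values).
LEAST_PRIVILEGE_STATEMENT = {
    "Effect": "Allow",
    "Action": ["bedrock:InvokeModel"],
    "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
}

bedrock_runtime = boto3.client("bedrock-runtime")


def lambda_handler(event, context):
    """Generate a product description from attributes passed in the event."""
    prompt = f"Write a product description for: {event['product_attributes']}"
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        }),
    )
    body = json.loads(response["body"].read())
    return {"description": body["content"][0]["text"]}
```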
A legal technology company is building a generative AI application to draft contract clauses. The application must utilize a repository of 200,000 proprietary legal documents that is updated with new case files every night.

The Chief Legal Officer has set the following strict requirements:

Attribution: Every generated clause must include a direct citation to the specific case file it was derived from.
Freshness: The model must incorporate data from documents added to the repository within the last 24 hours.
Accuracy: The solution must minimize hallucinations and strictly adhere to the facts present in the proprietary documents.

Which architectural strategy should the GenAI engineer implement to meet these requirements?
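As background on how retrieval-backed generation surfaces source attributions, below is a minimal sketch that queries an Amazon Bedrock knowledge base and collects the citations returned with the generated text. The knowledge base ID and model ARN are placeholder assumptions; this is an illustration of the citation mechanics, not the full architecture the question asks for.

```python
import boto3

# Assumed placeholders: knowledge base ID and generator model ARN.
KNOWLEDGE_BASE_ID = "KB1234567890"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"

agent_runtime = boto3.client("bedrock-agent-runtime")


def draft_clause_with_citations(request_text: str) -> dict:
    """Retrieve relevant case files and generate a clause with source attributions."""
    response = agent_runtime.retrieve_and_generate(
        input={"text": request_text},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    # Each citation lists the retrieved references (including the S3 source URI)
    # that the corresponding span of generated text was grounded in.
    sources = [
        ref["location"]["s3Location"]["uri"]
        for citation in response["citations"]
        for ref in citation["retrievedReferences"]
        if "s3Location" in ref.get("location", {})
    ]
    return {"clause": response["output"]["text"], "sources": sources}
```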
A smart-home device manufacturer wants to use a Large Language Model (LLM) to identify usage trends from millions of daily device logs. Each log contains a specific device ID, the user's exact GPS coordinates, and a timestamped list of voice commands. Privacy regulations prohibit the manufacturer from sharing granular location data or linkable user IDs with the analytics model, but the model still needs to understand regional usage patterns (e.g., "users in Northern Europe use heating more often").

Which data pipeline architecture effectively prepares this data for the model while maintaining regulatory compliance?
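To make the required transformation concrete, here is a minimal pre-processing sketch that drops the direct identifiers and generalizes exact coordinates into a regional bucket before any log reaches the model. The field names and the coordinate-to-region thresholds are illustrative assumptions.

```python
from typing import Any


def coarse_region(lat: float, lon: float) -> str:
    """Map exact GPS coordinates to a broad region label (illustrative bounding box)."""
    if 54.0 <= lat <= 71.0 and 4.0 <= lon <= 32.0:
        return "Northern Europe"
    return "Other"


def anonymize_log(log: dict) -> dict:
    """Strip linkable identifiers and granular location, keep behavioral signal."""
    return {
        # The device ID is dropped entirely: a pseudonymous token would still be
        # linkable across records, which the regulation rules out.
        # Exact coordinates are generalized to a coarse region.
        "region": coarse_region(log["gps"]["lat"], log["gps"]["lon"]),
        # Keep only what trend analysis needs: the commands and a coarse time bucket.
        "commands": [c["command"] for c in log["voice_commands"]],
        "hour_of_day": log["timestamp_hour"],
    }
```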
A legal tech company uses Amazon Bedrock to extract specific liability clauses from uploaded contracts. Users have reported instances where the model "hallucinates" clauses that do not exist in the source text.

Which mechanisms should the developer implement to verify accuracy and reduce these hallucinations?
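As one illustration of a post-generation grounding check, the sketch below verifies that each extracted clause can actually be matched back to the uploaded contract text before it is shown to the user. The normalization and fuzzy-match threshold are assumptions, and this is only one possible verification step, not the complete set of mechanisms the question asks about.

```python
import difflib


def is_grounded(clause: str, source_text: str, threshold: float = 0.9) -> bool:
    """Return True if the extracted clause closely matches a span of the source contract."""
    clause_norm = " ".join(clause.split()).lower()
    source_norm = " ".join(source_text.split()).lower()
    if clause_norm in source_norm:
        return True
    # Fall back to fuzzy matching over sliding windows of comparable length.
    window = len(clause_norm)
    step = max(1, window // 4)
    best = 0.0
    for start in range(0, max(1, len(source_norm) - window + 1), step):
        candidate = source_norm[start:start + window]
        best = max(best, difflib.SequenceMatcher(None, clause_norm, candidate).ratio())
    return best >= threshold


def filter_hallucinated(clauses: list, source_text: str) -> list:
    """Keep only clauses that can be verified against the source document."""
    return [c for c in clauses if is_grounded(c, source_text)]
```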
An online gaming platform is launching a GenAI-powered support assistant using multiple Foundation Models (FMs) available in Amazon Bedrock (including Amazon Titan and Anthropic Claude). The security team has issued strict requirements for the assistant's input handling:

PII Protection: Users must be prevented from submitting email addresses or phone numbers.
Toxicity: Hate speech and insults must be blocked immediately.
Custom Blocking: A specific list of known "cheat codes" and "exploit keywords" must be blocked from entering the model context.
Consistency: These rules must apply uniformly across all used FMs without rewriting application logic for each model.

Which solution should you implement to meet these requirements with the LEAST operational overhead?
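For reference on what a centralized, model-agnostic input policy can express in Bedrock, here is a minimal sketch of creating such a configuration with boto3. The name, blocked-message text, and word list are placeholder assumptions; it is meant only to show how PII, toxicity, and custom-word rules are declared in one place.

```python
import boto3

bedrock = boto3.client("bedrock")

# Placeholder list of known cheat codes / exploit keywords.
BLOCKED_TERMS = ["godmode123", "wallhack-x", "item-dupe-glitch"]

response = bedrock.create_guardrail(
    name="support-assistant-input-guardrail",  # assumed name
    blockedInputMessaging="Sorry, that request can't be processed.",
    blockedOutputsMessaging="Sorry, that response was blocked.",
    # Block email addresses and phone numbers in user input.
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "BLOCK"},
            {"type": "PHONE", "action": "BLOCK"},
        ]
    },
    # Block hate speech and insults.
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    # Block the custom list of cheat/exploit keywords.
    wordPolicyConfig={
        "wordsConfig": [{"text": term} for term in BLOCKED_TERMS]
    },
)
print(response["guardrailId"], response["version"])
```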