AWS Certified Solutions Architect - Associate Practice Test 1

Q: 1

Amazon Redshift is being used by a business to perform analytics and generate customer reports. The corporation has recently obtained an additional 50 terabytes of demographic data on its customers, which is stored in CSV files on Amazon S3. The organization requires a system that can efficiently merge this data and visualize the findings. What recommendation should an architect make to satisfy these requirements?

  1. (Correct) Use Amazon Redshift Spectrum to directly query the data in Amazon S3 and connect that data to the existing data in Amazon Redshift. Use Amazon QuickSight to create the visualizations.
  2. Use Amazon Athena to query the data in Amazon S3 and Amazon QuickSight to combine the data from Athena with the existing data in Amazon Redshift to create the visualizations.
  3. Increase the size of the Amazon Redshift cluster and load the data from Amazon S3. Use Amazon EMR notebooks to query the data and create the visualizations in Amazon Redshift.
  4. Export the data from the Amazon Redshift cluster to Apache Parquet files in Amazon S3. Use Amazon Elasticsearch Service (Amazon ES) to query the data and use Kibana to visualize the results.
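To make the Redshift Spectrum option concrete, here is a minimal sketch using boto3's Redshift Data API. The cluster name, Glue database, IAM role, and table/column names are hypothetical stand-ins, not part of the question:

```python
import boto3

# Run SQL on the existing cluster through the Redshift Data API.
client = boto3.client("redshift-data", region_name="us-east-1")

sqls = [
    # Expose the S3-resident CSV data through Redshift Spectrum; the Glue
    # database and IAM role below are hypothetical placeholders.
    """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS demo_ext
    FROM DATA CATALOG DATABASE 'demographics_db'
    IAM_ROLE 'arn:aws:iam::123456789012:role/SpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS
    """,
    # Join the external (S3) table with an existing Redshift table;
    # Amazon QuickSight can then visualize the combined result.
    """
    SELECT c.customer_id, c.region, d.household_income
    FROM customers c
    JOIN demo_ext.demographics d ON d.customer_id = c.customer_id
    """,
]

client.batch_execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster name
    Database="dev",
    DbUser="awsuser",
    Sqls=sqls,
)
```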

Q: 2

A company stores 200 GB of data on Amazon S3 every month. At the end of each month, the company needs to analyze this data to calculate the number of items sold in each sales territory during the previous month. Which analytics approach is the most cost-effective option for the company?

Explanation: See the second use case at https://aws.amazon.com/glue/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc "You can use the AWS Glue Data Catalog to quickly discover and search multiple AWS datasets without moving the data. Once the data is cataloged, it is immediately available for search and query with Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum"

  1. Create an Amazon Elasticsearch Service (Amazon ES) cluster. Query the data in Amazon ES. Visualize the data using Kibana.
  2. (Correct) Create a table in the AWS Glue Data Catalog. Query the data in Amazon S3 using Amazon Athena. Visualize the data in Amazon QuickSight.
  3. Create an Amazon EMR cluster. Query the data using Amazon EMR and store the results in Amazon S3. Visualize the data in Amazon QuickSight.
  4. Create an Amazon Redshift cluster. Query the data in Amazon Redshift and upload the results to Amazon S3. Visualize the data in Amazon QuickSight.
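A rough sketch of the correct option with boto3: once a table exists in the Glue Data Catalog, Athena can query the S3 data in place. The database, table, column names, and results bucket below are hypothetical:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical Glue database/table and results bucket.
response = athena.start_query_execution(
    QueryString="""
        SELECT sales_territory, COUNT(*) AS items_sold
        FROM monthly_sales            -- table defined in the Glue Data Catalog
        WHERE sale_month = '2024-01'
        GROUP BY sales_territory
    """,
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])  # poll this ID for results
```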

Q: 4

A company intends to migrate a TCP-based application into its virtual private cloud (VPC). The application is made available to the public through an unsupported TCP port via a physical device in the company's data center. This public endpoint has a latency of less than 3 milliseconds and can handle up to 3 million requests per second. The enterprise needs the new public endpoint in AWS to achieve the same performance. Which solution should the architect recommend to meet this requirement?

Explanation: https://aws.amazon.com/elasticloadbalancing/network-load-balancer Network Load Balancer operates at the connection level (Layer 4), routing connections to targets (Amazon EC2 instances, microservices, and containers) within Amazon VPC based on IP protocol data. It is ideal for load balancing TCP and UDP traffic and can handle millions of requests per second with extremely low latency. Network Load Balancer is optimized to handle sudden and volatile traffic patterns and uses a single static IP address per Availability Zone. It integrates with other popular AWS services such as Auto Scaling, Amazon EC2 Container Service (ECS), AWS CloudFormation, and AWS Certificate Manager (ACM).

  1. (Correct) Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible through the TCP port required by the application.
  2. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible through the TCP port required by the application.
  3. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as the origin.
  4. Deploy an Amazon API Gateway API configured with the TCP port required by the application. Configure AWS Lambda functions with provisioned concurrency to handle the requests.
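For illustration, a Network Load Balancer listening on a custom TCP port can be sketched with boto3 as follows; the subnet and VPC IDs and the port number are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing NLB across two public subnets (hypothetical IDs).
nlb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",              # Layer 4: TCP/UDP, millions of requests/sec
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
)["LoadBalancers"][0]

# Target group and listener on the application's custom TCP port.
tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=7345,                   # hypothetical custom TCP port
    VpcId="vpc-0ccc3333",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TCP",
    Port=7345,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```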

Q: 5

An enterprise has two VPCs within the same AWS account that are located in the us-west-2 Region. The company needs to allow network communication between these VPCs. Approximately 500 GB of data is transferred between the VPCs each month. Which approach is the most cost-effective for connecting these VPCs?

Explanation: C is correct.

  • A. Transit Gateway links a remote network (e.g. on-premises) to one or more AWS VPCs, typically via a site-to-site VPN connection; it is more than this scenario needs.
  • B. A Site-to-Site VPN tunnel connects a VPC to a remote network (e.g. on-premises), not one VPC to another.
  • C. A VPC peering connection routes traffic directly between the two VPCs. - Correct
  • D. Direct Connect connects a VPC to a remote network (e.g. on-premises).
    1. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use Transit Gateway to communicate between the VPCs.
    2. Implement an AWS site-to-site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for communication between the VPCs.
    3. (Correct) Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for communication between the VPCs.
    4. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for communication between the VPCs.
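A minimal boto3 sketch of the peering option, with hypothetical VPC IDs, route table IDs, and CIDR blocks:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Peer the two VPCs (hypothetical IDs).
pcx = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111", PeerVpcId="vpc-0bbb2222"
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Same-account, same-Region peering still has to be accepted explicitly.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx)

# Route each VPC's traffic for the other VPC's CIDR over the peering link.
ec2.create_route(RouteTableId="rtb-0ccc3333",
                 DestinationCidrBlock="10.1.0.0/16",  # VPC B's CIDR
                 VpcPeeringConnectionId=pcx)
ec2.create_route(RouteTableId="rtb-0ddd4444",
                 DestinationCidrBlock="10.0.0.0/16",  # VPC A's CIDR
                 VpcPeeringConnectionId=pcx)
```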

Q: 7

An organization's on-premises data center has reached its storage capacity, and the organization wants to extend its storage onto AWS while minimizing bandwidth costs and keeping retrieval of frequently accessed data fast and free. Which recommendation satisfies these requirements?

Explanation: B is correct. AWS Storage Gateway with cached volumes stores the full data set in Amazon S3 while retaining a local cache of frequently accessed data, so hot data is served quickly from the cache with no retrieval fees and the exhausted on-premises storage is relieved. A is wrong because S3 Glacier expedited retrievals are neither free nor instant. C and D use stored volumes, which keep the primary data on-premises and therefore do not address the storage limit.

  1. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
  2. (Correct) Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
  3. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
  4. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point- in-time snapshots of the data to Amazon S3.
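As a sketch of the cached-volumes option, the boto3 call below creates an S3-backed cached volume on an already-activated volume gateway; the gateway ARN and all other values are hypothetical:

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")

# Hypothetical gateway ARN; the gateway must already be activated in
# "cached volumes" mode, with cache and upload-buffer disks configured.
volume = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B",
    VolumeSizeInBytes=10 * 1024**4,          # 10 TiB volume backed by S3
    TargetName="demographics-volume",        # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.1.25",          # gateway VM interface clients connect to
    ClientToken="create-demographics-vol-1", # idempotency token
)
print(volume["VolumeARN"])
```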

Q: 8

A newly acquired company is required to establish its infrastructure on AWS and transfer multiple applications to the cloud within a month of being purchased. The transfer of each application involves transmitting approximately 50 TB of data. Once the transfer is complete, the company and its parent organization require a secure network connection with constant throughput between their data centers and applications. A solutions architect needs to ensure that the data transfer occurs only once and that the network connection is maintained. What solution can meet these requirements?

Explanation: The correct solution is C: use AWS Snowball for the one-time bulk transfer and AWS Direct Connect for the ongoing connection between the company's data centers and its applications. AWS Direct Connect is a dedicated network connection that does not traverse the public internet, providing better security than Site-to-Site VPN, which is transmitted over the internet. Direct Connect also offers better connection options and performance, reducing the likelihood of latency issues and bottlenecks. Since cost is not mentioned in the given scenario, Direct Connect can be considered the better option over VPN. According to AWS, "While in transit, your network traffic remains on the AWS global network and never touches the public internet. This reduces the chance of hitting bottlenecks or unexpected increases in latency." (source: https://aws.amazon.com/directconnect/)

  1. AWS Direct Connect for both the initial transfer and ongoing connectivity.
  2. AWS Site-to-Site VPN for both the initial transfer and ongoing connectivity.
  3. (Correct) AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity.
  4. AWS Snowball for the initial transfer and AWS Site-to-Site VPN for ongoing connectivity.
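A hedged sketch of the one-time Snowball import with boto3; the address fields, bucket, and IAM role are hypothetical, and a real job would also specify a KMS key and a capacity/device type appropriate to the order:

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Hypothetical shipping address for the device.
address_id = snowball.create_address(Address={
    "Name": "Data Center Ops", "Street1": "100 Example Rd",
    "City": "Anytown", "StateOrProvince": "VA",
    "PostalCode": "20147", "Country": "US",
})["AddressId"]

# One import job moves the ~50 TB bulk data set into S3 without consuming
# network bandwidth; Direct Connect then carries the ongoing traffic.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::example-migration-bucket"}]},
    AddressId=address_id,
    RoleARN="arn:aws:iam::123456789012:role/SnowballImportRole",
    ShippingOption="SECOND_DAY",
    Description="Initial 50 TB transfer for acquired-company app",
)
print(job["JobId"])
```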

Q: 10

A business needs to deploy file storage for an upcoming project that can be mounted as a drive on on-premises desktop computers. Before granting access to this storage, the file server must authenticate users against an Active Directory domain. Which service can deploy storage as a drive on workstations while also supporting Active Directory authentication?

Explanation: D is correct.

  • A. Amazon S3 Glacier is for data archival.
  • B. AWS DataSync is for large file transfers from on-premises to a VPC; the DataSync agent software comes preinstalled on AWS Snowcone for offline transfer.
  • C. AWS Snowball Edge is for offline transfer of large data sets to a VPC.
  • D. AWS Storage Gateway is a hybrid cloud storage service that keeps copied data continuously updated between on-premises systems and an AWS storage service; its file gateway exposes SMB shares that authenticate against Active Directory. - Correct

  1. Amazon S3 Glacier
  2. AWS DataSync
  3. AWS Snowball Edge
  4. (Correct) AWS Storage Gateway
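To illustrate the Storage Gateway answer, the sketch below joins a file gateway to an AD domain and exposes an S3 bucket as an SMB share that workstations can map as a drive; every identifier (gateway ARN, domain, role, bucket) is a hypothetical placeholder:

```python
import boto3

sgw = boto3.client("storagegateway", region_name="us-east-1")
gateway_arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B"

# Join the file gateway to the Active Directory domain so SMB clients
# can authenticate with their existing domain credentials.
sgw.join_domain(
    GatewayARN=gateway_arn,
    DomainName="corp.example.com",
    UserName="svc-gateway-join",
    Password="example-password",  # fetch from Secrets Manager in practice
)

# Expose an S3 bucket as an SMB share that workstations can map as a drive.
share = sgw.create_smb_file_share(
    ClientToken="create-team-share-1",
    GatewayARN=gateway_arn,
    Role="arn:aws:iam::123456789012:role/SgwS3AccessRole",
    LocationARN="arn:aws:s3:::example-team-share",
    Authentication="ActiveDirectory",
)
print(share["FileShareARN"])
```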

Q: 11

An organization with an on-premises application moves to AWS to increase application flexibility and availability. The current design makes heavy use of a Microsoft SQL Server database. The company wants to investigate other database solutions and, if necessary, migrate the database engines. The development team runs a full copy of the production database every four hours to create a test database, and users experience delays during this period. Which database should a solutions architect suggest as a replacement?

Explanation: Answer D. In a Multi-AZ deployment the snapshot is taken from the secondary DB instance: "The I/O suspension typically lasts about one minute. You can avoid the I/O suspension if the source DB instance is a Multi-AZ deployment, because in that case the snapshot is taken from the secondary DB instance." https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

  1. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore from mysqldump for the test database.
  2. Use Amazon Aurora with Multi-AZ Aurora Replicas and restore snapshots from Amazon RDS for the test database.
  3. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas, and use the standby instance for the test database.
  4. (Correct) Use Amazon RDS for SQL Server with a Multi-AZ deployment and read replicas, and restore snapshots from RDS for the test database.
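A small boto3 sketch of the snapshot-based refresh described in the explanation; the instance and snapshot identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# With Multi-AZ, this snapshot is taken from the standby instance,
# so the primary sees no I/O suspension and users see no delay.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-sqlserver",     # hypothetical Multi-AZ instance
    DBSnapshotIdentifier="prod-sqlserver-4h",
)
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier="prod-sqlserver-4h")

# Restore the snapshot as an isolated test database.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="test-sqlserver",
    DBSnapshotIdentifier="prod-sqlserver-4h",
)
```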

Q: 13

A business has launched a mobile multiplayer game. Real-time monitoring of players' latitude and longitude positions is necessary for the game, which requires data storage capable of quick updates and location retrieval. Currently, the game stores location data on an Amazon RDS for PostgreSQL DB instance with read replicas, but during high usage times, the database is unable to handle the speed required for reading and writing changes. The game's user base is rapidly growing. What should a solutions architect do to optimize the data tier's performance?

Explanation: The correct option is D. https://aws.amazon.com/elasticache/redis/ "Amazon ElastiCache for Redis offers purpose-built in-memory data structures and operators to manage real-time geospatial data at scale and speed. You can use ElastiCache for Redis to add location-based features such as drive time, drive distance, and points of interest to your applications."

  1. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
  2. Migrate from Amazon RDS to Amazon Elasticsearch Service (Amazon ES) with Kibana.
  3. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
  4. (Correct) Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
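For a feel of the geospatial operators the explanation cites, here is a minimal redis-py sketch (GEOSEARCH needs Redis 6.2+ and redis-py 4+); the endpoint, key name, and coordinates are hypothetical:

```python
import redis  # redis-py; connect to the ElastiCache for Redis endpoint

# Hypothetical ElastiCache endpoint.
r = redis.Redis(host="game-geo.abc123.use1.cache.amazonaws.com", port=6379)

# Record each player's longitude/latitude in a geospatial set.
r.geoadd("players", (-122.4194, 37.7749, "player:1001"))
r.geoadd("players", (-122.2711, 37.8044, "player:1002"))

# Find every player within 25 km of a point -- the kind of fast
# location query that strained the PostgreSQL instance.
nearby = r.geosearch(
    "players", longitude=-122.4194, latitude=37.7749, radius=25, unit="km"
)
print(nearby)  # e.g. [b'player:1001', b'player:1002']
```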
Q: 14

A company requires a secure connection between its on-premises infrastructure and AWS. The connection has low bandwidth requirements and will carry a limited amount of traffic. What is the most cost-effective way to establish this kind of connection?

Explanation: Option D. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). AWS Client VPN enables you to securely connect users to AWS or on-premises networks. https://aws.amazon.com/vpn/faqs/

  1. Implement a client VPN.
  2. Implement AWS Direct Connect.
  3. Implement a bastion host on Amazon EC2.
  4. (Correct) Implement an AWS Site-to-Site VPN connection.
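A boto3 sketch of standing up the Site-to-Site VPN pieces; the on-premises IP, ASN, and VPC ID are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Customer gateway = the on-premises router's public IP (hypothetical).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1"
)["CustomerGateway"]["CustomerGatewayId"]

# Virtual private gateway attached to the VPC (hypothetical VPC ID).
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]["VpnGatewayId"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw, VpcId="vpc-0aaa1111")

# The Site-to-Site VPN connection itself: two IPsec tunnels over the
# internet, billed per connection-hour -- cheap for low-bandwidth needs.
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw,
    VpnGatewayId=vgw,
    Type="ipsec.1",
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```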
Q: 15

To ensure that a business's web-based application can handle times of heavy demand, a solutions architect must guarantee that the text document storage component can scale to meet the application's demand at all times. The online application will operate on Amazon EC2 instances distributed across several Availability Zones and provide access to a collection of over 900 TB of text content. The corporation is also concerned about the total cost of the solution. Which storage system satisfies these criteria most cost-effectively?

  1. Amazon Elastic Block Store (Amazon EBS)
  2. Amazon Elastic File System (Amazon EFS)
  3. Amazon Elasticsearch Service (Amazon ES)
  4. Amazon S3

Q: 17

The development team requires a website that is accessible to other teams, containing HTML, CSS, client-side JavaScript, and graphics. What is the most cost-effective form of website hosting for this purpose?

Explanation: Option B. Static vs. dynamic websites: a static website serves prebuilt pages written in simple languages such as HTML, CSS, or JavaScript, with no server-side processing per user. Pages are returned by the server unchanged, so static websites are fast, involve no database interaction, and are cheaper because the host does not need to support server-side processing in different languages. A dynamic website builds pages at runtime according to the user's request, using server-side scripting languages such as PHP, Node.js, or ASP.NET; this allows updates and database interaction but is slower. Since this site contains only static assets, Amazon S3 static website hosting is the cheapest option.

  1. Containerize the website and host it in AWS Fargate.
  2. (Correct) Create an Amazon S3 bucket and host the website there.
  3. Deploy a web server on an Amazon EC2 instance to host the website.
  4. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
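A minimal boto3 sketch of the S3 option; the bucket name and page are hypothetical, and a real deployment would also disable Block Public Access and attach a public-read bucket policy:

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-team-site"  # hypothetical bucket name

s3.create_bucket(Bucket=bucket)

# Turn on static website hosting; S3 then serves the files directly,
# with nothing to run or pay for beyond storage and requests.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page; ContentType matters so browsers render it as HTML.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body><h1>Team docs</h1></body></html>",
    ContentType="text/html",
)
```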
Q: 18

A business utilizes Amazon Redshift to power its data warehouse and is seeking a recommendation from a solutions architect for ensuring the long-term durability of its data in the event of component failure.

Explanation: B is the answer. Does Amazon Redshift support Multi-AZ deployments? Currently, Amazon Redshift only supports single-Region deployments. To set up a disaster recovery (DR) configuration, you can enable cross-Region snapshot copy on your cluster, which replicates all snapshots from your cluster to another AWS Region. In the event of a DR event, the snapshots in the replica Region can be restored to create a new cluster. Amazon Redshift also supports cross-Region data sharing, where a consumer cluster can access live data in a producer cluster in another Region; this is supported only with Amazon Redshift Serverless and RA3.

  1. Enable concurrency scaling.
  2. (Correct) Enable cross-Region snapshots.
  3. Increase the data retention period.
  4. Deploy Amazon Redshift in Multi-AZ.
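The cross-Region snapshot option from the explanation maps to a single API call; the cluster name, Regions, and retention period are hypothetical:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Automatically copy every snapshot of the cluster to a second Region;
# if the home Region is lost, a new cluster can be restored there.
redshift.enable_snapshot_copy(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    DestinationRegion="us-west-2",
    RetentionPeriod=7,  # keep copied automated snapshots for 7 days
)
```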
Q: 19

A business provides its customers with an API that automates tax calculations based on item pricing, but it experiences delayed response times due to increased query volumes during the Christmas season. To address this, a solutions architect needs to design a scalable and elastic system. What should the solutions architect do to achieve this?

Explanation: Option B.

  • A. A single EC2 instance is neither scalable nor elastic.
  • B. Correct: API Gateway is scalable and elastic, and so is Lambda.
  • C. Two fixed EC2 instances behind a load balancer are not elastic; that would need an Auto Scaling group.
  • D. An EC2-hosted backend has neither elasticity nor scalability.

  1. Provide an API hosted on an Amazon EC2 instance. The EC2 instance performs the required computations when the API request is made.

  2. (Correct) Design a REST API using Amazon API Gateway that accepts the item names. API Gateway passes the item names to AWS Lambda for tax computations.

  3. Create an Application Load Balancer that has two Amazon EC2 instances behind it. The EC2 instances will compute the tax on the received item names.
  4. Design a REST API using Amazon API Gateway that connects with an API hosted on an Amazon EC2 instance. API Gateway accepts and passes the item names to the EC2 instance for tax computations.
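As a sketch of option 2's compute side, a hypothetical Lambda handler behind an API Gateway REST API might look like this (the tax rate and price table are made-up illustration values):

```python
# Hypothetical Lambda handler behind an API Gateway REST API.
# API Gateway scales the front door; Lambda scales the computation.
import json

TAX_RATE = 0.08  # assumed flat rate for illustration

# Hypothetical price catalog; a real service would look prices up in a data store.
PRICES = {"keyboard": 49.99, "monitor": 189.00}

def lambda_handler(event, context):
    item = json.loads(event["body"])["item_name"]
    price = PRICES.get(item)
    if price is None:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown item"})}
    return {
        "statusCode": 200,
        "body": json.dumps({"item": item, "tax": round(price * TAX_RATE, 2)}),
    }
```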

Q: 21

A solutions architect is designing a VPC architecture with multiple subnets: six subnets deployed across two Availability Zones and categorized as public, private, and database-specific. Access to the database must be restricted to Amazon EC2 instances running in the private subnets. What solution meets these requirements?

Explanation: Answer C is correct. Security groups are stateful, and all inbound traffic is blocked by default. If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again. You cannot block or deny specific sources using security groups (use network ACLs for that), which is why B does not work.

  1. Create a new route table that excludes the route to the public subnets' CIDR blocks. Associate the route table with the database subnets.
  2. Create a security group that denies ingress from the security group used by instances in the public subnets. Attach the security group to an Amazon RDS DB instance.
  3. (Correct) Create a security group that allows ingress from the security group used by instances in the private subnets. Attach the security group to an Amazon RDS DB instance.
  4. Create a new peering connection between the public subnets and the private subnets. Create a different peering connection between the private subnets and the database subnets.
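A boto3 sketch of the correct security-group rule: the database SG admits traffic only from the SG used by instances in the private subnets. Both group IDs and the port are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical IDs: the app servers' security group in the private subnets
# and the database security group attached to the RDS instance.
app_sg = "sg-0aaa1111bbbb22223"
db_sg = "sg-0ccc3333dddd44445"

# Allow MySQL/Aurora traffic into the database SG only from the app SG.
# Referencing a security group instead of a CIDR keeps the rule tied to
# the instances in the private subnets, wherever their IPs land.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": app_sg}],
    }],
)
```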

Q: 22

A business is implementing a web gateway and aims to restrict public access to the application's online component only. To achieve this, the VPC has been created with two public subnets and two private subnets. The application will be hosted on multiple Amazon EC2 instances managed through an Auto Scaling group, and SSL termination must be offloaded from the EC2 instances. What steps should a solutions architect take to ensure compliance with these requirements?

Explanation: Option C is the answer. https://aws.amazon.com/elasticloadbalancing/application-load-balancer/: "Application Load Balancer simplifies and improves the security of your application, by ensuring that the latest SSL/TLS ciphers and protocols are used at all times." That covers SSL termination, so A and B are out. The load balancer must be placed in the public subnets, so D is out.

  1. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
  2. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the public subnets and associate it with the Application Load Balancer.
  3. (Correct) Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
  4. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in the private subnets and associate it with the Application Load Balancer.
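To show the SSL-termination point in option 3, the sketch below adds an HTTPS listener to an ALB using an ACM certificate; all ARNs are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical ARNs for an ALB in the public subnets, a target group
# registered with the Auto Scaling group, and an ACM certificate.
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/50dc6c495c0c9188"
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/73e2d6bc24d8a067"
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/abcd1234-ab12-cd34-ef56-abcdef123456"

# HTTPS listener on the ALB: TLS terminates here, and traffic is
# forwarded to the instances in the private subnets.
elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```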