25 AWS Architect Associate Mock Exam Questions Free

My post on 25 Cloud Practitioner Mock Exam Questions proved really popular with readers. So I thought I’d put out another one around the Architect Associate course.

If you have the time, after going through these exam questions, check out that post as it has some great tips and techniques on how to approach questions and apply common sense logic to answering them.

I put these exam questions together based on what I learned and experienced taking the exam itself.

So without further ado, here are my 25 AWS Architect Associate mock exam questions totally free of charge.

Mock Exam Questions

Your company is creating a photo sharing app. One of the requirements is that users can upload and store multiple revisions of the same photo. What storage solution would be best placed to meet this requirement?

a) Amazon S3

b) Amazon EFS

c) Amazon EBS

d) RDS with binary blob versioning

Answer a

Amazon S3 has versioning built in, although it is not enabled by default. I wrote an entire two-part S3 Ultimate Guide blog post where I give a detailed explanation of how to enable versioning on S3 and much more.
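To see what versioning buys you, here is a minimal in-memory sketch (plain Python, no AWS calls) of how a versioning-enabled bucket accumulates revisions of the same key instead of overwriting them:

```python
from collections import defaultdict

# Toy model of a versioning-enabled bucket: every put for the same key
# is kept as a new revision rather than replacing the previous one.
bucket = defaultdict(list)

def put_object(key, body):
    bucket[key].append(body)      # keep every revision
    return len(bucket[key]) - 1   # index of the version just written

def get_object(key, version=None):
    revisions = bucket[key]
    return revisions[-1] if version is None else revisions[version]

put_object("photos/cat.jpg", b"v1 bytes")
put_object("photos/cat.jpg", b"v2 bytes")

print(get_object("photos/cat.jpg"))     # latest revision
print(get_object("photos/cat.jpg", 0))  # original upload
```

Real S3 works with opaque version IDs rather than list indices, but the idea is the same: old revisions stay retrievable until you delete them explicitly.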

You’re creating an application in your startup that lets users sign up and consume online elearning courses. You want to deliver content on demand but you have requirements for it to be highly scalable and the most cost efficient solution. Select the most appropriate AWS services to meet these requirements. Choose 2.

a) Lambda

b) S3

c) EC2

d) EFS

Answer a & b

You could host the files on S3 and use Lambda functions wrapped in an API to serve content to users. You would only be charged when the service is used, and it would scale automatically.

Which file storage solution would allow you to share data between EC2 instances ?

a) EFS

b) EBS

c) Storage Gateway

d) S3

Answer a

EFS is a file storage service that can be mounted across multiple EC2 instances at the same time, which makes it ideal for sharing data. EBS, by contrast, is block storage that can only be attached to one EC2 instance at a time.

You have a situation where your client wants to backup and archive thousands of video files. They may need to be retrieved but it doesn’t matter if the retrieval process takes hours. Which storage solution would be most appropriate on a budget.

a) S3

b) S3 Infrequent Access

c) Glacier

d) Glacier Expedited Retrieval

Answer c

Standard Glacier storage would be the most appropriate. The S3 options would cost too much, and since the retrieval isn't time-bound, Glacier expedited retrieval seems unnecessary.
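A quick back-of-the-envelope comparison makes the cost difference concrete. The per-GB prices below are illustrative example figures only, not current AWS pricing:

```python
# ILLUSTRATIVE monthly per-GB storage prices (check current AWS pricing
# for real numbers) -- the point is the relative gap between classes.
PRICE_PER_GB = {
    "s3_standard": 0.023,
    "s3_infrequent_access": 0.0125,
    "glacier": 0.004,
}

def monthly_cost(gb, storage_class):
    return gb * PRICE_PER_GB[storage_class]

archive_gb = 5000  # e.g. thousands of video files
for cls in sorted(PRICE_PER_GB, key=PRICE_PER_GB.get):
    print(f"{cls}: ${monthly_cost(archive_gb, cls):,.2f}/month")
```

Retrieval fees work the other way round (Glacier charges more to get data out, especially quickly), which is why it only wins when access is rare and not time-sensitive.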

You want to connect instances in your VPC to services hosted in AWS without using the internet, what solution best fits this scenario?

a) attach a VPC endpoint with routes 0.0.0.0/0

b) use a NAT gateway with routes 0.0.0.0/0

c) use an Internet gateway

d) move the services into your VPC

Answer a

A VPC endpoint will provide access to services hosted on AWS without the traffic needing to traverse the internet.

P.S. When I was revising, I found NATs and VPCs the most difficult part to understand, possibly because I don't come from a networking background.

You want to scale your SaaS product based on the demand you receive. You cannot predict a pattern for spikes in traffic, but you need at least 2 instances running at any one time. How would you set up your system to handle spikes in traffic? Choose 2.

a) Put your instances behind an application load balancer

b) Create an auto scaling group with a minimum number of instances set to 2

c) Create 2 EC2 instances and manually add more based on CloudWatch alerts

d) Create 2 EC2 M4 10xLarge instance to have enough processing power to handle peak loads

Answer a & b

A load balancer will distribute traffic between the instances that sit behind it, while an auto scaling group will add more instances based on demand, e.g. a scaling policy could launch a new EC2 instance when CPU load rises above 90%.
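As a sketch, these are the kinds of request parameters you could pass to boto3's create_auto_scaling_group and put_scaling_policy calls. The group, template and target-group names here are made up for the example:

```python
# Hypothetical parameters for creating an Auto Scaling group behind an ALB.
asg_params = {
    "AutoScalingGroupName": "web-asg",                       # placeholder name
    "LaunchTemplate": {"LaunchTemplateName": "web-template", # placeholder name
                       "Version": "$Latest"},
    "MinSize": 2,           # requirement: at least 2 instances at all times
    "MaxSize": 10,          # upper bound for cost control
    "DesiredCapacity": 2,
    # Placeholder ARN for the load balancer's target group:
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/web/abc123"
    ],
}

# A target-tracking policy replaces manual CloudWatch-alert-driven scaling:
# the group grows and shrinks to hold average CPU near the target value.
scaling_policy = {
    "PolicyName": "cpu-target",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 90.0,
    },
}
```

The MinSize of 2 satisfies the "always at least 2 instances" requirement even when traffic is quiet, while the policy handles the unpredictable spikes.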

You want to upload massive amounts of files to your S3 bucket. How do you ensure that the performance of the bucket is optimised to handle this volume of data?

a) prefix the file names of the files uploaded to the bucket.

b) postfix the file names of the files uploaded to the bucket.

c) use S3 region replication

d) encrypt the files prior to uploading to S3

Answer a

Prefixing the file names prevents S3 from grouping them all under a single partition and ensures the bucket performs as efficiently as it can, allowing S3 to handle parallel requests on the same bucket.
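At the time this exam guidance was current, S3 partitioned on key prefix, so spreading keys across varied prefixes helped throughput (AWS has since raised per-prefix limits, but the exam answer reflects the older guidance). A minimal sketch, using a hash prefix as my own illustrative choice:

```python
import hashlib

# Prepend a short hash-derived prefix so sequentially named uploads
# don't all cluster under one key prefix.
def prefixed_key(filename):
    digest = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{digest}/{filename}"

for i in range(5):
    print(prefixed_key(f"photo-{i:06d}.jpg"))
```

The prefix is deterministic per filename, so you can always reconstruct the key when you need to fetch the object back.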

You are creating a video sharing website running on EC2 instances, and your site will have a small number of videos that will be accessed potentially thousands of times from various locations around the globe. What service could you use to ensure that they are served quickly and to reduce load on the underlying EBS volume they are hosted on?

a) CloudFront

b) ElastiCache

c) Create multiple EBS volumes

d) Chop the videos into shorter lengths.

Answer a

CloudFront enables you to cache content at edge locations across the globe, dramatically reducing the latency between a user requesting a resource and it being delivered.

As a solutions architect, you need to store files and create metadata based on those files. This process can take place potentially thousands of times in a short amount of time. Which solution would best meet your needs?

a) Upload the files to an EC2 instance which processes them and then eventually pushes them to S3.

b) Push the files to S3 and run a Lambda function triggered by the upload.

c) Upload the files to an EC2 instance which processes them and then stores them on the instance's EBS volume.

d) Store the files in an RDS Aurora instance as blobs.

Answer b

Storing the files in S3 and triggering a Lambda function on each upload will scale massively and is the most cost-efficient solution.
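A sketch of what such a Lambda handler might look like: it pulls the bucket and key out of the S3 event and returns metadata for the object. In a real function you would fetch the object with boto3 and persist the metadata somewhere (DynamoDB, another bucket, etc.); the fake event below just mimics the shape S3 sends:

```python
import json
import urllib.parse

# Hypothetical Lambda handler fired by an S3 "object created" event.
def handler(event, context=None):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # S3 URL-encodes keys in event payloads, so decode before use.
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    return {
        "bucket": bucket,
        "key": key,
        "size_bytes": record["s3"]["object"].get("size", 0),
    }

# Minimal fake event in the shape S3 delivers to Lambda.
fake_event = {
    "Records": [{"s3": {"bucket": {"name": "uploads"},
                        "object": {"key": "video+1.mp4", "size": 1024}}}]
}
print(json.dumps(handler(fake_event)))
```

Because each upload triggers its own invocation, thousands of files arriving in a short window simply fan out into thousands of parallel executions, with no servers to pre-provision.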

Your company sells temporary backup services to its clients. After a client's data has been stored on your product for 6 months, the data is deleted. Which AWS service would best meet your needs in this case?

a) EC2

b) CloudFormation

c) S3 bucket with a life cycle policy to delete files after 6 months

d) S3 bucket with life cycle policy to move files into Glacier after 6 months

Answer c

S3 is extremely versatile for versioning, archival and data-removal policies. You can create a policy to delete all content from a bucket that is 6 months or older, and AWS handles the rest.
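Such a rule is just a small configuration document. This is the shape you could supply to boto3's put_bucket_lifecycle_configuration (the rule ID is a made-up example name):

```python
# Lifecycle configuration that expires objects 180 days (~6 months)
# after creation, across the whole bucket.
lifecycle = {
    "Rules": [{
        "ID": "expire-after-6-months",   # example rule name
        "Status": "Enabled",
        "Filter": {"Prefix": ""},        # empty prefix = whole bucket
        "Expiration": {"Days": 180},
    }]
}
```

Swapping the Expiration block for a Transition block (with a StorageClass of GLACIER) would give you option d's behaviour instead, archiving rather than deleting.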

Your company builds complex architectural solutions using AWS for multiple clients. This can take a lot of time and the pattern for it is well established. What service could you use to expedite this process?

a) CloudWatch

b) CloudTrail

c) BeanStalk

d) CloudFormation

Answer d

CloudFormation allows you to script the deployment of AWS resources and infrastructure. This means you can create a complex architectural setup, and tear it down again, simply by running a CloudFormation template.

You're working on a product that processes large video files. When that processing is complete, you need to be notified. Which tool is most appropriate?

a) AWS CLI

b) AWS SQS

c) AWS SNS

d) AWS IAM

Answer c

The Simple Notification Service (SNS) allows you to send notifications when triggered, making it the most suitable solution to the problem described above.

Select an appropriate Database technology that would work well with a software product that is regularly changing its underlying schema definitions.

a) DynamoDB

b) Aurora

c) MySQL

d) Oracle

Answer a

DynamoDB is a NoSQL database that lets you store JSON data directly. It is well suited to software products whose schema is not fixed, and it also scales extremely well.
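To illustrate the schema flexibility: two items destined for the same DynamoDB table only have to agree on the key attribute ("photo_id" here is a made-up name); every other attribute can differ per item, which is why schema changes are painless:

```python
# Two items for the same hypothetical table. Only the partition key
# ("photo_id", an invented attribute name) must exist on every item.
item_v1 = {"photo_id": "p-1", "title": "Sunset"}
item_v2 = {"photo_id": "p-2", "title": "Beach",
           "tags": ["sea", "sand"], "width": 1920}  # new attributes, no migration

table = [item_v1, item_v2]  # stand-in for the table contents
shared = set(item_v1) & set(item_v2)
print(shared)
```

In a relational database, adding the tags and width columns would require an ALTER TABLE and a decision about existing rows; in DynamoDB the old items simply don't have those attributes.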

You have a requirement to back up data stored in an S3 bucket to a different geographic location to cover disaster recovery scenarios. Which option best solves this requirement?

a) Run an EC2 instance with a scheduled task to copy the S3 bucket content to another S3 bucket.

b) Download the bucket's content and copy it to a data centre located in a different geographic region.

c) Enable Cross-Region replication that will copy the contents of the bucket to another region.

d) Manually copy the content from one bucket to another bucket located in a different region.

Answer c

S3 has cross-region replication disabled by default. When enabled, it copies the contents of one bucket to another preconfigured bucket. Bucket versioning must be enabled for this feature to work. You should also bear in mind that your storage costs will double, since the data is stored twice.

You have a lambda function that needs to access an S3 bucket. However it cannot access this resource over the internet. Which AWS service will accommodate this?

a) NAT gateway

b) VPC endpoint

c) Storage Gateway

d) IAM Role

Answer b

A VPC endpoint allows you to privately connect your VPC to supported AWS services, which means the traffic is not required to go through an internet gateway.

Your company has a large social network presence and as a result has access to large amounts of data via api feeds from those social networks. If you wanted to analyse and process them in real time what AWS service would best suit your needs?

a) AWS Kinesis

b) AWS SQS

c) EC2 instance with a Message queue installed

d) There is no service for this scenario

Answer a

AWS Kinesis is specifically designed for this scenario. You can process data feeds in real time.

You want to monitor Lambda activity because you want to tightly control how many invocations of your Lambda functions are executed when live. What service would you use to monitor Lambda invocations?

a) CloudTrail

b) CloudWatch

c) LogTrail

d) The Management Console

Answer b

CloudWatch automatically records metrics for every Lambda function, including invocation counts, errors and throttles, so you can track, and alarm on, how often your functions are executed. CloudTrail logs API activity rather than invocation metrics, so it is less suited to this scenario.
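Lambda publishes its invocation counts as CloudWatch metrics under the AWS/Lambda namespace. Here is a sketch of the parameters you could pass to CloudWatch's GetMetricStatistics call (via boto3, say) to count invocations over the last hour; the function name is a placeholder:

```python
import datetime

# Parameters for a hypothetical GetMetricStatistics request counting
# Lambda invocations in 5-minute buckets over the last hour.
now = datetime.datetime.now(datetime.timezone.utc)
params = {
    "Namespace": "AWS/Lambda",
    "MetricName": "Invocations",
    "Dimensions": [{"Name": "FunctionName", "Value": "my-function"}],  # placeholder
    "StartTime": now - datetime.timedelta(hours=1),
    "EndTime": now,
    "Period": 300,          # bucket width in seconds
    "Statistics": ["Sum"],  # total invocations per bucket
}
```

Pairing this metric with a CloudWatch alarm is how you would get notified when invocation counts drift above the level you intended.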

Which of these services could you use to run docker instances out of the box with no extra work required?

a) Elastic beanstalk

b) EC2

c) Lambda

d) RedShift

Answer a

Elastic Beanstalk provides a convenient, scalable way to host Docker containers with little to no configuration.

Which EBS volume type is most appropriate for running an EC2 hosted database on?

a) EBS Provisioned IOPS SSD (io1)

b) EBS General Purpose SSD (gp2)

c) Throughput Optimized HDD (st1)

d) Cold HDD (sc1)

Answer a

EBS Provisioned IOPS SSD (io1) is the most performant EBS option currently available, making it suitable for high-throughput transactional workloads such as hosting a database.

You want to run an EC2 instance but you want to keep the costs as low as possible. When selecting an EBS volume which one is most appropriate for this brief?

a) EBS Provisioned IOPS SSD (io1)

b) EBS General Purpose SSD (gp2)

c) Throughput Optimized HDD (st1)

d) Cold HDD (sc1)

Answer d

Cold HDD (sc1) is the least performant, and cheapest, EBS option currently available. It is suitable for low-transaction, infrequently accessed workloads.

You have 70 TB of data that you want to migrate into the cloud, specifically into S3 buckets. Using a traditional broadband connection would be too slow. Which AWS solution best solves this problem?

a) Create an Amazon Storage Gateway link and begin transferring data that way.

b) Request an AWS Snowball, transfer the data to it and mail it back to Amazon so they can upload it directly to their infrastructure.

c) Drive to a datacenter and load the data yourself.

d) Purchase several broadband internet connections and run the upload in parallel.

Answer b

Snowball lets you copy your data onto a portable storage appliance. You then ship it back to Amazon, where they connect the Snowball directly to their server infrastructure over a high-speed data link.
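Some quick arithmetic shows why shipping a device beats uploading. Assuming a sustained 100 Mbit/s uplink, which is generous for many offices:

```python
# How long would 70 TB take over a typical broadband uplink?
data_bits = 70 * 10**12 * 8   # 70 TB (decimal) in bits
uplink_bps = 100 * 10**6      # assumed sustained 100 Mbit/s upload
seconds = data_bits / uplink_bps
days = seconds / 86400
print(f"~{days:.0f} days of continuous uploading")
```

Roughly two months of saturating the line, versus a few days of copying locally plus courier time for a Snowball, and that's before accounting for retries, overhead and everything else competing for the connection.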

You expose a series of EC2 instances behind an Elastic Load Balancer in a VPC, with internet access via a NAT gateway. When the instances download updates, what is the potential bottleneck in this configuration?

a) The NAT gateway

b) The EC2 instances

c) The ELB

d) The VPC itself

Answer a

The Elastic Load Balancer would not be a problem, nor would the EC2 instances. The NAT gateway, however, is limited in its bandwidth. This cannot be easily scaled and would be a limitation to be aware of when designing a system such as this one.

Your customer wants to host a website promoting a movie they are releasing. On the night of the movie premiere they expect massive traffic. The website is simple and only contains static content. Which AWS services could you use to host this website? Choose 2.

a) S3 (Simple Storage Service)

b) Route53 pointing to the S3 bucket

c) EC2

d) Route53 pointing to the EC2 instance

Answer a & b

A combination of S3 and Route53 allows for a static website that can scale to massive levels. This is a simple and cost-effective way of hosting a site with high traffic demands.
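Enabling static hosting on a bucket is a one-call configuration. This is the document shape you could pass to boto3's put_bucket_website (the document names are the conventional defaults, but your own files can be called anything):

```python
# Static website hosting configuration for an S3 bucket; with Route53,
# an alias record would then point the domain at the bucket's website
# endpoint (the bucket name must match the domain for that to work).
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}
```

From there, there are no instances to size or scale: S3 absorbs the premiere-night spike on its own.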
