When writing integration tests for Spring Boot applications that interact with AWS services such as S3, DynamoDB, SNS, or SQS, you're most likely already using LocalStack to simulate the AWS environment.
Although LocalStack provides an emulated AWS environment, we're still required to provision the specific resources needed for testing our application. In this article, we'll explore how we can automate resource provisioning using Initialization Hooks in our LocalStack container.
In addition to looking at how Initialization Hooks can be used with Testcontainers during integration testing, we'll also explore their usage with Docker Compose for local development setups.
We'll first take a look at a basic example of provisioning an S3 bucket, and then tackle a more complex scenario: creating an SNS topic and an SQS queue, and setting up a pub/sub relationship between them by subscribing the queue to the topic.
Solution: LocalStack Initialization Hooks
So what are Initialization Hooks anyway?
In simple terms, Initialization Hooks (or “init hooks”) are shell scripts that are executed by LocalStack at specific stages during the container lifecycle. For our use case of provisioning resources, we'll focus on the `READY` stage, which signifies that the LocalStack container is ready to accept requests.
Shell scripts to be executed can be mounted into the `/etc/localstack/init/ready.d` directory within the LocalStack container and are executed in alphanumerical order. This is particularly useful when setting up resources required for testing or local development, such as creating S3 buckets, DynamoDB tables, SNS topics, SQS queues, or resources of any other supported AWS service.
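Because execution order is alphanumerical, a common convention (an assumption here, not a LocalStack requirement) is to give the scripts numeric prefixes. The sketch below uses hypothetical file names and mimics the ordering with `sort`:

```shell
#!/bin/bash
# Hypothetical script names -- numeric prefixes make the execution order explicit.
scripts=("02-init-sqs-queue.sh" "01-init-sns-topic.sh" "03-subscribe-sqs-to-sns.sh")

# LocalStack executes the directory contents in alphanumerical order; 'sort' mimics that:
ordered=$(printf '%s\n' "${scripts[@]}" | sort)
echo "$ordered"
```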
We can use the `awslocal` command inside the shell scripts, which is a wrapper around the `aws` CLI that points to the LocalStack endpoints. This setup ensures that the required AWS resources are provisioned before any tests are executed.
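Conceptually, `awslocal` just saves us the endpoint override. The sketch below illustrates this; the stubbed `aws` function (an assumption so the sketch runs anywhere) only prints the command it would receive:

```shell
#!/bin/bash
# Stub so the sketch runs without the real aws CLI installed; remove it to run for real.
aws() { echo "aws $*"; }

endpoint="http://localhost:4566"   # LocalStack's default edge port

# Roughly what `awslocal s3api list-buckets` expands to:
cmd=$(aws --endpoint-url="$endpoint" s3api list-buckets)
echo "$cmd"
```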
Why Use Initialization Hooks Over Testcontainers' `execInContainer()`?
Testcontainers exposes an `execInContainer()` method, which allows us to execute commands inside a running container. You might be wondering: why not simply use this method to provision the required AWS resources by executing our `awslocal` commands directly in the LocalStack container?
Here are a few reasons why using initialization hooks is a better approach:
- Readability: By extracting the resource provisioning logic into separate shell scripts, our test code becomes cleaner and more focused. The test class is only responsible for starting the LocalStack container and mounting the scripts, making it easier to understand at a glance.
- Reusability: Init scripts can be reused across multiple test classes that require the same resources by simply mounting the created scripts into the container. With the `execInContainer()` method, we'd have to duplicate the provisioning logic in each of our test classes. Additionally, the same scripts can be mounted when starting a LocalStack container via Docker Compose for local development.
- Maintainability: If an update is required, we only need to change the content of our existing init scripts. This is much easier than hunting down every place where `execInContainer()` is used and updating the commands.
- Flexibility: Init scripts give us the full power of bash scripting. We can use variables, logs, loops, conditionals, and other constructs to create more dynamic provisioning logic. With the `execInContainer()` method, we're limited to executing simple, one-off commands.
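As a sketch of the flexibility point, an init hook can loop over a list of resources. The queue names here are illustrative assumptions, and the stubbed `awslocal` function only echoes the calls so the sketch runs outside a LocalStack container (inside the container, the real `awslocal` is available):

```shell
#!/bin/bash
# Dry-run stub; delete this line in a real init hook.
awslocal() { echo "(dry run) awslocal $*"; }

queues=("order-events" "email-notifications" "audit-log")
created=""
for queue in "${queues[@]}"; do
  awslocal sqs create-queue --queue-name "$queue"
  created="$created $queue"
done
echo "Created queues:$created"
```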
With the above points, I think we can agree that initialization hooks offer a cleaner and more powerful approach for provisioning AWS resources inside our LocalStack container.
Provisioning an S3 Bucket for LocalStack
Let's dive into a concrete example to understand how initialization hooks work. We'll start with a simple case of provisioning an S3 bucket for our tests.
First, we create a bash script named `init-s3-bucket.sh` in our `src/test/resources` directory with the following content:
```bash
#!/bin/bash

bucket_name="behl-rieck-bucket"

awslocal s3api create-bucket --bucket "$bucket_name"

echo "S3 bucket '$bucket_name' created successfully"
echo "Executed init-s3-bucket.sh"
```
The above script uses the `awslocal` command to create an S3 bucket named `behl-rieck-bucket`. We also add a couple of echo statements to print out a success message and the name of the executed script.
Next, in our integration test class, we'll configure and start a LocalStack container. At the time of this writing, the latest version of the LocalStack image is `3.4`. We'll be using this version in our integration test class:
```java
@SpringBootTest
class StorageIT {

    private static final LocalStackContainer localStackContainer;

    static {
        localStackContainer = new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.4"))
                .withServices(Service.S3)
                .withCopyFileToContainer(
                        MountableFile.forClasspathResource("init-s3-bucket.sh", 0744),
                        "/etc/localstack/init/ready.d/init-s3-bucket.sh")
                .waitingFor(Wait.forLogMessage(".*Executed init-s3-bucket.sh.*", 1));
        localStackContainer.start();
    }

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        // add required configuration properties
    }

    // test cases

}
```
We use the `withCopyFileToContainer()` method to mount our `init-s3-bucket.sh` script into the LocalStack container's `/etc/localstack/init/ready.d` directory. The `0744` argument sets the file mode, ensuring that the script has the necessary execute permissions.
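As a quick refresher on the octal notation: the `7` grants the owner read, write, and execute permissions, while each `4` grants group and others read-only access. A small sketch using a placeholder file (the `stat -c` format assumes GNU coreutils):

```shell
#!/bin/bash
touch init-s3-bucket.sh               # placeholder file just for this demonstration
chmod 0744 init-s3-bucket.sh
stat -c '%a %A' init-s3-bucket.sh     # prints e.g. "744 -rwxr--r--" on GNU coreutils
```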
We also configure a wait strategy using the `waitingFor()` method to wait for the log `Executed init-s3-bucket.sh` to be printed, as defined in our init script. This ensures that our script has run and provisioned the required S3 bucket before any of our tests execute.
If we check the logs of the started LocalStack container, we'll see the output of our init script:
```plaintext
LocalStack version: 3.4.0
LocalStack Docker container id: 9672388f6d24
LocalStack build date: 2024-04-25
LocalStack build git hash: 6f971ac81

2024-05-22T06:33:29.538  INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
2024-05-22T06:33:29.538  INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
Ready.
2024-05-22T06:33:31.471  INFO --- [   asgi_gw_0] localstack.request.aws : AWS s3.CreateBucket => 200
{
    "Location": "/behl-rieck-bucket"
}
S3 bucket 'behl-rieck-bucket' created successfully
Executed init-s3-bucket.sh
```
We can see that the S3 bucket `behl-rieck-bucket` was successfully created, and our init script was executed as expected. We can now use this S3 bucket in our integration tests and be confident about its existence.
Provisioning an AWS SNS Topic and an SQS Queue
Now that we've seen a basic example, let's take a look at a more complex scenario involving the creation of an SNS topic, an SQS queue, and setting up a subscription between them.
We'll start by creating three separate shell scripts, one for each of these tasks, in our `src/test/resources` directory.
First, let's create an `init-sns-topic.sh` script to provision an SNS topic:
```bash
#!/bin/bash

topic_name="article-published"

awslocal sns create-topic --name $topic_name

echo "SNS topic '$topic_name' created successfully"
echo "Executed init-sns-topic.sh"
```
This script creates an SNS topic named `article-published` using the `awslocal` command. We also add a couple of echo statements to print out a success message and the name of the executed script to signify successful topic creation.
Next, we'll create an `init-sqs-queue.sh` script to provision an SQS queue:
```bash
#!/bin/bash

queue_name="dispatch-email-notifications"

awslocal sqs create-queue --queue-name $queue_name

echo "SQS queue '$queue_name' created successfully"
echo "Executed init-sqs-queue.sh"
```
The above script creates an SQS queue named `dispatch-email-notifications`. Again, we echo a success message and the name of the executed script to signify successful queue creation.
Finally, in order to establish a pub/sub relationship between the two resources, we'll create a `subscribe-sqs-to-sns.sh` script to subscribe the SQS queue to the created SNS topic:
```bash
#!/bin/bash

topic_name="article-published"
queue_name="dispatch-email-notifications"

awslocal sns subscribe \
  --topic-arn "arn:aws:sns:us-east-1:000000000000:$topic_name" \
  --protocol sqs \
  --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:$queue_name"

echo "Subscribed SQS queue '$queue_name' to SNS topic '$topic_name' successfully"
echo "Executed subscribe-sqs-to-sns.sh"
```
This script sets up a subscription between the SNS topic and the SQS queue using their respective ARNs. The `--protocol` flag specifies that we're subscribing an SQS queue, and the `--notification-endpoint` flag provides the ARN of the queue. We use the default LocalStack region of `us-east-1` and the default AWS account id `000000000000` in our script.
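Since both ARNs share the same region and account id, one possible refinement (a sketch, not taken from the script above) is to build them from variables so the LocalStack defaults live in a single place:

```shell
#!/bin/bash
# LocalStack defaults: region us-east-1, account id 000000000000
region="us-east-1"
account_id="000000000000"
topic_name="article-published"
queue_name="dispatch-email-notifications"

# Assemble the ARNs from the shared parts:
topic_arn="arn:aws:sns:${region}:${account_id}:${topic_name}"
queue_arn="arn:aws:sqs:${region}:${account_id}:${queue_name}"

echo "$topic_arn"
echo "$queue_arn"
```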
While it's possible to provision and configure all the resources in a single script, separating them into individual scripts offers benefits of modularity, reusability, and flexibility. This approach allows for easier maintenance and independent execution in different testing scenarios.
With our init scripts for provisioning required AWS resources ready, let's configure our integration test class to use them:
```java
@SpringBootTest
class PubSubIT {

    private static final LocalStackContainer localStackContainer;

    static {
        localStackContainer = new LocalStackContainer(DockerImageName.parse("localstack/localstack:3.4"))
                .withServices(Service.SNS, Service.SQS)
                .withCopyFileToContainer(
                        MountableFile.forClasspathResource("init-sns-topic.sh", 0744),
                        "/etc/localstack/init/ready.d/init-sns-topic.sh")
                .withCopyFileToContainer(
                        MountableFile.forClasspathResource("init-sqs-queue.sh", 0744),
                        "/etc/localstack/init/ready.d/init-sqs-queue.sh")
                .withCopyFileToContainer(
                        MountableFile.forClasspathResource("subscribe-sqs-to-sns.sh", 0744),
                        "/etc/localstack/init/ready.d/subscribe-sqs-to-sns.sh")
                .waitingFor(Wait.forLogMessage(".*Executed subscribe-sqs-to-sns.sh.*", 1));
        localStackContainer.start();
    }

    @DynamicPropertySource
    static void properties(DynamicPropertyRegistry registry) {
        // add required configuration properties
    }

    // test cases

}
```
Just like in our previous example, we mount our init scripts into the container's `/etc/localstack/init/ready.d` directory, enable the `SNS` and `SQS` services, and configure a wait strategy that waits for the log statement `Executed subscribe-sqs-to-sns.sh` to be printed, ensuring that the `subscribe-sqs-to-sns.sh` script has executed before our tests run.
Checking the logs of the started LocalStack container, we can confirm that our resources were provisioned successfully:
```plaintext
LocalStack version: 3.4.0
LocalStack Docker container id: 8aca627a7140
LocalStack build date: 2024-04-25
LocalStack build git hash: 6f971ac81

2024-05-22T06:46:03.911  INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
2024-05-22T06:46:03.911  INFO --- [-functhread4] hypercorn.error : Running on https://0.0.0.0:4566 (CTRL + C to quit)
Ready.
2024-05-22T06:46:05.919  INFO --- [   asgi_gw_0] localstack.request.aws : AWS sns.CreateTopic => 200
{
    "TopicArn": "arn:aws:sns:us-east-1:000000000000:article-published"
}
SNS topic 'article-published' created successfully
Executed init-sns-topic.sh
2024-05-22T06:46:06.429  INFO --- [   asgi_gw_0] localstack.request.aws : AWS sqs.CreateQueue => 200
{
    "QueueUrl": "http://sqs.us-east-1.localhost:4566/000000000000/dispatch-email-notifications"
}
SQS queue 'dispatch-email-notifications' created successfully
Executed init-sqs-queue.sh
2024-05-22T06:46:06.660  INFO --- [   asgi_gw_0] localstack.request.aws : AWS sns.Subscribe => 200
{
    "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:article-published:dfc00974-5035-43c9-963c-0309692fba15"
}
Subscribed SQS queue 'dispatch-email-notifications' to SNS topic 'article-published' successfully
Executed subscribe-sqs-to-sns.sh
```
The logs confirm that both our SNS topic and SQS queue were created, and the subscription was set up as expected. We can now confidently write integration test cases that require this setup.
Share Init Hooks with Local Docker Compose Setup
In addition to using init hooks with Testcontainers for integration testing, we can also leverage them for local development setups using Docker Compose.
The approach remains the same: we create shell scripts and mount them into the `/etc/localstack/init/ready.d` directory of our LocalStack container using the `volumes` directive.
For our Docker Compose setup, we create a `localstack` directory at the root of our project that contains the necessary init scripts. It's also worth noting that the init scripts created in the `src/test/resources` directory can be reused in the Docker Compose file if they fit the local development setup requirements.
Here's an example `docker-compose.yml` file that demonstrates this setup:
```yaml
services:
  localstack:
    container_name: localstack
    image: localstack/localstack:3.4
    ports:
      - 4566:4566
    environment:
      - SERVICES=s3
    volumes:
      - ./localstack/init-s3-bucket.sh:/etc/localstack/init/ready.d/init-s3-bucket.sh
    networks:
      - de.rieckpil

  backend-application:
    container_name: backend-application
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    depends_on:
      - localstack
    environment:
      spring.cloud.aws.s3.endpoint: 'http://localstack:4566'
      spring.cloud.aws.s3.path-style-access-enabled: true
      spring.cloud.aws.credentials.access-key: test
      spring.cloud.aws.credentials.secret-key: test
      spring.cloud.aws.s3.region: 'us-east-1'
      # bucket name as configured in the init script
      de.rieckpil.aws.s3.bucket-name: 'behl-rieck-bucket'
    networks:
      - de.rieckpil

networks:
  de.rieckpil:
```
In this snippet, we define the `localstack` service and use the `volumes` directive to mount the init scripts from the `localstack` directory into the LocalStack container's `/etc/localstack/init/ready.d` directory. This ensures that the scripts are available and executed when the container is ready to accept requests.
The `backend-application` service represents our Spring Boot application, which is configured to connect to the LocalStack S3 service using Spring Cloud AWS. The `depends_on` directive ensures that the `localstack` container is started before the `backend-application` service. Note that `depends_on` alone only guarantees start order, not that the init scripts have finished; if the application needs the resources immediately at startup, a health-check-based `depends_on` condition can make it wait until LocalStack is actually ready.
Before running Docker Compose, we need to ensure that our init scripts have the necessary executable permissions. We can achieve this by running the following command in the terminal:
```bash
chmod 0744 localstack/*
```
Now we can build and start our containers using the following commands:
```bash
docker compose build
docker compose up -d
```
When the containers are started, LocalStack will execute the init scripts, which in turn will provision the required AWS resources. With this setup, our Spring Boot application container will use the LocalStack container for all AWS interactions during local development.
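To verify from the host that the init hook ran, we can point the plain `aws` CLI at the published edge port. In this sketch, the stubbed `aws` function only prints the command so it runs without the CLI or a running stack; remove the stub to execute for real:

```shell
#!/bin/bash
# Dry-run stub; delete this line to run against the real aws CLI.
aws() { echo "would run: aws $*"; }

# Dummy credentials matching the compose file:
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1

# Port 4566 is published to the host in docker-compose.yml:
out=$(aws --endpoint-url=http://localhost:4566 s3api list-buckets --query "Buckets[].Name")
echo "$out"
```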
Conclusion
In this article, we explored how LocalStack Initialization Hooks can be used to automate the provisioning of AWS resources for integration testing and local development setups.
We saw how init hooks provide a clean and reusable way to set up resources compared to using Testcontainers' `execInContainer()` method.
We looked at a simple example of provisioning an S3 bucket and a more complex example involving the creation of an SNS topic, an SQS queue, and setting up a subscription between them.
By leveraging LocalStack init hooks and separating our provisioning logic into separate shell scripts, we're able to create a clean, modular, and reusable setup for testing our application's interaction with the required AWS resources. I hope you find this approach useful for your own projects. Let me know your thoughts or questions in the comments.
Joyful testing,
Hardik Singh Behl
Github | LinkedIn | Twitter