Provisioning LocalStack AWS Resources in Spring Boot Tests

Last Updated: July 16, 2024 | Published: July 16, 2024

When writing integration tests for Spring Boot applications that interact with AWS services such as S3, DynamoDB, SNS, and SQS, you're most likely already using LocalStack to simulate the AWS environment.

Although LocalStack provides an emulated AWS environment, we're still required to provision the specific resources needed for testing our application. In this article, we'll explore how we can automate resource provisioning using Initialization Hooks in our LocalStack container.

In addition to looking at how Initialization Hooks can be used with Testcontainers during integration testing, we'll also explore their usage with Docker Compose for local development setups.

We'll first take a look at a basic example of provisioning an S3 bucket, and then tackle a more complex scenario: creating an SNS topic and an SQS queue, and setting up a pub/sub relationship between them by subscribing the created queue to the SNS topic.

Solution: LocalStack Initialization Hooks

So what are Initialization Hooks anyway?

In simple terms, Initialization Hooks (or “init hooks”) are shell scripts that are executed by LocalStack at specific stages during the container lifecycle. For our use case of provisioning resources, we'll focus on the READY stage, which signifies that the LocalStack container is ready to accept requests.

Shell scripts to be executed can be mounted into the /etc/localstack/init/ready.d directory within the LocalStack container and are executed in alphanumeric order. This is particularly useful when setting up resources required for testing or local development, such as creating S3 buckets, DynamoDB tables, SNS topics, SQS queues, or any other supported AWS service.

We can use the awslocal command inside the shell scripts, which is a wrapper around the aws CLI that points to the LocalStack endpoints.

This setup ensures that the required AWS services are provisioned before any tests are executed.

Why Use Initialization Hooks Over Testcontainers execInContainer()?

Testcontainers exposes an execInContainer() method, which allows us to execute commands inside a running container. You might be wondering, why don't we just use this method to provision the required AWS resources by executing our awslocal commands directly in the LocalStack container?

Here are a few reasons why using initialization hooks is a better approach:

  • Readability: By extracting the resource provisioning logic into separate shell scripts, our test code becomes cleaner and more focused. The test class is only responsible for starting the LocalStack container and mounting the scripts, making it easier to understand at a glance.

  • Reusability: Init scripts can be reused across multiple test classes which require the same resources to be provisioned by simply mounting the created scripts into the container. With the execInContainer() method, we'd have to duplicate the provisioning logic in each of our test classes. Additionally, the created scripts can also be mounted when starting a LocalStack container via Docker Compose for local development.

  • Maintainability: If there is any update required, we only need to change the content of our existing init scripts. This is much easier than hunting down every place where execInContainer() is used and updating the commands.

  • Flexibility: Init scripts provide us with the full power of bash scripting. We can use variables, logs, loops, conditionals, and other constructs to create more dynamic provisioning logic. With the execInContainer() method, we're limited to executing simple, one-off commands.

With the above points in mind, I think we can agree that initialization hooks offer a cleaner and more powerful approach for provisioning AWS resources inside our LocalStack container.

Provisioning an S3 Bucket for LocalStack

Let's dive into a concrete example to understand how initialization hooks work. We'll start with a simple case of provisioning an S3 bucket for our tests.

First, we create a bash script named init-s3-bucket.sh in our src/test/resources directory with the following content:
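A minimal sketch of such a script is shown below. The bucket name behl-rieck-bucket and the final log line come from the article's description; the exact awslocal invocation is one reasonable way to create the bucket:

```shell
#!/bin/bash

# Create the S3 bucket used by our integration tests
awslocal s3api create-bucket --bucket behl-rieck-bucket

echo "S3 bucket 'behl-rieck-bucket' created successfully"
echo "Executed init-s3-bucket.sh"
```

The final echo statement is important: our test class will later wait for this exact log line before running any tests.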

The above script uses the awslocal command to create an S3 bucket named behl-rieck-bucket. We also add a couple of echo statements to print out a success message and the name of the executed script.

Next, in our integration test class, we'll configure and start a LocalStack container. At the time of this writing, the latest version of the LocalStack image is 3.4. We'll be using this version in our integration test class:
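A sketch of such a test class is shown below, assuming the Testcontainers LocalStack module and JUnit 5. The Spring Cloud AWS property names registered via @DynamicPropertySource are an assumption and may need adjusting to your application's configuration:

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

@SpringBootTest
@Testcontainers
class S3BucketProvisioningIT {

    @Container
    static LocalStackContainer localStack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.4"))
            .withServices(LocalStackContainer.Service.S3)
            // mount the init script with execute permissions (0744)
            .withCopyFileToContainer(
                    MountableFile.forClasspathResource("init-s3-bucket.sh", 0744),
                    "/etc/localstack/init/ready.d/init-s3-bucket.sh")
            // block until the init script has finished executing
            .waitingFor(Wait.forLogMessage(".*Executed init-s3-bucket.sh.*", 1));

    @DynamicPropertySource
    static void awsProperties(DynamicPropertyRegistry registry) {
        // hypothetical Spring Cloud AWS properties; adjust to your setup
        registry.add("spring.cloud.aws.s3.endpoint", localStack::getEndpoint);
        registry.add("spring.cloud.aws.credentials.access-key", localStack::getAccessKey);
        registry.add("spring.cloud.aws.credentials.secret-key", localStack::getSecretKey);
        registry.add("spring.cloud.aws.region.static", localStack::getRegion);
    }
}
```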

We use the withCopyFileToContainer() method to mount our init-s3-bucket.sh script into the LocalStack container's /etc/localstack/init/ready.d directory. The 0744 flag ensures that the script has the necessary execute permissions.

We also configure a wait strategy using the waitingFor() method to wait for the log Executed init-s3-bucket.sh to be printed, as defined in our init script. This ensures that our script has run and provisioned the required S3 bucket before any of our tests execute.

If we check the logs of the started LocalStack container, we'll see the output of our init script:

We can see that the S3 bucket behl-rieck-bucket was successfully created, and our init script was executed as expected. We can now use this S3 bucket in our integration tests and be confident about its existence.

Provisioning an AWS SNS Topic and an SQS Queue

Now that we've seen a basic example, let's take a look at a more complex scenario involving the creation of an SNS topic, an SQS queue, and setting up a subscription between them.

We'll start by creating three separate shell scripts for each of these tasks in our src/test/resources directory.

First, let's create an init-sns-topic.sh script to provision an SNS topic:
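A minimal sketch of this script follows; the topic name article-published is the one described in the article, and the trailing log lines mirror the pattern of the S3 example:

```shell
#!/bin/bash

# Create the SNS topic that published-article events will be sent to
awslocal sns create-topic --name article-published

echo "SNS topic 'article-published' created successfully"
echo "Executed init-sns-topic.sh"
```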

This script creates an SNS topic named article-published using the awslocal command. We also add a couple of echo statements to print out a success message and the name of the executed script to signify successful topic creation.

Next, we'll create an init-sqs-queue.sh script to provision an SQS queue:
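A sketch of the queue-creation script, using the queue name dispatch-email-notifications from the article:

```shell
#!/bin/bash

# Create the SQS queue that will receive messages from the SNS topic
awslocal sqs create-queue --queue-name dispatch-email-notifications

echo "SQS queue 'dispatch-email-notifications' created successfully"
echo "Executed init-sqs-queue.sh"
```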

The above script creates an SQS queue named dispatch-email-notifications. Again, we echo a success message and the name of the executed script to signify successful queue creation.

Finally, in order to establish a pub/sub relationship between the two resources, we'll create a subscribe-sqs-to-sns.sh script to subscribe the SQS queue to the created SNS topic:
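A sketch of the subscription script is shown below. The ARNs are constructed from the default LocalStack region us-east-1 and account ID 000000000000 mentioned in the article:

```shell
#!/bin/bash

# Default LocalStack region and account ID
REGION="us-east-1"
ACCOUNT_ID="000000000000"

TOPIC_ARN="arn:aws:sns:${REGION}:${ACCOUNT_ID}:article-published"
QUEUE_ARN="arn:aws:sqs:${REGION}:${ACCOUNT_ID}:dispatch-email-notifications"

# Subscribe the SQS queue to the SNS topic
awslocal sns subscribe \
  --topic-arn "$TOPIC_ARN" \
  --protocol sqs \
  --notification-endpoint "$QUEUE_ARN"

echo "Subscribed SQS queue 'dispatch-email-notifications' to SNS topic 'article-published'"
echo "Executed subscribe-sqs-to-sns.sh"
```

Note that this script is named so it sorts after the two init-* scripts; since LocalStack executes the scripts in alphanumeric order, the topic and queue are guaranteed to exist before the subscription is attempted.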

This script sets up a subscription between the SNS topic and the SQS queue using their respective ARNs. The --protocol flag specifies that we're subscribing an SQS queue, and the --notification-endpoint flag provides the ARN of the queue. We use the default LocalStack region of us-east-1 and the default AWS account-id 000000000000 in our script.

While it's possible to provision and configure all the resources in a single script, separating them into individual scripts offers benefits of modularity, reusability, and flexibility. This approach allows for easier maintenance and independent execution in different testing scenarios.

With our init scripts for provisioning required AWS resources ready, let's configure our integration test class to use them:
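A sketch of the container configuration follows, again assuming the Testcontainers LocalStack module; the surrounding Spring Boot test wiring is omitted for brevity and would mirror the earlier S3 example:

```java
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

@Testcontainers
class SnsSqsProvisioningIT {

    @Container
    static LocalStackContainer localStack = new LocalStackContainer(
            DockerImageName.parse("localstack/localstack:3.4"))
            .withServices(LocalStackContainer.Service.SNS, LocalStackContainer.Service.SQS)
            // mount all three init scripts with execute permissions
            .withCopyFileToContainer(
                    MountableFile.forClasspathResource("init-sns-topic.sh", 0744),
                    "/etc/localstack/init/ready.d/init-sns-topic.sh")
            .withCopyFileToContainer(
                    MountableFile.forClasspathResource("init-sqs-queue.sh", 0744),
                    "/etc/localstack/init/ready.d/init-sqs-queue.sh")
            .withCopyFileToContainer(
                    MountableFile.forClasspathResource("subscribe-sqs-to-sns.sh", 0744),
                    "/etc/localstack/init/ready.d/subscribe-sqs-to-sns.sh")
            // wait for the last script in the chain to finish
            .waitingFor(Wait.forLogMessage(".*Executed subscribe-sqs-to-sns.sh.*", 1));
}
```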

Just like in our previous example, we mount our init scripts into the container's /etc/localstack/init/ready.d directory, enable the SNS and SQS services, and configure a wait strategy that waits for the log statement Executed subscribe-sqs-to-sns.sh, ensuring the subscription script has run before our tests execute.

Checking the logs of the started LocalStack container, we can confirm that our resources were provisioned successfully:

The logs confirm that both the SNS topic and the SQS queue were created, and that the subscription was set up as expected. We can now confidently write integration test cases that require this setup.

Share Init Hooks with Local Docker Compose Setup

In addition to using init hooks with Testcontainers for integration testing, we can also leverage them for local development setups using Docker Compose.

The approach remains the same: we create shell scripts and mount them into the /etc/localstack/init/ready.d directory of our LocalStack container using the volumes directive.

For our Docker Compose setup, we create a localstack directory at the root of our project that contains the necessary init scripts to execute. It's also worth noting that the init scripts created in the src/test/resources directory can be reused in the Docker Compose file as well if they fit the local development setup requirements.

Here's an example docker-compose.yml file that demonstrates this setup:
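A sketch of such a compose file is shown below. The service names localstack and backend-application come from the article; the environment variable names for Spring Cloud AWS are an assumption and should be adjusted to match your application's configuration:

```yaml
version: "3.8"

services:
  localstack:
    image: localstack/localstack:3.4
    ports:
      - "4566:4566"
    volumes:
      # mount the init scripts so LocalStack executes them once ready
      - ./localstack:/etc/localstack/init/ready.d

  backend-application:
    build: .
    depends_on:
      - localstack
    environment:
      # hypothetical Spring Cloud AWS settings; adjust to your setup
      SPRING_CLOUD_AWS_ENDPOINT: http://localstack:4566
      SPRING_CLOUD_AWS_REGION_STATIC: us-east-1
      SPRING_CLOUD_AWS_CREDENTIALS_ACCESS_KEY: test
      SPRING_CLOUD_AWS_CREDENTIALS_SECRET_KEY: test
```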

In this snippet, we define the localstack service and use the volumes directive to mount the init scripts from the localstack directory into the LocalStack container's /etc/localstack/init/ready.d directory. This ensures that the scripts are available and executed when the container is ready to accept requests.

The backend-application service represents our Spring Boot application, which is configured to connect to the LocalStack S3 service using Spring Cloud AWS. The depends_on directive ensures that the localstack container is started before the backend-application service, so the required AWS resources are provisioned by the time the application starts.

Before running the docker-compose command, we need to ensure that our init scripts have the necessary executable permissions. We can achieve this by running the following command in the terminal:
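Assuming the scripts live in the localstack directory at the project root, one way to grant execute permissions is:

```shell
chmod +x localstack/*.sh
```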

Now we can build and start our containers using the following commands:
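For example, with the Compose V2 CLI:

```shell
docker compose build
docker compose up
```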

When the containers are started, LocalStack will execute the init scripts, which in turn will provision the required AWS resources. With this setup, our Spring Boot application container will use the LocalStack container for all AWS interactions during local development.

Conclusion

In this article, we explored how LocalStack Initialization Hooks can be used to automate the provisioning of AWS resources for integration testing and local development setups.

We saw how init hooks provide a clean and reusable way to set up resources compared to using Testcontainers' execInContainer() method.

We looked at a simple example of provisioning an S3 bucket and a more complex example involving the creation of an SNS topic, an SQS queue, and setting up a subscription between them.

By leveraging LocalStack init hooks and splitting our provisioning logic into dedicated shell scripts, we're able to create a clean, modular, and reusable setup for testing our application's interaction with the required AWS resources. I hope you find this approach useful for your own projects. Let me know your thoughts or questions in the comments.

Joyful testing,

Hardik Singh Behl
Github | LinkedIn | Twitter

 
