Lock @Scheduled Tasks With ShedLock and Spring Boot

Last Updated: November 3, 2021 | Published: January 12, 2021

As soon as you scale out your Spring Boot application (run it with multiple instances) to e.g. increase throughput or availability, you have to ensure that it is ready for this architecture. Some parts of an application require tweaks before they fit such an architecture. The use of @Scheduled tasks is one candidate. Most of the time, you only want this execution to happen on one instance and not in parallel. With this blog post, you'll learn how ShedLock can be used to execute a scheduled task only once across all instances of a Spring Boot application.

@Scheduled Tasks in a Scaled-Out Environment

A lot of Spring Boot applications use the @Scheduled annotation to execute tasks regularly. From simple nightly reporting jobs, through cleanup jobs, to synchronization mechanisms, the variety of use cases is huge.

As long as our application is running with one instance, there is no problem as the execution happens only once. But as soon as our application is deployed to a load-balanced environment where multiple instances of the same Spring Boot application are running in parallel, our scheduled jobs are executed in parallel.

In the case of reporting or synchronization, we might want to execute this only once for the whole application. By default, every instance would execute the scheduled task, regardless of whether or not any other instance is already running it. This might result in inconsistent data or duplicated actions.

Out of the box, Spring doesn't provide a solution for running @Scheduled tasks on only one instance at a time. This is where ShedLock comes into play, as it solves exactly this problem.

How ShedLock Ensures a Job Runs Only Once

ShedLock is a distributed lock for scheduled tasks.

It ensures that a task is executed on only one node at the same time. Once the first Spring Boot instance acquires the lock for a scheduled task, all other instances skip the task execution. As soon as the next task execution is scheduled, all nodes try to get the lock again.

ShedLock stores information about each scheduled job using persistent storage (so-called LockProvider) that all nodes connect to. There are multiple implementations for this LockProvider (e.g. for RDBMS, MongoDB, DynamoDB, Etcd, …) and we'll pick PostgreSQL as an example.

The database table that ShedLock uses internally to manage the locks is straightforward, as it only has four columns:

  • name: A unique name for the scheduled task
  • lock_until: The timestamp until which the current execution is locked
  • locked_at: The timestamp a node acquired the current lock
  • locked_by: An identifier for the node that acquired the current lock

ShedLock creates an entry for every scheduled task when we run the task for the first time. From this point on the database row (one row for each job) is always present and will only be updated (not deleted and re-created).

How ShedLock Locks a Scheduled Task

The actual locking of a scheduled task happens by setting the lock_until column to a date in the future.

As soon as a task is scheduled for execution, all application instances try to update the database row for this task. They are only able to lock the task if the task is currently not running (meaning lock_until <= now()).

The node that is able to update the columns lock_until, locked_at, and locked_by holds the lock for this execution period and sets lock_until to now() + lockAtMostFor (e.g. 30 minutes).

All other nodes fail to acquire the lock because they'll try to update the row for the job where lock_until <= now(). No row will be updated because the lock was already acquired by one instance and this instance set lock_until to a date in the future.
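
Conceptually, the lock acquisition with a relational LockProvider boils down to a conditional UPDATE. The following is a simplified sketch and not ShedLock's exact internal statement; the parameter names are placeholders:

```sql
-- Simplified sketch: only one instance can win this UPDATE, because the
-- WHERE clause only matches the row while its lock has already expired.
UPDATE shedlock
SET lock_until = :now_plus_lockAtMostFor,
    locked_at  = :now,
    locked_by  = :hostname
WHERE name = :task_name
  AND lock_until <= :now;
```

If the statement updates one row, the node holds the lock for this execution; if it updates zero rows, another instance was faster and the task execution is skipped on this node.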

As soon as the task finishes, ShedLock updates the database row and sets lock_until to the current timestamp. There is one exception where ShedLock won't use the current timestamp, which we'll discover in the next section.

With the updated lock_until, all nodes are eligible to run the next task execution.

In case the task doesn't finish (e.g. the node crashes or there is an unexpected delay), we get a new task execution after lockAtMostFor. As we'll see in the upcoming sections, we have to provide a lockAtMostFor attribute for all our tasks. This acts as a safety net to avoid deadlocks when a node dies and hence is unable to release the lock.

Lock Short Running Tasks With ShedLock

For short-running tasks, we can configure a lock that lasts for at least a given duration (lockAtLeastFor). Without such a configuration, we could get multiple executions of a task if the clock difference between our nodes is greater than the job's execution time.

Let's see how the locking works for short-running tasks.

The procedure for acquiring the lock is the same as in the scenario described above. What's different is the unlock phase. Instead of setting lock_until to now(), ShedLock sets it to locked_at + lockAtLeastFor whenever the task execution is faster than lockAtLeastFor.

Let's use an example to understand this better. For this purpose, let's assume our application executes a short-running task every minute.

Once this task finishes, ShedLock would set lock_until to now(). If there is a clock difference between our instances (which is hard to avoid in a distributed system), another node might pick up the execution again if the task execution is extremely fast.

To avoid such a scenario, we set lockAtLeastFor as part of our job definition, to block the next execution for at least the specified period.

ShedLock will then set lock_until to at least locked_at + lockAtLeastFor when unlocking the job.

First example (lockAtLeastFor=30s, really fast execution):

  • The job starts at 8:00:00.000
  • The job finishes at 8:00:00.450
  • When unlocking this job, ShedLock sets lock_until to 8:00:30.000 and not to now()

Second example (lockAtLeastFor=30s, slow execution):

  • The job starts at 8:00:00.000
  • The job finishes at 8:00:31.500
  • When unlocking this job, ShedLock sets lock_until to 8:00:31.500 (now()) because the execution took longer than our configured lockAtLeastFor

Spring Boot Project Setup

We're integrating ShedLock with a Spring Boot application that uses two Spring Boot Starters: Web and Data JPA.

Furthermore, our application connects to a PostgreSQL database and uses Flyway for database schema migrations.

The important parts of our pom.xml are the following:
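
A minimal sketch of the relevant dependencies, assuming the Spring Boot parent POM manages the starter versions; the ShedLock version shown is only an example:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-core</artifactId>
    </dependency>
    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>postgresql</artifactId>
        <scope>runtime</scope>
    </dependency>
    <!-- ShedLock's Spring integration and the JDBC-based LockProvider -->
    <dependency>
        <groupId>net.javacrumbs.shedlock</groupId>
        <artifactId>shedlock-spring</artifactId>
        <version>4.29.0</version>
    </dependency>
    <dependency>
        <groupId>net.javacrumbs.shedlock</groupId>
        <artifactId>shedlock-provider-jdbc-template</artifactId>
        <version>4.29.0</version>
    </dependency>
</dependencies>
```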

ShedLock also comes with a Micronaut integration and can even be used without any framework.

Creating the ShedLock Table With Flyway

The README of ShedLock contains copy-and-paste DDL statements for multiple database vendors. As we are using PostgreSQL, we pick the corresponding statement to create ShedLock's internal table shedlock.

We create a dedicated Flyway migration file for this statement and store it inside src/main/resources/db/migration/V001__INIT_SHEDLOCK_TABLE.sql:
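
The PostgreSQL statement, based on the DDL from ShedLock's README (column sizes may vary between ShedLock versions):

```sql
-- ShedLock's lock table: one row per scheduled task, keyed by the task name
CREATE TABLE shedlock (
    name       VARCHAR(64)  NOT NULL,
    lock_until TIMESTAMP    NOT NULL,
    locked_at  TIMESTAMP    NOT NULL,
    locked_by  VARCHAR(255) NOT NULL,
    PRIMARY KEY (name)
);
```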

That's everything we need setup-wise for our database.

ShedLock's LockProvider abstraction also works with other underlying storage systems. We aren't limited to relational databases and can also use e.g. MongoDB, DynamoDB, Hazelcast, Redis, Etcd, etc.

Spring Boot Configuration Setup for ShedLock

As a first step, we have to enable scheduling and ShedLock's Spring integration for our Spring Boot application.

ShedLock then expects a Spring Bean of type LockProvider as part of our ApplicationContext.

For our relational database setup, we make use of the JdbcTemplateLockProvider and configure it using the auto-configured DataSource:
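
A minimal sketch of such a configuration; the class name and the default lock duration are examples:

```java
import javax.sql.DataSource;

import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@EnableScheduling
// Fallback lock duration for jobs that don't specify lockAtMostFor explicitly
@EnableSchedulerLock(defaultLockAtMostFor = "PT10M")
public class SchedulingConfig {

    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        // Store the lock rows in the shedlock table of our PostgreSQL database
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
    }
}
```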

While enabling ShedLock's Spring integration (@EnableSchedulerLock), we have to specify defaultLockAtMostFor. This attribute acts as our fallback configuration for locks where we don't specify lockAtMostFor explicitly.

With this configuration in place, we can start adding locks to our scheduled tasks.

Adding a Lock to a Scheduled Task With Spring Boot

What's left is to add @SchedulerLock to all @Scheduled jobs for which we want to prevent multiple parallel executions.

As part of this annotation, we provide a name for the scheduled task that ShedLock uses as the primary key for the internal shedlock table. Hence this name has to be unique across our application:
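
A sketch of such a job; the task name, cron expression, and lock duration are illustrative:

```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ReportJob {

    // Runs every hour, but only on the instance that acquires the lock
    @Scheduled(cron = "0 0 * * * *")
    @SchedulerLock(name = "hourlyReport", lockAtMostFor = "PT30M")
    public void generateReport() {
        // ... create and store the report
    }
}
```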

For short-running tasks, we should also configure lockAtLeastFor. This prevents our short-running tasks from being executed multiple times due to a clock difference between our application nodes.
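
For example, a short-running task that runs every minute could look like this (again, the name and durations are placeholders):

```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class SyncJob {

    // Runs every minute; even if the sync finishes within milliseconds,
    // the lock is kept for at least 30 seconds to compensate for clock drift between nodes
    @Scheduled(cron = "0 * * * * *")
    @SchedulerLock(name = "shortRunningSync", lockAtMostFor = "PT50S", lockAtLeastFor = "PT30S")
    public void synchronize() {
        // ... quick synchronization work
    }
}
```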

In summary, integrating ShedLock takes almost no effort for our Spring Boot application. Due to the variety of LockProviders, you should be able to use your primary storage solution for this purpose as well. What's left is to tweak lockAtMostFor and lockAtLeastFor (if required) for each of your jobs. It might help to monitor the execution time of your jobs and then decide on those values.

The source code for this Spring Boot and ShedLock demonstration is available on GitHub.

Have fun locking your scheduled tasks with ShedLock,

Philip


  • {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
    >