Transactions in Spring Batch – Part 3: Skip and retry


This is the third post in a series about transactions in Spring Batch. You can find the first one here (it’s about the basics) and the second one here (it’s about restart, cursor-based reading and listeners).
Today’s topics are the skip and retry functionality and how they behave regarding transactions. With the skip functionality you may specify certain exception types and a maximum number of skipped items; whenever one of those skippable exceptions is thrown, the batch job doesn’t fail but skips the item and goes on with the next one. Only when the maximum number of skipped items is reached does the batch job fail. However, whenever there’s a skip we still want to roll back the transaction, but only for that one skipped item, and normally we have more than one item in a chunk. So how does Spring Batch accomplish that? With the retry functionality you may specify certain retryable exceptions and a maximum number of retries; whenever one of those retryable exceptions is thrown, the batch job doesn’t fail but retries processing or writing the item. The same question arises here: we still need a rollback for the failed item when a try fails, and a rollback includes all items in the chunk. Let’s see.


As you might know, there are two ways of specifying skip behaviour in Spring Batch; they make no difference regarding transactions. The convenient standard way is specifying a skip-limit on the chunk and nesting skippable-exception-classes inside the chunk:

  <batch:chunk reader="myItemReader" writer="myItemWriter" commit-interval="20" skip-limit="15">
      <batch:skippable-exception-classes>
          <batch:include class="de.codecentric.MySkippableException" />
      </batch:skippable-exception-classes>
  </batch:chunk>

And if you need more sophisticated skip checking, you may implement the SkipPolicy interface and plug your own policy into the chunk; skip-limit and skippable-exception-classes are ignored then:

  <batch:chunk reader="myItemReader" writer="myItemWriter" commit-interval="20" skip-policy="mySkipPolicy"/>
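The policy itself is plain Java. As a standalone sketch (the class name and the chosen exception type are illustrative, not from the original post), the decision logic mirrors the `shouldSkip(Throwable, int)` method of Spring Batch’s `SkipPolicy` interface, which in a real project the class would implement:

```java
// Standalone sketch of a custom skip decision. In a real project this class would
// implement org.springframework.batch.core.step.skip.SkipPolicy; the exception
// type and limit handling here are just illustrative assumptions.
class MySkipPolicy {

    private final int skipLimit;

    MySkipPolicy(int skipLimit) {
        this.skipLimit = skipLimit;
    }

    // Skip only validation-style problems, and only while we are under the limit.
    boolean shouldSkip(Throwable t, int skipCount) {
        if (!(t instanceof IllegalArgumentException)) {
            return false; // unexpected exception type: let the step fail
        }
        return skipCount < skipLimit; // once the limit is reached, the step fails too
    }
}
```

The framework calls such a policy with the thrown exception and the number of items skipped so far, so the whole skip decision is in one place.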

Let’s get to the transactions now, again with the illustration. First we’ll have a look at a skip in an ItemProcessor.

So, if you get a skippable exception (or your SkipPolicy says it’s a skip), the transaction is rolled back. Spring Batch caches the items that have been read, and the item that caused the failure in the ItemProcessor is now excluded from that cache. Spring Batch starts a new transaction and uses the reduced cache as input for the process phase. If you configured a SkipListener, its onSkipInProcess method is called with the skipped item right before committing the chunk. If you configured a skip-limit, that number is checked on every skippable exception, and when it is reached, the step fails.
What does that mean? It means that you might get into trouble if you have a transactional reader or make the mistake of doing anything other than reading during the read phase. A typical transactional reader is a queue: you consume one message from the queue, and if the transaction is rolled back, the message is put back into the queue. With the caching mechanism shown in the illustration, messages would be processed twice. The Spring Batch developers therefore added the possibility to mark the reader as transactional by setting the attribute reader-transactional-queue on the chunk to true. With that set, the illustration would look different, because items would be re-read.
Even if you don’t have a transactional reader you might get into trouble, for example if you use an ItemReadListener to log read items to a transactional resource: those log entries get rolled back as well, even though all but one item were processed successfully.
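The caching behaviour described above can be sketched in plain Java. This is a simplified simulation of the framework’s bookkeeping, not Spring Batch code; all names, the fake processor and the marker exception are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified simulation of a chunk with a skip in the process phase:
// on a skippable exception the "transaction" is discarded, the failing item
// is removed from the cache of read items, and processing starts over.
class ProcessorSkipSimulation {

    // Fake processor that fails on one poisoned item.
    static String process(String item) {
        if (item.startsWith("bad")) {
            throw new IllegalStateException("skippable failure for " + item);
        }
        return item.toUpperCase();
    }

    // Returns the successfully processed chunk; skipped items are collected separately.
    static List<String> processChunk(List<String> readCache, List<String> skipped) {
        while (true) {
            List<String> out = new ArrayList<>();
            boolean rolledBack = false;
            for (String item : new ArrayList<>(readCache)) { // items are NOT re-read
                try {
                    out.add(process(item));
                } catch (IllegalStateException e) {
                    // "rollback": discard this transaction's results, exclude the
                    // failing item from the cache, start a new transaction
                    readCache.remove(item);
                    skipped.add(item);
                    rolledBack = true;
                    break;
                }
            }
            if (!rolledBack) {
                return out; // commit the chunk
            }
        }
    }
}
```

Note how only the processing results of the rolled-back transaction are thrown away; the read items stay cached, which is exactly why a transactional reader or a writing ItemReadListener causes trouble.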

It gets even more complicated when we have a skip during writing. Since the writer is called just once with all items, the framework does not know which item caused the skippable exception. It has to find out, and the only way to do so is to split the chunk into small chunks containing just one item each. Let’s have a look at the slightly more complicated diagram.

We now get a second loop, indicated in red. It starts with a skippable exception in our normal chunk, leading to a rollback (the yellow line). Now the framework has to find out which item caused the failure. For each item in the cached list of read items it starts its own transaction; the item is processed by the ItemProcessor and then written by the ItemWriter. If there is no error, the mini-chunk with one item is committed, and the iteration goes on with the next item. We expect at least one skippable exception, and when it happens, the transaction is rolled back and the item is marked as skipped. As soon as the iteration is complete, we continue with normal chunk processing.
I probably don’t need to mention that the problems with transactional readers apply here as well. In addition, it is possible to mark the processor as non-transactional by setting the attribute processor-transactional on the chunk to false (its default is true). If you do that, Spring Batch caches processed items and doesn’t re-execute the ItemProcessor on a write failure. You can only do that if there is no writing interaction with a transactional resource in the processing phase; otherwise those writes get rolled back on a write failure but are not re-executed.
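The scan described above can also be sketched in plain Java. Again, this is a simplified simulation (the re-run of the ItemProcessor during the scan is omitted for brevity, and all names are illustrative), not the framework’s actual code:

```java
import java.util.Arrays;
import java.util.List;

// Simplified simulation of the "scan" after a skippable write failure:
// the chunk is split into one-item mini-chunks, each in its own transaction,
// to find out which item caused the exception.
class WriteSkipSimulation {

    // Fake all-or-nothing writer: either the whole call succeeds, or nothing is written.
    static void write(List<String> items, List<String> sink) {
        for (String item : items) {
            if (item.startsWith("bad")) {
                throw new IllegalStateException("skippable failure");
            }
        }
        sink.addAll(items);
    }

    static void writeChunk(List<String> readCache, List<String> sink, List<String> skipped) {
        try {
            write(readCache, sink);          // normal chunk: one writer call, one transaction
        } catch (IllegalStateException e) {
            // rollback, then scan: one transaction per item to find the culprit(s)
            for (String item : readCache) {
                try {
                    write(Arrays.asList(item), sink);  // mini-chunk with a single item
                } catch (IllegalStateException ex) {
                    skipped.add(item);                 // mini-chunk rolled back, item skipped
                }
            }
        }
    }
}
```

This also makes the cost visible: a single bad item in a chunk of twenty turns one writer call into twenty-one, each with its own transaction.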

One more thing: what about skipping during reading? I didn’t do a diagram for that, because it’s quite simple: when a skippable exception occurs during reading, we just increase the skip count and keep the exception for a later call to the onSkipInRead method of the SkipListener, if configured. There’s no rollback.
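In pseudo-framework form (again an illustrative sketch, with a made-up parse failure standing in for a skippable read exception), the read phase is nothing more than this:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified sketch of skip-on-read: no rollback, just count the skip and
// remember the exception for a later SkipListener.onSkipInRead call.
class ReadSkipSimulation {

    static List<String> readChunk(Iterator<String> source, int chunkSize,
                                  List<Exception> skippedInRead) {
        List<String> chunk = new ArrayList<>();
        while (chunk.size() < chunkSize && source.hasNext()) {
            String raw = source.next();
            try {
                if (raw.isEmpty()) {
                    throw new IllegalArgumentException("unparsable line");
                }
                chunk.add(raw);
            } catch (IllegalArgumentException e) {
                skippedInRead.add(e); // kept for onSkipInRead; nothing to roll back
            }
        }
        return chunk;
    }
}
```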


As with the skip functionality, there are two ways of specifying retry behaviour in Spring Batch. The convenient standard way is specifying a retry-limit on the chunk and nesting retryable-exception-classes inside the chunk:

  <batch:chunk reader="myItemReader" writer="myItemWriter" commit-interval="20" retry-limit="15">
      <batch:retryable-exception-classes>
          <batch:include class="de.codecentric.MyRetryableException" />
      </batch:retryable-exception-classes>
  </batch:chunk>

As with skipping, you may specify your own RetryPolicy and plug it into the chunk:

  <batch:chunk reader="myItemReader" writer="myItemWriter" commit-interval="20" retry-policy="myRetryPolicy"/>

Let’s take a look at the diagram for retrying.

Whenever a retryable exception occurs during processing or writing, the chunk is rolled back. Spring Batch then checks whether the maximum number of retries is exceeded; if it is, the step fails. If it isn’t, all items that have been read before are input for the next process phase. Basically, all limitations that apply to skipping items apply here as well, and we can modify the transactional behaviour with reader-transactional-queue and processor-transactional in the same manner.
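The retry mechanics boil down to a loop like the following sketch. It is a simplification (a real RetryPolicy can be much more elaborate, and here the limit simply counts total attempts); the interface and names are illustrative, not Spring Batch API:

```java
import java.util.List;

// Simplified sketch of chunk retry: on a retryable exception the transaction is
// rolled back and the cached read items are processed and written again, until
// either the chunk commits or the attempt limit is exhausted.
class RetrySimulation {

    interface ChunkWriter {
        void write(List<String> items); // may throw a "retryable" exception
    }

    // retryLimit = maximum number of attempts (simplified); returns attempts used.
    static int writeWithRetry(List<String> readCache, ChunkWriter writer, int retryLimit) {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                writer.write(readCache); // process + write inside one transaction
                return attempts;         // commit
            } catch (IllegalStateException e) {
                // rollback; items are not re-read, only re-processed and re-written
                if (attempts >= retryLimit) {
                    throw e;             // retry limit exceeded: the step fails
                }
            }
        }
    }
}
```

A flaky writer that succeeds on the third call would make the chunk commit on the third attempt, while a writer that always fails lets the exception escape once the limit is reached.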
One important thing: at the time of writing (Spring Batch 2.1.8) there is a bug with failures during writing: if there’s a retryable exception during writing, only the first item gets reprocessed, and all other items in the cached list of read items are not reprocessed (https://jira.springsource.org/browse/BATCH-1761).


Spring Batch is a great framework offering functionality for complex processing requirements like skipping or retrying failed items, but you still need to understand what Spring Batch does under the hood to avoid problems. In this article we saw potential stumbling blocks when using the skip and retry functionality.


  • Riccardo

    Thanks a lot for your article.

    Spring Batch’s skipping behaviour is very smart, but it is also very dangerous from a performance point of view.

  • msns3ka

    21 October 2012 by msns3ka

    Thanks For the Excellent Article.

    Is the ItemReader retryable? I.e., if you use a JdbcCursorItemReader?

  • Sylvain

    3 December 2012 by Sylvain


    First of all, thank you for this really interesting article.
    To go further, we have a question about the restartability of Spring Batch.
    In order to test it, we developed a very basic job which reads, processes and writes a flat file
    using chunks with a commit interval of 20.
    The processor is first configured to throw a business exception on line 73.
    Running the job we got:
    80 lines read, 60 lines written in the table BATCH_STEP_EXECUTION.
    So the restart of the job starts reading at line 81 and not at 61, and lines 61 to 80 are not processed by either of the two runs!
    Is there something we missed???
    Thank you in advance for your answer!

    • Tobias Flohre

      5 December 2012 by Tobias Flohre

      Hi Sylvain,

      if you use proper transaction management, it should not behave the way you describe. Are you using the DataSourceTransactionManager? The business exception should cause a rollback so that the restart would begin with item 61. Sorry, but I cannot say more without seeing the whole configuration.

  • Anuj Kumar

    14 May 2013 by Anuj Kumar

    Excellent article and very well explained. It helped me a lot.

  • Prateek

    8 September 2013 by Prateek

    Thanks for taking out time to share your findings. It helped me.

  • Binh Thanh Nguyen

    17 December 2013 by Binh Thanh Nguyen

    Thanks, nice post

  • yuc

    It is really a wonderful article.
    I have been puzzled by one issue in our Spring Batch project for quite a long time.

    The issue is that while handling exceptions with onSkipInProcess and onSkipInWrite, the number of db connections increases sharply.

    Now I know that it comes from the separate transaction for each item in the read cache.

    I will avoid using the onSkip… methods in Spring Batch.
    They are really high-risk.

  • Ash McConnell

    Nice post, do you happen to know if it’s possible to define the “skippable-exception-classes” using Java Config?


  • shrihari

    10 July 2014 by shrihari

    I want to know: just as for a step or job we can get all the failure exceptions, e.g. stepContext.getFailureExceptions() gives you all the exceptions in that step, how do I get an exception thrown at chunk level?

  • Anil Sharma

    18 November 2014 by Anil Sharma

    Thanks for the great article.
    I am currently using sqlldr to load records but am trying to move to a Spring Batch job now.
    I have around a million records to be uploaded to the DB daily. sqlldr takes around 8-9 minutes.
    How can I improve performance using Spring Batch? If I use a commit-interval that is too large, whenever an exception is thrown Spring tries one record at a time, which slows the process. If I try a smaller commit-interval, performance is horrible.
    Can we do pre-processing on the data file and skip records with missing values before Spring processes them? Any ideas are appreciated.


  • Romain L

    19 November 2014 by Romain L

    Thanks a lot for this very clear and precise article. You saved my day !
    @Anil Sharma : I’m having the same problem. What I’ll do is process my items in the processor so that I’m sure no write exception will occur.

