Spring @Transactional - Isolation, Propagation

Good question, although not a trivial one to answer.

Propagation

Defines how transactions relate to each other. Common options:

  • REQUIRED: Code will always run in a transaction. Creates a new transaction or reuses one if available.
  • REQUIRES_NEW: Code will always run in a new transaction. Suspends the current transaction if one exists.

The default value for @Transactional is REQUIRED, and this is often what you want.

Isolation

Defines the data contract between transactions.

  • ISOLATION_READ_UNCOMMITTED: Allows dirty reads.
  • ISOLATION_READ_COMMITTED: Does not allow dirty reads.
  • ISOLATION_REPEATABLE_READ: If a row is read twice in the same transaction, the result will always be the same.
  • ISOLATION_SERIALIZABLE: Transactions are executed as if they ran one after another in sequence.

The different levels have different performance characteristics in a concurrent application. I think if you understand the dirty-read concept, you will be able to select a good option.

Defaults may vary between different databases. As an example, for MariaDB it is REPEATABLE READ.


Example of when a dirty read can occur:

  thread 1          thread 2
     |                  |
  write(x)              |
     |                  |
     |               read(x)
     |                  |
  rollback              |
     v                  v
  the value of x read by thread 2 is dirty (it was never committed)
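The timeline above can be modeled deterministically in plain Java, with a field standing in for a database row (no real database, threads, or isolation levels are involved; all names are illustrative):

```java
// Plain-Java model of the dirty-read timeline above. A field stands in for a
// database row; no real database or isolation is involved.
class DirtyReadSketch {
    private int committed = 10;         // last committed value of x
    private Integer uncommitted = null; // pending (uncommitted) write by thread 1

    void write(int v) { uncommitted = v; }    // thread 1: write(x)
    void rollback()   { uncommitted = null; } // thread 1: rollback

    // READ_UNCOMMITTED: thread 2 may see the pending value, i.e. a dirty read
    int readUncommitted() { return uncommitted != null ? uncommitted : committed; }

    // READ_COMMITTED: thread 2 only ever sees committed values
    int readCommitted() { return committed; }
}
```

After `write(99)`, `readUncommitted()` returns 99 even though that value is later rolled back and never committed, while `readCommitted()` keeps returning 10 throughout.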

So a sane default (if such can be claimed) could be ISOLATION_READ_COMMITTED, which only lets you read values that have already been committed by other running transactions, combined with a propagation level of REQUIRED. Then you can work from there if your application has other needs.


A practical example of where a new transaction will always be created when entering the provideService routine and completed when leaving:

public class FooService {
    private Repository repo1;
    private Repository repo2;

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void provideService() {
        repo1.retrieveFoo();
        repo2.retrieveFoo();
    }
}

Had we instead used REQUIRED, the method would join any transaction already open when entering the routine, and that transaction would remain open when leaving it.
Note also that the result of a rollback could differ, since several method executions could take part in the same transaction.


We can easily verify the behaviour with a test and see how results differ with propagation levels:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:/fooService.xml")
public class FooServiceTests {

    private @Autowired PlatformTransactionManager transactionManager;
    private @Autowired FooService fooService;

    @Test
    public void testProvideService() {
        TransactionStatus status = transactionManager.getTransaction(new DefaultTransactionDefinition());
        fooService.provideService();
        transactionManager.rollback(status);
        // assert repository values are unchanged ...
    }
}

With a propagation level of

  • REQUIRES_NEW: we would expect fooService.provideService() was NOT rolled back, since it created its own independent transaction.

  • REQUIRED: we would expect everything was rolled back and the backing store was unchanged.

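The two expected outcomes can be sketched in plain Java, with lists standing in for the backing store and the transactions' write buffers (illustrative names, no Spring involved):

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java model of the two rollback outcomes above. The store is the backing
// database; a transaction buffers writes and either commits them to the store
// or discards them on rollback. All names are illustrative.
class PropagationOutcomeSketch {
    final List<String> store = new ArrayList<>();

    // REQUIRES_NEW: provideService() runs in its own transaction, which commits
    // before the outer transaction is rolled back, so its writes survive.
    void requiresNewScenario() {
        List<String> outerTx = new ArrayList<>(); // outer transaction buffer
        List<String> innerTx = new ArrayList<>(); // independent inner transaction
        innerTx.add("foo");
        store.addAll(innerTx);                    // inner transaction commits
        outerTx.clear();                          // outer rollback discards only outer writes
    }

    // REQUIRED: provideService() joins the outer transaction, so the outer
    // rollback discards its writes as well.
    void requiredScenario() {
        List<String> outerTx = new ArrayList<>();
        outerTx.add("foo");                       // buffered in the joined transaction
        outerTx.clear();                          // outer rollback discards everything
        // nothing ever reaches the store
    }
}
```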
Spring @Transactional Isolation propagation

Scenario 2, with Propagation.REQUIRES_NEW, is what I have used. In case of a runtime exception during the parent method, I handled the exception in a try/catch, reverted the lock that had been updated in the DB as part of the new transaction, and rethrew the same exception from the catch block so that the parent transaction was rolled back too.

This approach becomes difficult when you have many DB states to revert individually, but for now it suffices.
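The compensation pattern described above can be sketched in plain Java, with a boolean flag standing in for the lock row committed by the REQUIRES_NEW transaction (all names are hypothetical; in real code the two helper methods would each be annotated @Transactional(propagation = Propagation.REQUIRES_NEW)):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the manual compensation described above. The flag stands in for a
// lock row that a REQUIRES_NEW transaction has already committed; on failure
// the caller must revert it explicitly before rethrowing.
class LockCompensationSketch {
    final AtomicBoolean lockRow = new AtomicBoolean(false);

    void acquireLock() { lockRow.set(true); }  // committed by its own transaction
    void revertLock()  { lockRow.set(false); } // compensating update, also its own tx

    void parentMethod(boolean failDuringWork) {
        acquireLock();                         // survives a parent rollback
        try {
            if (failDuringWork) throw new RuntimeException("work failed");
        } catch (RuntimeException e) {
            revertLock();                      // undo the committed lock by hand
            throw e;                           // rethrow so the parent tx rolls back too
        }
    }
}
```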

Overriding transaction propagation levels for methods having Spring's @Transactional

I do believe the only option is to replace TransactionInterceptor via a BeanPostProcessor, something like:

public class TransactionInterceptorExt extends TransactionInterceptor {

    @Override
    public Object invoke(MethodInvocation invocation) throws Throwable {
        // here some logic determining how to proceed with the invocation
        return super.invoke(invocation);
    }
}

public class TransactionInterceptorPostProcessor implements BeanFactoryPostProcessor, BeanPostProcessor, BeanFactoryAware {

    @Setter
    private BeanFactory beanFactory;

    @Override
    public void postProcessBeanFactory(@NonNull ConfigurableListableBeanFactory beanFactory) throws BeansException {
        beanFactory.addBeanPostProcessor(this);
    }

    @Override
    public Object postProcessBeforeInitialization(@NonNull Object bean, @NonNull String beanName) throws BeansException {
        if (bean instanceof TransactionInterceptor) {
            TransactionInterceptor interceptor = (TransactionInterceptor) bean;
            TransactionInterceptor result = new TransactionInterceptorExt();
            result.setTransactionAttributeSource(interceptor.getTransactionAttributeSource());
            result.setTransactionManager(interceptor.getTransactionManager());
            result.setBeanFactory(beanFactory);
            return result;
        }
        return bean;
    }
}

@Configuration
public class CustomTransactionConfiguration {

    @Bean
    //@ConditionalOnBean(TransactionInterceptor.class)
    public static BeanFactoryPostProcessor transactionInterceptorPostProcessor() {
        return new TransactionInterceptorPostProcessor();
    }
}

However, I would agree with @jim-garrison's suggestion to refactor your Spring beans.

UPD.

But you favour refactoring the beans instead of following this approach. So, for the sake of completeness, can you please mention any issues/shortcomings with this approach?

Well, there are plenty of things/concepts/ideas in the Spring framework which were implemented without understanding/anticipating the consequences (I believe the goal was to make the framework attractive to inexperienced developers), and the @Transactional annotation is one of them. Let's consider the following code:

@Transactional(propagation = Propagation.REQUIRED)
public void doSomething() {
    // do something
}

The question is: why do we put the @Transactional(propagation = Propagation.REQUIRED) annotation above that method? Someone might say something like this:

    that method modifies multiple rows/tables in the DB and we would like to avoid inconsistencies in our DB; moreover, Propagation.REQUIRED does not hurt anything, because according to the contract it either starts a new transaction or joins the existing one.

and that would be wrong:

  • the @Transactional annotation poisons stack traces with irrelevant information
  • in case of an exception, it marks the existing transaction it joined as rollback-only; after that, the caller has no option to compensate for that exception

In most cases, developers should not use @Transactional(propagation = Propagation.REQUIRED); technically, we just need a simple assertion about the transaction status.
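Such an assertion might look like the following sketch. In Spring the check would be TransactionSynchronizationManager.isActualTransactionActive(); here a thread-local flag stands in for it so the idea is runnable without Spring, and all names are illustrative:

```java
// Fail-fast alternative to @Transactional(propagation = Propagation.REQUIRED):
// assert that a transaction is already active instead of silently joining or
// starting one. In Spring the check would be
// TransactionSynchronizationManager.isActualTransactionActive(); a thread-local
// flag stands in for it here.
final class TransactionAssertionSketch {
    static final ThreadLocal<Boolean> txActive = ThreadLocal.withInitial(() -> false);

    static void requireActiveTransaction() {
        if (!txActive.get()) {
            throw new IllegalStateException("expected an active transaction");
        }
    }

    static void doSomething() {
        requireActiveTransaction(); // make the caller's obligation explicit
        // ... modify rows/tables ...
    }
}
```

This keeps the transaction boundary where it belongs (at the caller) while still catching misuse immediately.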

Using @Transactional(propagation = Propagation.REQUIRES_NEW) is even more harmful:

  • in case of an existing transaction, it acquires another JDBC connection from the connection pool, so you start getting 2+ connections per thread, which hurts performance and connection-pool sizing
  • you need to watch carefully the data you are working with: data corruption and self-locks are possible consequences of @Transactional(propagation = Propagation.REQUIRES_NEW), because now you have two incarnations of the same data within the same thread

In most cases, @Transactional(propagation = Propagation.REQUIRES_NEW) is an indicator that your code requires refactoring.

So, the general idea about the @Transactional annotation is: do not use it everywhere just because you can. Your question actually confirms this idea: you have failed to tie three methods together just because the developer had some assumptions about how those methods should be executed.

@Transactional(isolation = Isolation.SERIALIZABLE, propagation = Propagation.REQUIRES_NEW) not working as expected

Using the InnoDB engine resolved the issue. To change the engine, use the correct dialect:
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

MyISAM does not support transactions, which is why no transactions were being created in the above problem.
InnoDB supports transactions as well as foreign keys.


