
Tips, tricks, and Springs: Best Spring practices

Category: Software development

In this blog post, I’m going to share some philosophy, tips, and tricks regarding the Spring framework that I consider good practice. It is a high-level overview of the great infrastructure utilities Spring offers us as developers and how they best integrate into real-world applications and scenarios. Many general programming best practices are also part of the Spring framework’s architecture and programming mindset (which is what makes it so great), so there will be a fair amount of overlap between general programming and OOP best practices and what Spring brings to the table.

I am going to cover a couple of topics that I found important for this kind of overview. The topics are not necessarily connected or bound by some specific order. With that in mind, let us dive in!

Dependency Injection Methods

Spring offers us multiple ways to perform dependency injection. At its core, Spring is an Inversion of Control (IoC) container. It implements the Dependency Injection (DI) pattern, which is really important in large-scale applications: object instantiation is not hardcoded, so we maintain more flexibility while refactoring and redesigning our classes. Change is inevitable, and the DI pattern is great at mitigating the issues change can bring. There are three main ways to inject dependencies into our Spring-managed beans.

Field-based injection

One of the most straightforward ways of dependency injection, which was much more popular a couple of years ago and has been declining lately, is field-based injection. It is done by simply annotating a class member variable with the @Autowired annotation, and that’s it. You are set to go. I have found that this type of injection can still be useful in some scenarios when using inheritance with beans.

For example, if there is a superclass that contains a specific constructor and depends on something that we do not want or don’t have access to in our subclass constructor, we can make a workaround by utilizing field-based injection and that way the default construction strategy of the superclass is preserved. It does seem like a bit of a hack, but situations like this could occur when working with some libraries and frameworks.
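A sketch of that workaround might look as follows. The names here are hypothetical: LibraryBaseHandler stands for a framework-provided superclass whose default construction we must preserve, and AuditService for some bean we want injected.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Hypothetical base class from a library: we cannot add constructor
// parameters here without breaking the framework's construction contract.
abstract class LibraryBaseHandler {
    protected LibraryBaseHandler() {
        // framework-required default construction
    }
}

@Component
public class AuditedHandler extends LibraryBaseHandler {

    // Field injection preserves the superclass's default construction
    // strategy, since we don't thread the dependency through a constructor.
    @Autowired
    AuditService auditService; // AuditService is a hypothetical bean
}
```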

On the other hand, the downside of this injection method is that it is easy to get carried away and autowire a bunch of dependencies, which can easily lead to a violation of the Single Responsibility Principle. That means we have violated the cohesion of our class, and it tends to produce messy code that is harder to maintain. Also, be careful to avoid the private access modifier on @Autowired fields: if you don’t, Spring has to override access to the private field with a reflective java.lang.reflect.Field.setAccessible call, which performs worse and is conceptually suboptimal.

Constructor-based injection

This one is the most popular lately and is the official recommendation of the Spring team. It addresses the cohesion issue mentioned above in a simple way: a constructor with far more parameters than it should have is visually much easier to spot. It isn’t a hard constraint, but to engineers who care about the clean design of their code, it is an obvious red flag that something could be wrong with the design of the class.

However, much more important than that, I think, is the intuition behind a constructor as the building mechanism of a class. It is the most intuitive method, and it also allows us to declare our class members as final (immutable, as members that do not expose internal state really should be). On top of this, the @Autowired annotation is not mandatory, which is great because we avoid coupling to Spring itself: if we wished to create an instance of the class manually or with a different framework, we would not need to refactor our code.

Setter-based injection

Last but not least is setter-based injection. For this injection method, we must declare setter methods and annotate them with @Autowired. It shares a drawback with field-based injection: the number of dependencies can grow more easily and is harder to spot than with constructor-based injection.

On top of that, we lose the immutable class design that constructor injection allows, and to me it seems generally more error-prone: we could be led to write logic that depends on other dependencies that have not been set yet, or we could override a dependency that was already injected through the constructor. On the upside, it can be useful in conjunction with constructor-based injection, where dependencies injected via the constructor are considered mandatory and the ones injected through setters are optional, based on some additional logic.
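A sketch of that mandatory-plus-optional combination could look like this. ReportRepository and ReportFormatter are hypothetical beans used purely for illustration.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ReportGenerator {

    // Mandatory dependency: injected through the constructor, can be final
    private final ReportRepository repository;

    // Optional dependency: falls back to a default if no bean is registered
    private ReportFormatter formatter = ReportFormatter.plainText();

    public ReportGenerator(final ReportRepository repository) {
        this.repository = repository;
    }

    // required = false: Spring calls this setter only if a
    // ReportFormatter bean actually exists in the context
    @Autowired(required = false)
    public void setFormatter(final ReportFormatter formatter) {
        this.formatter = formatter;
    }
}
```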

Personal preference

Ultimately, I prefer constructor-based injection whenever possible, for the reasons stated above. It allows for immutable class design, it is intuitive and not dependent on framework technicalities, and it makes it easier to spot when the dependency graph of a class is getting too heavy, which is a cue to consider breaking it down into multiple, cleaner classes.

Application context bean registration 

The central part of a Spring application is the application context. In the previous section, we talked about dependency injection; to make that pattern feasible, we need some data structure that serves as a common store for the managed beans that can be injected as dependencies. This is where the application context comes into play. If we want to use dependency injection, we need to tell Spring which classes should be injectable and provide a blueprint for how to create them.

Traditionally, the blueprints were defined in XML files, which was the first method of defining beans in Spring. Today it is no longer recommended, but you may still stumble upon it in legacy projects, especially those predating Spring Boot. What is recommended today is Spring’s annotation-based model for bean declaration wherever possible. This includes @Component as the base annotation, plus the component stereotypes such as @Controller, @Service, and @Repository, which bring more semantics into the class annotation metadata and mark certain classes for special behavior supported by Spring, such as the routing and serialization features @Controller provides.

However, in situations where our beans are more complex and cannot simply be declared by an annotation, we can use Java-based bean configuration, which, instead of providing bean metadata as in traditional XML, works through factory methods annotated with @Bean that Spring scans and calls during application startup (or at runtime) to build the appropriate bean instances.

Now, given class Agency04, here are application context bean registration examples:

public class Agency04 {
    private final String address;
    private final SoftwareDevelopmentService providedService;

    public Agency04(final String address, final SoftwareDevelopmentService providedService) {
        this.address = address;
        this.providedService = providedService;
    }
}

Raw Java class as bean candidate.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="agency04" class="com.agency04.blog.springtips.Agency04" scope="singleton">
        <constructor-arg name="address" value="Ulica Republike Austrije 33"/>
        <constructor-arg name="providedService" ref="softwareDevelopmentService"/>
    </bean>

    <bean id="softwareDevelopmentService" class="com.agency04.blog.springtips.SoftwareDevelopmentService"/>
</beans>

Traditional XML bean declaration – not very popular anymore, but good because it avoids coupling class definition to Spring annotations.

@Component
public class Agency04 {
    private final String address;
    private final SoftwareDevelopmentService providedService;

    public Agency04(@Value("Ulica Republike Austrije 33") final String address,
                    final SoftwareDevelopmentService providedService) {
        this.address = address;
        this.providedService = providedService;
    }
}

Declarative annotation-based bean declaration – very popular and recommended even though it couples your code with Spring through @Component annotation usage. Notice that @Autowired is not required on the constructor, as mentioned earlier.

@Configuration
public class BeanConfiguration {

    @Bean
    public SoftwareDevelopmentService softwareDevelopmentService() {
        return new SoftwareDevelopmentService();
    }

    @Bean
    public Agency04 agency04(
            @Value("Ulica Republike Austrije 33") final String address,
            final SoftwareDevelopmentService softwareDevelopmentService
    ) {
        return new Agency04(address, softwareDevelopmentService);
    }
}

Java configuration-based bean registration – also very popular and mostly used when heavier configuration and bean creation logic is required, doesn’t couple bean source code with Spring.

Bean types and why they really matter

Spring supports different bean types, which differ by scope. The four main scopes are Singleton, Prototype, Request, and Session, and more recent Spring versions add two more: Application and WebSocket. You can find more about the technicalities of each scope in the official documentation. Here, I would rather focus on some good practices and examples of how these scopes can be utilized.

One of the most important aspects to have in mind when talking about scopes is the concurrency model of the application. In this post, I’m going to focus on Spring MVC’s concurrency model which is based on a thread pool where each user request is delegated to one thread that executes the whole request synchronously. In other words, each HTTP request, for example, maps to one thread in a 1:1 fashion. There are other concurrency models such as in the Spring Webflux module, but that is outside of the current scope (pun intended). So why does this matter? 

Consider the Singleton bean scope (Spring’s default if not specified otherwise): Spring creates exactly one instance of the class, which is then reused in every bean into which it is injected. There is one catastrophically bad mistake you can make if you are unaware of this, and that is maintaining mutable state in singleton-scoped beans. Since that single instance is shared across the whole application and by all threads, you can accidentally create race conditions and nasty bugs that manifest as weird, inconsistent behavior: all request executions rely on shared state that may be relevant and correct for one user but completely and atrociously wrong for another.
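The problem can be shown in plain Java, without any Spring. The class below plays the role of a stateful singleton hammered by a request thread pool: the unsynchronized counter can lose updates under contention, while the AtomicLong variant stays correct.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Plain-Java illustration of the singleton-state problem described above:
// one shared instance mutated by many threads, as a stateful singleton bean would be.
public class SharedStateDemo {

    static class UnsafeCounter {
        long value; // shared mutable state, no synchronization: races under load
        void increment() { value++; }
    }

    static class SafeCounter {
        final AtomicLong value = new AtomicLong(); // thread-safe alternative
        void increment() { value.incrementAndGet(); }
    }

    static void hammer(Runnable task) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8); // like a request thread pool
        for (int i = 0; i < 100_000; i++) {
            pool.submit(task);
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        UnsafeCounter unsafe = new UnsafeCounter();
        SafeCounter safe = new SafeCounter();
        hammer(unsafe::increment); // unpredictable: increments can be lost
        hammer(safe::increment);   // always 100000
        System.out.println("unsafe: " + unsafe.value);
        System.out.println("safe: " + safe.value.get());
    }
}
```

In a real application the fix is usually not an AtomicLong but removing the state from the singleton altogether, as discussed next.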

In contrast, if you have a use case where you need to maintain state other than immutable constants, it is much wiser to rely on the Prototype or Request scopes, for example. Prototype is the narrowest scope: each time the bean is requested from the application context, a new instance is provided. This is great if you want access to the application context on bean creation but still want all the benefits of a simple object whose state is being mutated. It is thread-safe with respect to the singleton story above, since you get a new instance every time.

In addition, I would like to dedicate some attention to the Request scope, since I think it can be extremely useful in some cases. Because the Request scope means the application context returns the same instance for the duration of one HTTP request, it is an ideal candidate for HTTP-request-level caching, which can be used to optimize backend performance in cases where you make multiple database or external service calls in different places within the same request.
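One possible shape for such a request-level cache is sketched below, assuming Spring Web is on the classpath. CustomerDto and the loader function are hypothetical placeholders for whatever expensive lookup you want to avoid repeating.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;

// Sketch: one instance of this bean exists per HTTP request, so the map is
// effectively private to that request and safe to mutate.
// CustomerDto and the loader are hypothetical.
@Component
@RequestScope
public class RequestLevelCache {

    private final Map<Long, CustomerDto> customersById = new ConcurrentHashMap<>();

    public CustomerDto getOrLoad(final Long id, final Function<Long, CustomerDto> loader) {
        // Repeated lookups for the same id within one request hit the map
        // instead of the database or an external service.
        return customersById.computeIfAbsent(id, loader);
    }
}
```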

In conclusion, by now you should get the idea of why bean scopes play an important role in Spring and why they should be used with care and awareness because instead of nasty bugs, you could use them to significantly improve the quality and performance of your code.

Abstraction – coding to an interface

By now, unless you are a junior or apprentice developer, you will have heard the expression “coding to an interface”: the idea of defining dependencies as abstractions so as not to depend on the concrete implementation details of some specific unit of execution. This is considered good practice not only in Spring but in programming overall, since it allows for better code decoupling, easier refactoring, and reuse.

This generally means that higher-level components should depend on abstractions, while low-level details are abstracted away and encapsulated in concrete component implementations. It allows us to maintain logical relationships between our components (a dependency graph, if you like) and to easily switch the implementation behind an interface without any refactoring, since the API contract remains the same.

One useful scenario for this could be managing your running and testing environments. For example, if you want to mock the functionality of some specific bean in your tests, you can create a different implementation for the same interface in the test application context and use that implementation while maintaining the existing structure of all the other code. 
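The idea can be demonstrated without any Spring at all. The service below depends only on an interface, so a test context can register a stub implementation and nothing else changes; all the names here are illustrative.

```java
import java.util.List;

public class CodingToInterfaceDemo {

    // The abstraction the high-level code depends on
    interface CustomerGateway {
        List<String> fetchCustomerNames();
    }

    // Production implementation would make a real external call
    static class HttpCustomerGateway implements CustomerGateway {
        @Override
        public List<String> fetchCustomerNames() {
            throw new UnsupportedOperationException("real HTTP call, not used in tests");
        }
    }

    // Test double registered in the test application context instead
    static class StubCustomerGateway implements CustomerGateway {
        @Override
        public List<String> fetchCustomerNames() {
            return List.of("Ada", "Linus");
        }
    }

    // High-level component: identical regardless of which implementation is wired in
    static class GreetingService {
        private final CustomerGateway gateway;

        GreetingService(final CustomerGateway gateway) {
            this.gateway = gateway;
        }

        List<String> greetings() {
            return gateway.fetchCustomerNames().stream()
                    .map(name -> "Hello, " + name)
                    .toList();
        }
    }

    public static void main(String[] args) {
        GreetingService service = new GreetingService(new StubCustomerGateway());
        System.out.println(service.greetings()); // [Hello, Ada, Hello, Linus]
    }
}
```

In a Spring test, the stub would simply be declared as a @Bean in the test configuration, shadowing the production implementation.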

Moreover, there is one other, often hidden, reason why you should consider programming to an interface in Spring, and that is the way Spring handles your beans under the hood. Usually, when you annotate your classes or methods with various annotations, Spring provides the “magical” infrastructure features by creating proxy objects around your class (for more information, read up on the Proxy design pattern). Some examples are the @Transactional and @Cacheable annotations. When you don’t use an interface, Spring has to rely on bytecode generation and manipulation libraries to enhance your classes with these extra features; examples of such libraries on the JVM are CGLIB, Javassist, and ASM. However, if your class implements an interface, Spring can leverage the JDK’s native proxying mechanism, the Java dynamic proxy, so there is no need to rely on third-party libraries, which is often cleaner and faster.
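A stripped-down look at the mechanism, in pure JDK code: a dynamic proxy wraps the real object and adds behavior around every interface call, which is roughly what Spring-generated advice such as @Transactional does (the transaction begin/commit here is simulated with a log).

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Minimal sketch of JDK dynamic proxying, the mechanism Spring can use
// when a bean implements an interface.
public class DynamicProxyDemo {

    interface AccountService {
        String transfer(String from, String to);
    }

    static class AccountServiceImpl implements AccountService {
        @Override
        public String transfer(String from, String to) {
            return "transferred " + from + " -> " + to;
        }
    }

    static AccountService proxied(AccountService target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("begin;");                 // e.g. open a transaction
            Object result = method.invoke(target, args);
            log.append("commit;");                // e.g. commit it
            return result;
        };
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                handler);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        AccountService service = proxied(new AccountServiceImpl(), log);
        System.out.println(service.transfer("A", "B")); // transferred A -> B
        System.out.println(log);                        // begin;commit;
    }
}
```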

These are the principles that we, as engineers, should always strive for.

Externalize environment variables and configuration

Never keep configuration values, or even worse, secrets, in your application’s source code. Spring provides the well-known application.properties file (application.yaml is also supported) and application-profileName.properties files, which are picked up when the application runs under a specific infrastructure profile. These values can then be “autowired” into your Spring-managed beans similarly to other dependency beans. For this, you would use the @Value annotation for wiring values individually, or the @ConfigurationProperties annotation on a simple POJO class.

Suppose we have an application.properties file:

agency04.contact.address=Ulica Republike Austrije 33
agency04.contact.phone=099 998 1532
agency04.contact.email=info@ag04.com

Spring allows us to auto-wire these values into our beans by property key:

@Component
public class Agency04 {
    private final String address;
    private final SoftwareDevelopmentService providedService;

    public Agency04(@Value("${agency04.contact.address}") final String address,
                    final SoftwareDevelopmentService providedService) {
        this.address = address;
        this.providedService = providedService;
    }
}

or if we want to create a properties class holder, we could do

@Component
@ConfigurationProperties(prefix = "agency04.contact")
public class Agency04Properties {
    private String address;
    private String phone;
    private String email;

    // getters and setters
}

@Component
public class Agency04 {
    private final Agency04Properties agency04Properties;
    private final SoftwareDevelopmentService providedService;

    public Agency04(final Agency04Properties agency04Properties,
                    final SoftwareDevelopmentService providedService) {
        this.agency04Properties = agency04Properties;
        this.providedService = providedService;
    }
}

Always externalize bean configuration values, such as database connection config, external service connection config, and similar. This allows for cleaner configuration tracking, since everything is centralized in one place instead of scattered as constants across the source code. In addition, it is a good idea to externalize part of the configuration into a datastore if you want the flexibility to change application configuration at runtime without redeploying.

These are generally the two most common reasons to do this. Moreover, you have already partially prepared your hosting infrastructure to parametrize these same configuration properties, which is very useful and something we should always keep in mind, since configuration is half of the solution for well-designed applications and systems.

One more thing I would like to mention on this topic is secrets, such as passwords and API keys. Always avoid hardcoding these values in application code, even in properties files. Spring allows property value placeholders in properties files, so values like this can be passed in externally at process start time.

If we add this to the previous application.properties

agency04.secret=${agency04Secret}

and pass the property at the time of starting the application, the placeholder will resolve into the passed value.

java -Dagency04Secret=SecretValue -jar applicationName.jar 

This also has the benefit of not committing your secret keys into Git repositories, so you can maintain a clear, centralized secret storage and exchange policy for not just one application but the whole infrastructure, which is always a good idea.

Another solution to this problem is storing encrypted secrets in properties files and then providing the symmetric encryption/decryption key at application startup, so that the application can “unpack” the secrets before wiring them into configuration components. There is a great, simple way to achieve this in Spring Boot using, for example, the Jasypt library via the jasypt-spring-boot-starter autoconfiguration dependency.

For more information, you can check it out here https://github.com/ulisesbocchio/jasypt-spring-boot .

Cohesion chaos – Transaction Script Pattern

The Transaction Script pattern is a procedural pattern where all logic is organized in a single procedure that executes sequentially and orchestrates a series of persistence calls or business logic transformations, with the goal of finishing in a data state that is consistent with the business rules. It is one of the most common patterns across the industry. It can also often be seen on projects driven by a quick-and-dirty management mindset.

Now, the pattern itself isn’t bad at all and definitely has its use cases. The reason is simple: code like this is very easy to write, and it is one of the most primitive forms of abstraction. It becomes even easier when working with a layered architecture (controller-service-repository), which is quite popular in enterprise applications and Spring. And, as I said, it is not bad at all; we as engineers should value simplicity and be careful not to over-engineer where it is not needed. However, what I have found with this style and mindset is that we tend to write more procedural code in environments where we should really be focused on OOP and its holy grail: strong cohesion and encapsulation.

Consider an example below.

public class SimpleNumber {
    private int value;

    public SimpleNumber() { }

    public SimpleNumber(final int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public void setValue(final int value) {
        this.value = value;
    }

    // equals, hashcode ...
}


public class SimpleNumberDescriptionDto {
    private final int value;
    private final boolean isEven;
    private final boolean isPrime;

    public SimpleNumberDescriptionDto(
            final int value,
            final boolean isEven,
            final boolean isPrime
    ) {
        this.value = value;
        this.isEven = isEven;
        this.isPrime = isPrime;
    }

    public int getValue() {
        return value;
    }

    public boolean isEven() {
        return isEven;
    }

    public boolean isPrime() {
        return isPrime;
    }

    // equals, hashcode ...
}


@Repository
public class SimpleNumberRepositoryImpl implements SimpleNumberRepository {
    private final List<SimpleNumber> simpleNumberList =
            List.of(
                new SimpleNumber(7),
                new SimpleNumber(56),
                new SimpleNumber(115),
                new SimpleNumber(311),
                new SimpleNumber(512),
                new SimpleNumber(8719),
                new SimpleNumber(12344)
            );

    @Override
    public Collection<SimpleNumber> getAll() {
        return simpleNumberList;
    }
}


@Service
public class SimpleNumberServiceImpl implements SimpleNumberService {
    private final SimpleNumberRepository repository;

    public SimpleNumberServiceImpl(final SimpleNumberRepository repository) {
        this.repository = repository;
    }

    @Override
    public Collection<SimpleNumberDescriptionDto> getAllDescriptionDto() {
        return repository
                .getAll()
                .stream()
                .map(simpleNumber -> {
                    final int value = simpleNumber.getValue();
                    final boolean isEven = value % 2 == 0;
                    boolean isPrime;
                    if (value <= 1) {
                        isPrime = false;
                    } else if (value == 2) {
                        isPrime = true;
                    } else if (value % 2 == 0) {
                        isPrime = false;
                    } else {
                        isPrime = true;
                        for (int i = 3; i <= Math.sqrt(value); i += 2) {
                            if (value % i == 0) {
                                isPrime = false;
                                break;
                            }
                        }
                    }
                    return new SimpleNumberDescriptionDto(value, isEven, isPrime);
                })
                .collect(Collectors.toList());
    }
}

The thing that bugs me the most with this approach is the mixing of high-level and low-level coding details. We have a repository, which is a high-level component and should be hidden behind an interface; it performs the high-level operation of fetching data from the datastore. Then, immediately after that, we perform low-level transformation logic on the data, which does not cohere well with the place where it is executed.

If you have a couple of these low-level transformations that need to occur, your transaction script methods could grow to a couple of hundred lines of code which will be pretty hard to track. You could perfectly validly argue that these low-level details could be broken down into a number of smaller methods or static helper functions.

For example:

@Service
public class SimpleNumberServiceImpl implements SimpleNumberService {
    private final SimpleNumberRepository repository;

    public SimpleNumberServiceImpl(final SimpleNumberRepository repository) {
        this.repository = repository;
    }

    @Override
    public Collection<SimpleNumberDescriptionDto> getAllDescriptionDto() {
        return repository
                .getAll()
                .stream()
                .map(simpleNumber -> {
                    final int value = simpleNumber.getValue();
                    final boolean isEven = Calculator.isEven(value);
                    final boolean isPrime = Calculator.isPrime(value);
                    return new SimpleNumberDescriptionDto(value, isEven, isPrime);
                })
                .collect(Collectors.toList());
    }
}

That is better, since we somewhat separate high-level code from low-level code, and it allows for better code reuse since we can rely on the helper functions throughout the whole application. Sometimes this is totally fine, but what I am getting at is that it leads to more and more procedural programming in an environment where OOP principles should be the more natural choice.

We created specific low-level utility and helper functions when this logic could easily have been abstracted away into the object that holds the data itself. Why should the service, as a high-level orchestrating component, be aware of the intricacies of prime calculation or of the lower-level component that implements the algorithm?

In my opinion, this is a detail that should be solved and encapsulated in the domain class itself. Generally, we could use getter and setter methods for such things since that right there is the boundary between the raw data and the format that we want to present to the world. In this concrete case, we use additional methods which perform the calculations or, in other words, encapsulate the business logic of this specific domain.

This is what encapsulation is all about. If we refactor the previous example in this manner, we see that there is no longer any need for additional logic in the service layer, and there is a clean boundary between high-level and low-level components. All the pieces fall into place.

public class SimpleNumber {
    private int value;

    public SimpleNumber() { }

    public SimpleNumber(final int value) {
        this.value = value;
    }

    public boolean isEven() {
        return Calculator.isEven(value);
    }

    public boolean isPrime() {
        return Calculator.isPrime(value);
    }

    // getters, setters, equals, hashcode
}


@Service
public class SimpleNumberServiceImpl implements SimpleNumberService {
    private final SimpleNumberRepository repository;

    public SimpleNumberServiceImpl(final SimpleNumberRepository repository) {
        this.repository = repository;
    }

    @Override
    public Collection<SimpleNumberDescriptionDto> getAllDescriptionDto() {
        return repository
                .getAll()
                .stream()
                .map(simpleNumber -> new SimpleNumberDescriptionDto(
                        simpleNumber.getValue(),
                        simpleNumber.isEven(),
                        simpleNumber.isPrime()
                ))
                .collect(Collectors.toList());
    }
}

Sometimes a discouraging obstacle to this approach is the tooling in the frameworks we use. Spring, for example, relies on Java reflection to do a lot of work behind the scenes, and for that you often need dumb boilerplate getter and setter methods. This is very common in the Spring ecosystem, including ORM implementations and data serializers and deserializers. However, that doesn’t mean the guiding principle I am talking about cannot be achieved. More to that, we should work hard to isolate our core business logic into classes and components that are not bound to any framework and are technology agnostic, so that we naturally write future-proof code as much as possible.

Just because we use a framework such as Spring to handle a lot of infrastructure boilerplate, it doesn’t mean we should forget all the great coding principles formed over the years of computer science and engineering. Instead, we should use the best of both worlds, since, believe it or not, those same frameworks are built on the very principles that we so freely tend to disregard in our transaction scripts.

Centralized exception handling

Exception handling is a cross-cutting concern. This means that it is an aspect of application infrastructure that can be generalized and should be implemented in a centralized and universal manner so that it can be reused for different application use cases. It would be a bad idea if we tried to add custom error handling to each individual use case within our application. That would probably result in a lot of boilerplate code and code duplication which would be very hard to maintain as the application grows. 

Luckily, Spring has some great features to help us implement global error-handling interceptors, completely decoupled from the existing code, as all cross-cutting concerns should be. Examples of such tools are the @ControllerAdvice and @RestControllerAdvice annotations. The conceptual idea behind them comes from aspect-oriented programming, a popular technique in frameworks like Spring for implementing cross-cutting concerns. The advice represents the specific cross-cutting logic we are implementing, and the controller (or rest controller) represents the so-called pointcut, i.e., where we want the advice to apply. In these terms, the two annotations tell Spring to generate a wrapper, or proxy, around your @Controller or @RestController classes, which are usually the entry points into a Spring application from the outside. Because of this, the aforementioned annotations are a great tooling candidate for implementing global exception handling.

Since controllers are the entry points to your application’s business logic, any exception that occurs will propagate back up the call stack all the way to the controller and beyond, where we intercept it using our controller advice.

@RestControllerAdvice
public class GlobalErrorHandler {

    @ExceptionHandler(EntityNotFoundException.class)
    public ResponseEntity<BaseRestError> handleEntityNotFoundException(final Exception exception) {
        return new ResponseEntity<>(
                new BaseRestError("Resource not found"),
                HttpStatus.NOT_FOUND
        );
    }

    @ExceptionHandler(Exception.class)
    public ResponseEntity<BaseRestError> handleGenericException(final Exception exception) {
        return new ResponseEntity<>(
                new BaseRestError("Oops! There's been an internal server error"),
                HttpStatus.INTERNAL_SERVER_ERROR
        );
    }

    private static class BaseRestError {
        private final String message;

        public BaseRestError(final String message) {
            this.message = message;
        }

        public String getMessage() {
            return message;
        }
    }
}

So, as mentioned, the @ControllerAdvice and @RestControllerAdvice annotations generate a proxy around our controllers, and by using @ExceptionHandler annotations, Spring will watch for any exceptions thrown upstream and try to match them to the exception types defined within the annotation. If the types match, the corresponding handler method is called. This is a very popular and powerful way to implement global error handling in Spring.

There are other ways to achieve this concept, but it almost always comes down to implementing some kind of a proxy or a higher-order component. If you would like to try out a more programmatic implementation with a bit fewer annotations and “magic“, you should consider implementing a higher-order component/function, an idea from functional programming where we pass a function as an argument or return a function from another function. 

An example below tries to demonstrate the HoC implementation of the controller advice above. 

@Component
public class GlobalErrorHandlerHoc {

    public <T> ResponseEntity<?> withErrorHandling(final Supplier<T> valueSupplier) {
        try {
            final T payload = valueSupplier.get();
            return ResponseEntity.ok(payload);
        } catch (EntityNotFoundException exception) {
            return new ResponseEntity<>(
                    new BaseRestError("Resource not found"),
                    HttpStatus.NOT_FOUND
            );
        } catch (Exception exception) {
            return new ResponseEntity<>(
                    new BaseRestError("Oops! There's been an internal server error"),
                    HttpStatus.INTERNAL_SERVER_ERROR
            );
        }
    }

    private static class BaseRestError {
        private final String message;

        public BaseRestError(final String message) {
            this.message = message;
        }

        public String getMessage() {
            return message;
        }
    }
}

Call example:

final var response = globalErrorHandlerHoc.withErrorHandling(() -> simpleNumberService.getAllDescriptionDto());

Do not keep your business logic in the controller layer

Controllers are entry points into Spring applications. For this reason, and to keep our business logic independent of communication protocols, we should never keep the logic in the controller layer itself. Controllers are like ports where data arrives, and they should only deal with transport mechanisms like protocol specifics and unpacking the data into a structure that can be processed by the business logic layers further down. They are a piece of infrastructure that shields the business rules our application implements from the external communication processes. The same concept applies to the repository layer as well. Just as the controller layer deals with client communication, the repository layer also deals with external communication, but this time with the datastore.

These kinds of points of external communication, whether it’s the downstream client, a datastore, or an external service should always be abstracted away from the application business logic, separated by a clean interface. This way we achieve that our business logic implementation stays clean, portable, and independent of specific infrastructure code, frameworks, or protocols used for outside data exchange.
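As a minimal sketch of this separation (the GreetingRepository, InMemoryGreetingRepository, and GreetingService names below are hypothetical, not from any specific codebase): the business logic depends only on a plain interface, while Spring-specific controller and repository adapters would live at the edges.

```java
import java.util.List;
import java.util.Locale;
import java.util.concurrent.CopyOnWriteArrayList;

// Port: the business logic only knows this interface, not the datastore technology.
interface GreetingRepository {
    List<String> findAll();
    void save(String expression);
}

// Adapter: an in-memory stand-in; a real one would be a Spring Data repository
// living in the infrastructure layer.
class InMemoryGreetingRepository implements GreetingRepository {
    private final List<String> store = new CopyOnWriteArrayList<>();

    @Override
    public List<String> findAll() {
        return List.copyOf(store);
    }

    @Override
    public void save(final String expression) {
        store.add(expression);
    }
}

// Business logic: plain Java, no framework imports, trivially unit-testable.
class GreetingService {
    private final GreetingRepository repository;

    GreetingService(final GreetingRepository repository) {
        this.repository = repository;
    }

    List<String> search(final String query) {
        final String normalized = query.toLowerCase(Locale.ROOT);
        return repository.findAll().stream()
                .filter(greeting -> greeting.toLowerCase(Locale.ROOT).startsWith(normalized))
                .toList();
    }
}
```

A @RestController would only unpack the HTTP request and delegate to GreetingService, and a Spring Data adapter would implement GreetingRepository; swapping either leaves the business logic untouched.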

Use Spring Boot Starters – Auto Configurations

Many say that Spring Boot is a very opinionated framework since it imposes a bunch of auto configurations and conventions on how you should write and structure your code. I, personally, do not see this as a negative thing, but rather something that can boost your productivity significantly.

Prior to the Spring Boot era, Spring was known for its notoriously complex setup, which could easily take days to configure and adapt to the specific underlying application infrastructure, and once you had set it up, you would not be too happy about changing it often. Spring Boot provides a bunch of starter wrapper dependencies for specific libraries and technologies which already implement much of this setup for you, so that you can simply tell Spring which modules you want to include, for example, what type of web layer and persistence layer you want to use, and have a working application infrastructure in a matter of seconds instead of, potentially, days.

This is what’s often referred to as auto-configuration in Spring Boot. All the auto configurations for different technologies are further configurable through application.properties, or you can even override certain beans from the auto-configuration to adapt them to your specific needs. There are also ways to disable default configurations completely, such as the Spring MVC and Spring Security specific annotations @EnableWebMvc and @EnableWebSecurity, which disable the default setup and give you full control of the configuration.

Sometimes this is preferable, but usually, I find it safer to rely on the default configuration and override it as needed to achieve the desired behavior. Another very cool thing about Spring Boot auto configurations is that Spring Boot provides you with the tools to write your own, which becomes extremely useful when your organization has a set of specific coding standards or even common libraries for which you would like to add custom auto configurations to make them easier and faster to integrate into new applications.

Auto configurations are great and they make our lives much easier, so you should rely on them whenever possible.
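As a rough sketch of what such a custom auto-configuration might look like (the GreetingClient class and the URL here are hypothetical, not from any real library):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class GreetingClientAutoConfiguration {

    // The @ConditionalOnMissingBean guard is what keeps this overridable:
    // an application that declares its own GreetingClient bean wins.
    @Bean
    @ConditionalOnMissingBean
    public GreetingClient greetingClient() {
        return new GreetingClient("https://greetings.example.com");
    }
}
```

For Spring Boot to pick this up from a library jar, the class is typically registered in META-INF/spring.factories (or, in newer Boot versions, in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports).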

Importance of transactional data handling

A very important topic, often overlooked by younger and more inexperienced developers, is transactional data handling when communicating with a datastore. Data consistency in highly concurrent environments is a very hard problem in computer engineering and should be treated with great care. Most of the time, we want the changes to our datastore to be executed in a transactional manner, meaning either everything executes successfully or, if there is an error, nothing does. Relational databases, for example, are widely used because of their solid transactional properties, among many other benefits. This is a very important property that ensures our data does not end up in an inconsistent state. There are several transaction isolation levels that tell relational database engines how to handle concurrent transactions which could potentially be applying changes to a shared set of records.

They often represent a trade-off between performance and potential data inconsistency anomalies, but that is, however, beyond the scope of this blog post (read more here https://vladmihalcea.com/tag/isolation-levels/). What is important for us at the moment is that Spring offers powerful tools to work with transactional data handling with minimal effort. 

The most common and easiest way is by using @Transactional annotation which is auto-configured by default depending on your persistence layer technology and auto-configuration. It is enough to annotate a method inside a Spring Managed Bean with @Transactional and Spring will make sure that all the communication downstream that method is executed in a transactional context and that it is communicated properly to the underlying database driver. It is also possible to set the annotation on a class level, meaning that all the methods will be executed with transactional context. 

Earlier I wrote about how Spring generates proxies for your annotated classes to handle the boilerplate logic automagically. The same mechanism is at work with the @Transactional annotation. There are some potential pitfalls I should mention here. Since the boilerplate logic for opening and closing transactions is handled by the proxy before and after the actual method call, only public methods can effectively be annotated with @Transactional. More importantly, you should make sure that the methods representing entry points to your persistence logic are always annotated with @Transactional. Most of the time you want to execute persistence logic in a transactional context, and there is nothing to lose compared to skipping @Transactional by default and accidentally creating a vulnerability in your data consistency.

Additionally, one caveat with proxies is self-invocation: if you call a non-annotated public method, and from within it call an annotated method in the same class, the transactional context won’t apply, because the internal call never went through the proxy, meaning a transaction never got opened. Another reason why it’s good to use @Transactional by default is that, if you don’t, Spring will open a transaction for each database call individually, which can be wasteful because of the added overhead. Even if your method contains only one database call, it is good to annotate it for consistency’s sake, because the underlying result is still the same. Also, the @Transactional annotation offers a readOnly flag. You should set it to true when executing read-only queries, because it notifies the database engine that it can optimize the transaction: reads are much simpler to synchronize than writes, writing to the database’s transaction log can be avoided, and so on.

In conclusion, a good rule of thumb when working with Spring and relational databases or other data stores that support this is to prefer using @Transactional by default. Just be careful to use it on the persistence layer, where it belongs, and not in some arbitrary/incoherent place. (e.g. @Controller class).
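To tie these rules together, here is a hedged sketch of a persistence-facing service class (the GreetingPersistenceService, Greeting, and GreetingRepository names are hypothetical): @Transactional sits on the entry points to the persistence logic, with readOnly = true on the queries.

```java
import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class GreetingPersistenceService {

    private final GreetingRepository greetingRepository;

    public GreetingPersistenceService(final GreetingRepository greetingRepository) {
        this.greetingRepository = greetingRepository;
    }

    // Read-only entry point: lets the engine skip write-oriented bookkeeping.
    @Transactional(readOnly = true)
    public List<Greeting> findAll() {
        return greetingRepository.findAll();
    }

    // Write entry point: all saves commit together or roll back together.
    @Transactional
    public void saveAll(final List<Greeting> greetings) {
        greetings.forEach(greetingRepository::save);
        // Note: calling this method from a non-annotated method of this same
        // class would bypass the proxy, so no transaction would be opened.
    }
}
```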

Importance of validation

System input validation is one of those 101 application development principles which sometimes doesn’t get as much attention as it should. It can easily get overlooked when one is in rapid development mode. Validation is the front line between your system and user input data, and we should take it very seriously to make sure that this data is in accordance with the business rules and constraints of our system.

If you imagine a read-only system, where users don’t submit nor modify any data, you quickly realize how much complexity is removed from the system itself. No need for concurrent transactional boundaries and synchronization mechanisms, no validation required, much much easier to secure the application since the absence of writes eliminates a plethora of attack vectors and much easier to change your system with time since it always relies on the same underlying data format.

These are all the reasons why validation is extremely important and should be implemented with care. It’s not just about making a few fields required or setting a maximum length or size; we need to think bigger and consider how a particular data instance will fit into the existing system. There could be global validation rules to consider, such as allowing something to be saved only if some criteria in the already existing data are met, or validating the state of some fields with respect to a different part of the object state. The security aspects of the data should also be considered, and the data should be properly analyzed and sanitized before passing it into the upper layers of the system.

Spring offers us multiple ways to handle data validation, but the most useful one and the one I am going to give an example of is the JSR-303/JSR-380 java specification which Spring supports and implements. It is a declarative bean validation API which means that we use annotations to add validation metadata to our object fields which are then evaluated at runtime.

This is a great method to handle validation since it is very easy to decouple the implementation. Basically, you use the standard validation primitives such as @NotNull, @NotBlank, @Size, or @Length, and if you need anything more advanced, you can easily implement custom annotations and bind a validator component to them, all in accordance with the JSR-303 spec. I think this is much cleaner than the traditional Validator interface in Spring, for example, since the implementations are decoupled and you do not have to auto-wire validator components into your beans and call them programmatically, or use binder configurations to connect the type with a specific validator implementation.

Traditional validator component example:

@Component
public class SomeDataFormValidator implements Validator {

    private static final int MAX_VALUE_LENGTH = 16;

    @Override
    public boolean supports(final Class<?> clazz) {
        return SomeDataForm.class.isAssignableFrom(clazz);
    }

    @Override
    public void validate(final Object target, final Errors errors) {
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "valueField", "field.required");
        final SomeDataForm form = (SomeDataForm) target;
        if (form.getValueField() != null
                && form.getValueField().length() > MAX_VALUE_LENGTH) {
            errors.rejectValue("valueField", "field.max.length",
                    new Object[]{Integer.valueOf(MAX_VALUE_LENGTH)},
                    "The value must not contain more than " + MAX_VALUE_LENGTH + " characters in length.");
        }
    }
}

JSR-303/380 validation example:

public class SomeDataForm {
    private static final int MAX_VALUE_LENGTH = 16;

    @NotBlank(message = "{somedataform.valuefield.notblank}")
    @Size(max = MAX_VALUE_LENGTH, message = "{somedataform.valuefield.size}")
    private String valueField;

    // getter, setter, equals, hashcode
}

IMPORTANT: the controller method which receives the data, for example, “public ResponseEntity<Void> post(@Validated @RequestBody SomeDataForm someDataForm)”, must annotate the parameter with @Valid, the original specification annotation, or with @Validated like in this example, which is Spring-specific and supports additional functionality such as validation groups.
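For completeness, a minimal controller sketch wiring this in could look as follows (the endpoint path is hypothetical); with @Validated on the body parameter, a constraint violation rejects the request before the handler body ever runs:

```java
@RestController
@RequestMapping("/some-data")
public class SomeDataController {

    @PostMapping
    public ResponseEntity<Void> post(@Validated @RequestBody final SomeDataForm someDataForm) {
        // By the time we get here, someDataForm has passed all declared constraints;
        // violations are turned into a MethodArgumentNotValidException upstream.
        return ResponseEntity.noContent().build();
    }
}
```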

Now, if we need a more complex validation rule, a custom JSR-303/380 validator implementation could be as follows:

@MayContainOnlyABCDCharacters
public class SomeDataForm {
    private static final int MAX_VALUE_LENGTH = 16;

    @NotBlank(message = "{somedataform.valuefield.notblank}")
    @Size(max = MAX_VALUE_LENGTH, message = "{somedataform.valuefield.size}")
    private String valueField;

    // getter, setter, equals, hashcode
}

@Constraint(validatedBy = MayContainOnlyABCDCharactersValidator.class)
@Target({ElementType.FIELD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface MayContainOnlyABCDCharacters {
    // message, groups and payload attributes are required by the JSR-303 spec
    String message() default "{somedataform.maycontainonlyabcd}";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

public class MayContainOnlyABCDCharactersValidator
        implements ConstraintValidator<MayContainOnlyABCDCharacters, SomeDataForm> {

    @Override
    public void initialize(final MayContainOnlyABCDCharacters constraintAnnotation) {
        // NOOP
    }

    @Override
    public boolean isValid(final SomeDataForm value, final ConstraintValidatorContext context) {
        final var valueField = value.getValueField();
        if (valueField == null || valueField.isBlank()) {
            // leave this validation aspect to other annotations
            // since it is not this validator's concern
            return true;
        }

        return valueField.matches("^[ABCD]+$");
    }
}

Testing

Automated software testing should be mandatory. All great software engineering practices include automated testing as an important aspect of the solution. Tests ensure that our code runs as expected and make refactoring and code change much safer. Change is inevitable, and you could argue that tests add overhead because you have more code to maintain, but there’s nothing worse than having to change a crucial part of a system that has been in production for some time and doesn’t have proper tests associated with it.

Without tests, you have to check everything manually, ideally multiple times and by multiple different people (peer testing), because you have no way of telling whether you violated some of the imposed business invariants. This is a very error-prone, tedious process, and tests help enormously in eliminating a large part of it since they encode and reflect the current business rules. Well-written tests can often serve as a way of documenting your software. This is especially true for test-driven development, where the idea is that you describe your business rules through a test suite before starting work on the actual concrete implementation.

Spring offers us some great infrastructure in conjunction with the JUnit testing framework that we can utilize to write tests more easily. There are discussions about how much of the code should actually be covered with tests and what should be the ratios between written unit, integration, and end-to-end tests. For business and enterprise applications written in Spring, and when testing the backend specifically, I find integration tests to be very useful since they test complete backend flows, different component interactions, setups, and persistence layers all in one.

If we write our code as mentioned above, that is, with as much of the business logic decoupled from Spring as possible, then we can use integration tests for complete flows and only write unit tests for our business-oriented classes. This way, the code and its edge cases become much easier to identify and test. From my past experience, code that is very easy to test is often very good, clean code. In the same sense, tests can also be used to identify code smells and flaws in your solution/code design.

Here’s an example of a simple integration test for a REST API endpoint that performs a search on a Greeting resource. We will use the MockMvc infrastructure provided by Spring with its default auto-configuration for easy HTTP request and web context setup, plus the @SpringBootTest annotation, which is responsible for setting up the application context to simulate a real Spring application environment for the test to run in.

@SpringBootTest
@AutoConfigureMockMvc
public class GreetingControllerTest {

    private final MockMvc mockMvc;
    private final ObjectMapper objectMapper;

    @Autowired
    public GreetingControllerTest(final MockMvc mockMvc,
                                  final ObjectMapper objectMapper) {
        this.mockMvc = mockMvc;
        this.objectMapper = objectMapper;
    }

    @Test
    public void searchGreetingsExpectStartsWithMatchTest() throws Exception {
        final int expectedResultsSize = 1;
        final String expectedFoundGreeting = "Pozdrav";

        final MvcResult mvcResult = mockMvc
                .perform(
                        get("/greetings/search")
                                .param("q", "poz")
                )
                .andExpect(status().isOk())
                .andReturn();

        final String responseBodyJson = mvcResult.getResponse().getContentAsString();
        final List<Greeting> greetingList = objectMapper.readValue(responseBodyJson, new TypeReference<List<Greeting>>() {});
        final String greetingExpression = greetingList.get(0).getExpression();

        Assertions.assertEquals(expectedResultsSize, greetingList.size());
        Assertions.assertEquals(expectedFoundGreeting, greetingExpression);
    }
}

Finally, we as developers should maintain the discipline to write the tests as much as possible, because in the long run, it will be very much worth it and if your management does not understand that, you should make them aware and not settle for the status quo.

Conclusion

To sum things up, we covered a wide range of topics, each of which could easily make for a separate blog post. We took a high-level overview of some common software development problems, good solution practices, and specifically what the Spring framework has to offer to aid us on the journey. To summarize it in a couple of points, we can state the following.

– Prefer constructor dependency injection for cleaner and safer class design.

– Prefer more modern bean registration methods such as declarative annotations or configuration factory methods over traditional XML configuration.

– Use bean types wisely, they can be a powerful tool.

– Always think about software components and modules as coherent and cohesive units which should be separated by a clean interface.

– Keep your configuration and environment variables externalized and preferably centralized instead of hardcoding values into source code, always hide sensitive data.

– Design your classes with cohesion in mind and don’t make them just data carriers (structs), but rather focus on proper encapsulation and capturing internal behavior within the same unit.

– Try to separate your business logic from any concrete library/framework implementations and rely on the framework only in infrastructure sense as much as possible.

– Centralize and generalize your exception handling logic in a common code gate for easier maintenance.

– Keep your application ports and adapters, such as controller and repository layers, clean of business logic and let them handle the technical communication specifics.

– Use auto configurations for more productivity, they are your friend.

– Never neglect transactional boundaries and data consistency, data is fundamental to any application.

– Always be rigorous about constraining user input data so that you can develop a clean code model for handling the data downstream and try to rely on declarative programming models for better decoupling.

– Automated tests are a part of the product and not a “nice to have”.
