In his keynote ‘This is water’, Neal Ford observes that “yesterday’s best practices become tomorrow’s anti-patterns”. This seems to be the case for application servers like WebSphere, WebLogic and JBoss.
When these containers started to become popular (typically in large enterprises), servers took a long time to commission and configure. Containers were a way of deploying applications more quickly and with greater safety. But times change, and today’s patterns and teams favour lighter-weight, nimble solutions. Our ability to manage servers and networks through code is one of the key drivers of this change.
2011 seems to have been a tipping point, with articles advocating change (‘Stop Wasting Money On WebLogic, WebSphere, And JBoss Application Servers’) and data indicating that the change was already underway (‘The Death of WebSphere and WebLogic App Servers? New Infographic shows the Rise of OSS Java’).
Drivers for change
I think there are a number of drivers moving IT organisations to lighter-weight frameworks and tools, particularly open source ones. Some of these drivers are technology based but others are more subtle. 15+ years ago we were still trying to understand how to build web-based applications. Modern web applications have very little in common with applications built 10 years ago.
The sticker price is the most obvious driver. Licensing costs constrain where and how organisations can make use of commercial containers, which often leads to severe restrictions on non-production environments. Even where cost was not a problem, the effort involved in setting up a pre-production environment was very high. Given the cost, teams were encouraged to share environments with other teams, leading to conflicts and ‘unusual’ errors caused by conflicting requirements.
We Cannot Measure Productivity, but as developers we know when things are easier and more fun. Lighter-weight tools like Jetty and Netty make programming more enjoyable. Instead of an environment controlling our applications, our applications control the environment.
Firing up a Jetty server is pretty simple:
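A minimal sketch against the Jetty 9 embedded API; the port and handler body are illustrative:

```java
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws Exception {
        // The application owns the port -- no container configuration required.
        Server server = new Server(8080);
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                               HttpServletRequest request,
                               HttpServletResponse response) throws IOException {
                response.setContentType("text/plain");
                response.getWriter().println("Hello from embedded Jetty");
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join(); // block until the server is stopped
    }
}
```

A handful of lines replaces an application server install, and the whole thing ships as one deployable jar.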
This is a particularly important element of control: the application controls its HTTP(S) interface, rather than the interface controlling the application. I am a big fan of this approach, and Jetty has proven itself on a lot of projects.
Even if we cannot measure productivity, my experience is that using lighter-weight open source tools feels more productive. And deployments are definitely faster and more predictable.
I practice Test Driven Development. Testing code that runs in a container is hard. Very hard. Test coverage is quite often much lower than in other environments, and it is difficult to improve it without resorting to integrated system tests that involve containers.
Jetty, for example, makes it possible to test controllers in unit tests without running up a full server. For automated tests that do involve network activity, servers can easily be started on separate threads.
Deterministic test behaviour is critical. Timing loops that wait for external systems to be ‘ready’ make it difficult to be reliably deterministic. Calling a method to start a server is about as deterministic as it gets.
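The same pattern applies to Jetty, but it can be shown with the JDK’s built-in `HttpServer` standing in; the `/health` path is illustrative. `start()` returns only once the listener is up, so the test needs no polling loop:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class DeterministicServerTest {
    // Start a server and hand it back already listening -- no sleep/retry loop.
    static HttpServer startServer() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0); // port 0: pick any free port
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start(); // returns once the listener thread is running
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startServer();
        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/health").openConnection();
        System.out.println(conn.getResponseCode()); // 200
        server.stop(0);
    }
}
```

Because the test calls the start method directly, ‘server is ready’ is a returned value, not a condition to wait for.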
Application containers abstracted environmental differences behind logically consistent interfaces. An application would ask for a resource using a logical reference and, if the container was configured correctly, would receive an implementation to use.
I remember spending a lot of time writing application container configuration scripts to ensure that each environment (e.g. dev and qa) were consistently configured. Those scripts were driven from version controlled property files.
Container-less deployments typically take those same property files and deploy them with the application to the same effect.
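A sketch of that approach, assuming a hypothetical config directory holding one version-controlled property file per environment (file names and keys are illustrative):

```java
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class EnvironmentConfig {
    // Load the property file for a named environment, e.g. config/dev.properties.
    static Properties load(Path configDir, String environment) throws IOException {
        Properties props = new Properties();
        try (Reader reader = Files.newBufferedReader(
                configDir.resolve(environment + ".properties"))) {
            props.load(reader);
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a checked-out config directory.
        Path dir = Files.createTempDirectory("config");
        Files.writeString(dir.resolve("dev.properties"), "db.url=jdbc:h2:mem:dev\n");

        Properties dev = load(dir, "dev");
        System.out.println(dev.getProperty("db.url")); // jdbc:h2:mem:dev
    }
}
```

The environment name is the only thing that varies between dev, qa and production; the loading code, like the property files, is identical everywhere.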
Being able to scale a web application or service is a key cross-functional requirement. A single server just can’t deliver the levels of service and reliability organisations require.
Many applications are stateful: each request depends on the completion of some previous event. Application containers provide state replication between containers on behalf of applications, which means that state does not have to reside in a database or some other external store. This is a very complex problem to solve, and not having to burden application developers with it is important.
Current patterns push state out of the web and application tiers down into persistent storage. Being stateless means that we don’t have to worry about servers talking to one another; there are also limits to how many servers a distributed state model can accommodate.
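Externalised state can be sketched as session data keyed by a token in a shared store. Here an in-memory map stands in for Redis or a database, and all names are hypothetical:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Stateless web tier: any server instance can handle any request, because
// session state lives in a shared external store (a map stands in here).
public class SessionStore {
    private final Map<String, Map<String, String>> store = new ConcurrentHashMap<>();

    String createSession() {
        String token = UUID.randomUUID().toString();
        store.put(token, new ConcurrentHashMap<>());
        return token; // the client carries the token; servers carry nothing
    }

    void put(String token, String key, String value) {
        store.get(token).put(key, value);
    }

    String get(String token, String key) {
        return store.get(token).get(key);
    }
}
```

With the token travelling on each request, adding or removing servers requires no replication protocol between them.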
Servers and applications need to be monitored to make sure that they are healthy and delivering the expected levels of service. Having a consistent interface to query makes operational control easier. Application servers naturally provide a consistent interface that is agnostic to the applications they run.
Container-less applications need monitoring facades similar to the abstractions the container provided, because monitoring systems still need a consistent interface to query. It now becomes the application’s responsibility to provide that interface.
Fortunately, monitoring has shifted from proprietary interfaces to more open standards (typically HTTP based). Lightweight frameworks like Dropwizard generate warnings if monitoring interfaces are not implemented, and provide components that make implementing them pretty easy.
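A Dropwizard health check, for instance, is a small subclass; the `DatabaseClient` type and its `ping()` call below are hypothetical stand-ins for whatever resource the application depends on:

```java
import com.codahale.metrics.health.HealthCheck;

// Hypothetical dependency the application needs to report on.
interface DatabaseClient {
    boolean ping();
}

// Dropwizard surfaces registered checks on its admin port automatically.
public class DatabaseHealthCheck extends HealthCheck {
    private final DatabaseClient database;

    public DatabaseHealthCheck(DatabaseClient database) {
        this.database = database;
    }

    @Override
    protected Result check() throws Exception {
        if (database.ping()) {
            return Result.healthy();
        }
        return Result.unhealthy("Cannot reach database");
    }
}
```

The check is registered once at startup, e.g. `environment.healthChecks().register("database", new DatabaseHealthCheck(db))`, and the framework exposes the results over HTTP for whatever monitoring system is watching.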
So what are the alternatives to running applications in an application container? Here are some popular and robust options that immediately come to mind. There are many more:
The container-less options are very compelling. For me, the key for any technology is the community driving its adoption. Commercial or open source, a strong community is a good indicator that developers enjoy working with the technology and are keen for others to share their enjoyment.