Working with Go projects using Bazel. Building in and out of a container.
Prefer "technical health" over "technical debt" as a metaphor when encouraging developers to change their practices for better code.
Run production on a workstation or embrace a new way of working…
Orchestration and Choreography are often confused. This is how I think of them.
All change carries an element of risk but not all changes are equal.
How do projects get to a million lines of code?
The Builder pattern has become very popular over the last few years but there is a growing tendency to use it everywhere. Here are some of the problems, and some alternatives that you might find a better fit.
What can your commit history tell you about the health of your project?
Keeping a domain model clean is hard. Implementing an anti-corruption layer with the right separation of concerns can help.
Logging - one of the most crucial aspects of any system. But how well is your logging tested?
A clean/DRY way to style content based on model data
Another look at a classic OO pattern
Value types are an often overlooked OO and DDD technique. Here is why I think they are undervalued.
Working with small teams is a lot of fun and I find it fairly easy to keep track of what is happening with version control and build systems. Errors and failures don’t come up that often and when they do they can quite often be solved there and then. On larger projects, or when working in a large organisation, it’s impossible to keep track of everything. There are too many moving parts and changes. Incidents are more frequent and their impact much larger. A broken build or build system can affect 10s or 100s of people. For these larger development projects I find I have to collect and chart data, looking for trends and anomalies, and then delve deeper into the data and systems if there are problems.
How can we get the benefits of IoC containers at a higher level - the services that we deploy into environments? Typical service tiers are implemented with a fronting load balancer that allocates servers to satisfy requests. Applications or services that need these services are given the domain or IP information of the load balancers, and the load balancers are given details of the servers running instances of the services that they need. Essentially each system with dependencies news up an instance of its dependencies. Inverting this dependency requires a container, but it leads to some interesting advantages.
In his keynote ‘This is water’ Neal Ford talks about “Yesterday’s best practices become tomorrow’s anti-patterns”. This seems to be the case for application servers like WebSphere, WebLogic and JBoss.
Browsing some blogs recently I came across some interesting C++11 lambda code and thought it would be cool to try it out for myself. The idea of spinning up a project or even putting a new test in an existing project did not feel right. Wouldn’t a C++ REPL be useful for just these occasions?
I have been spending time recently writing command line apps in C++11. Each time I wanted a way of handling command line arguments flexibly. I chose to use the boost::program_options library. The documentation is pretty good but there are some assumptions (aliased namespace) and the example code is broken up with paragraphs of text explaining what the code does.
I am about to embark on a new Java project so I thought I would take a look at the current state of build tools for Java projects. First I thought I would draw up some assessment criteria to use as a loose guide. This is what I came up with:
I like to commit code frequently when working on a project. I also like to use the command line for building, testing and committing code. So I thought it would be nice to have a way to check the build status from the command line. I have been wanting to write something in C++11 and a small lightweight command line tool seemed like a good opportunity.
Continuous Deployment is the act of automatically deploying an application every time the build is successful. I am currently working with a development team that is working towards continuous deployment as part of their continuous delivery adoption plans. The system involves several internal web services which seemed like a good place to start working on not only automating the deployments but maintaining a very high degree of up-time during those deployments. Automating deployments involves both development and techops groups so I thought I would search for some worked examples that would help illustrate the techniques and steps required. I found several blogs and articles talking about different approaches but no worked examples.
Working on a small program recently I found a quirk in next_permutation. The program read a sequence of characters from the command line and then tried to find words by testing each permutation. For a sequence of 3 characters there are 6 permutations.
I like Emacs. Some people like VI(M) but emacs has been in my toolbox for a long time and I feel very at home working with it.
Just over 20 years ago Jack W. Reeves wrote an article in the C++ Journal entitled “What is Software Design?” and I missed it. Not only that but no one thought to point out that I was missing a very important article. An article that challenged and changed/clarified my mental model of software design and construction - 20 years after it was published. A copy of the original article can be found here. I guess it is better late than never and it demonstrates that some things stay relevant and important. Sometimes they remain controversial.
Removing duplicate code is a great way to improve the internal quality of your application code. Duplications mean that you have more code than you should and are often the source of more subtle bugs of the “I’ve already fixed that ..” variety.
After watching Clang MapReduce – Automatic C++ Refactoring at Google Scale I was struck with the idea that this could help with the upgrade problem. Almost every application uses libraries. Those libraries need to be updated from time to time, but each time they are updated all the code using those libraries also needs to be updated. For development teams, finding time to upgrade to the latest libraries against competing functional updates is challenging. What if, as part of each release, a set of refactoring commands or programs accompanied the libraries? These refactoring scripts would automatically update the consuming application code to use the new libraries, saving time and money.
Refactoring is a key practice for improving code hygiene. Making refactoring part of your next project is one thing, but if you have just joined a team or project with a significant amount of debt, how do you work on making things better? Over the last few months I have been assessing a number of code-bases and speaking about technical debt management. While preparing for these engagements I realized that combining two code and project metrics could help focus efforts on the code that would deliver the most benefit. Toxicity is a combined measurement of static code analysis metrics. Volatility is a measure of changes made to files within a code-base over time. By combining these two measures we can create a source file scatter chart correlating toxicity against volatility.
oh-my-zsh is a framework for managing zsh configuration. The default configuration adds some interesting enhancements.
Small systems grow with success. As these systems grow they often take on more and more functionality, either directly in the main system component or in sub-systems. As the systems grow in complexity and responsibility, their database requirements grow at a similar rate, becoming more and more complex.
Reverse proxies have been around for a very long time and, depending on your application, are either interesting additions or a key element of your architecture.
During a recent discussion about open source development we wondered how long these projects lasted - in particular, whether there was a rapid drop-off in activity.
I came across this post by Michael Norton and thought I would reference it here: Stabilising Velocity Michael makes some keen observations on both causes and effects of unstable velocity.
If you have already added gem 'mysql2' to your Gemfile but get a message saying that it is missing when you try…
I was playing about with Capistrano over the weekend. I wanted to automate the deployment of a Rails application to my server. The server was (I thought) just about ready to accept the app but I did not want to go through another manual deployment. I thought I would take the opportunity to script the deployment. The first task I set myself was to create a production MySQL database. Searching for how to do this threw up lots of interesting information about building and deploying database.yml files, but not much about configuring MySQL. The first set of tasks I came up with were:
Progressive Enhancement is a web development technique or pattern. The basic premise is that a web site should be accessible to all users and then to overlay additional functionality based on the client’s capability.
Haml is a markup language for generating XML and other markup - most popularly HTML. Over the past few weeks I have been writing a few Ruby on Rails applications and chose Haml as the templating language. For someone who has traditionally avoided positional languages this was a strange choice. Having written a few simple applications I find that writing HTML in Haml is both straightforward and intuitive. Good HTML is naturally hierarchical, and having this structure both encouraged and enforced in Haml feels right.
Taking a leaf out of the XP book, and in particular test driven development, I have had some successes in using this idea when giving a presentation. I have dubbed the idea ‘Test driven talks’. The basic idea is to quickly canvas the audience for things that they would like covered during the presentation, once these have been captured on a whiteboard or flip-chart.
After tweeting that I wanted to have suggestions for blog entries I got a single reply asking for my thoughts on DAO and unit testing – essentially asking should DAO be unit tested.
I started writing this post quite some time ago but never got around to wrapping up the loose ends. The article is really a summary of what I learnt during a pretty intense media web site development project. Since pictures equate to 1000s of words, here is my effort to express how technologies can work symbiotically to deliver value that is more than the sum of their parts.
I just came across this post and want to remember it, so posting here: http://doublebuffered.com/2009/02/11/optimizing-build-times-for-large-c-projects/. I am most interested in the uplift in compilation time from removing unused #includes and the build reductions for SSDs.
This is a very interesting article http://queue.acm.org/detail.cfm?id=1814327 which demonstrates that we should not take things for granted - including algorithms that have been around for a long time.
When performance testing a web application (as in raw operations per second) I have seen many people try to benchmark their new system against the agreed performance level right off the bat. The problem with this approach is that most applications need to be tuned to get the most out of them. Optimistically firing off 100s of requests will most likely cause the server to choke and, if you are unlucky, die in a gibbering heap.
Post/Redirect/Get, or Redirect after Post, is an HTTP interaction pattern that can be used when developing web applications. I have been mentioning it quite a few times in my consulting work and thought I would take a stab at diving a little deeper into the pattern and its benefits.
Why do application developers think that adding more functionality is always a good thing for the user? When I first started working on GUI applications (Windows 1.0) we worked very hard to get sub-second response times. In theory, applications running on multi-core 2+GHz processors should outstrip 286 CPUs running in the MHz range. What seems to have happened instead is that applications have maintained the same sort of response times (in general), taking 1 or more seconds to perform a task.
Since joining ThoughtWorks in 2004 I have enjoyed Pair Programming and my programming skills have improved significantly. Working on a problem with someone else full time is one of those practices that is difficult to convince people of until they have actually tried it. Laurie Williams at NCSU has done some pretty interesting research into the effectiveness of pair programming, which is well worth a read.
So after having the server nicely set up and ready for configuration I could not resist just ‘getting on with it’. I had grand plans to plan it all out and record everything (good practice), but each step seemed so simple that it felt easier just to get stuck in.
I have been running my own server for some time now but my requirements have changed and the current underlying VM architecture does not conveniently support what I want to do.
If you have been in software development for a while this might make you chuckle :)
I have to confess that I am an IntelliJ IDEA fan. The key bindings take a little getting used to - especially on a Mac - but I find I am most productive using it.
I have found myself drawing the classic cost of change graph a few times recently, so thought I would blog about it. The graph was popular a few years ago in explaining the difference in cost of change between an eXtreme Programming (XP) team and a waterfall team.
This is my first screencast so all and any comments welcome!!
These ideas were presented at the ThoughtWorks Agile Southeast conference in Atlanta. The idea for these diagrams came about after the first time I spoke about web test driven development at Agile East in Philadelphia and New York. I am striving to show how introducing a level of abstraction affects development costs.
I have been on a number of Java projects recently and one of the things I end up brewing for each of these projects is some enhanced management of runtime properties.
Not that I have seen anything of the city, but I now find myself on the west coast after arriving on the east coast just over a week ago.
Over the past year I have been involved in preparing quite a few documents, proposals, presentations etc. I have also been asked to review quite a few such documents. As an author or reviewer I like to test what I am working on against the 5 Ws and one H to see if the important aspects have been covered. The Wikipedia article provides a good background and history, including the 1902 reference to Rudyard Kipling’s Just So Stories.
This post is a little overdue :(
Quotes: things that inspire and deserve to be remembered.