Monday, October 21, 2013

Small cause

Modern software development is very different from the low-level programming you had to endure in the early days of computing. Computer science gave us concepts like "high-level programming languages" that abstract away many of the menial tasks needed to achieve the desired functionality, such as register allocation, memory management and control structures. But even when we enter programming at this "high level", we build whole cities of skyscrapers (metaphorically speaking) on top of those foundation layers. And still, if we make errors in the lowest building blocks, the whole structure can come crashing down like a tower in a game of Jenga.



This story illustrates one such error in a small, low-level building block that had widespread and expensive effects. And while the problem might seem perfectly avoidable in isolation, in reality there is no silver bullet that will solve such problems completely and forever.




The story is altered in some aspects to protect the innocent, but the core message wasn't changed. I also want to point out that our company isn't connected to this story in any way other than being able to retell it.



THE SETTING



Back in the days of unreliable and slow dial-up internet connections, there was a big company-wide software system that was supposed to connect all clients, desktop computers and notebooks alike, to a central server that published important news and documentation updates. The company developing and using the system had numerous field workers who connected from a hotel room somewhere near their next customer, let the updates run and disconnected again. They relied on extensive internal documentation that had to be up to date when they worked offline at the customer's site.



Because the documentation was too big to be transferred completely after each change, but not partitioned enough to use a standard tool like rsync, the company developed a custom solution written in Java. A central server kept track of every client and all updates that needed to be delivered. This was done using a Set, more specifically a HashSet for each client. Imagine a HashMap (or dictionary) with only a key but no payload value. The Set's entries were the update command objects themselves. With this construct, whenever a client connected, the server could simply iterate over the corresponding Set and execute every object in it. After the execution had succeeded, the object was removed from the Set.
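To make the construct concrete, here is a minimal sketch of that server-side bookkeeping. The class and method names (UpdateServer, UpdateCommand, execute()) are assumptions for illustration; the story doesn't name them.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

interface UpdateCommand {
    // Delivers one news or documentation update to the connected client.
    void execute();
}

class UpdateServer {

    // One HashSet of pending update commands per client.
    private final Map<String, Set<UpdateCommand>> pendingByClient =
            new HashMap<String, Set<UpdateCommand>>();

    void schedule(String clientId, UpdateCommand command) {
        Set<UpdateCommand> pending = pendingByClient.get(clientId);
        if (pending == null) {
            pending = new HashSet<UpdateCommand>();
            pendingByClient.put(clientId, pending);
        }
        pending.add(command);
    }

    // When a client connects: execute every pending command, then remove it.
    void onClientConnected(String clientId) {
        Set<UpdateCommand> pending = pendingByClient.get(clientId);
        if (pending == null) {
            return;
        }
        // Iterate over a copy so we can remove from the original set while looping.
        for (UpdateCommand command : new HashSet<UpdateCommand>(pending)) {
            command.execute();
            pending.remove(command); // relies on a stable hashCode() of the command
        }
    }
}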



GOING LIVE



The system went live and worked right away. All clients were served their updates, and the computing power requirements of the central server were low because only a few decisions needed to be made. The internet connection was the bottleneck, as expected. But it soon turned out to be too much of a bottleneck. More and more clients didn't get all their updates. Some were provided with the most recent updates but lacked older ones. Others only got the older ones.



The administrators asked for a bigger line and got it installed. The problems thinned out for a while but soon returned as strong as ever. Apparently it wasn't a problem of raw bandwidth. The developers had a look at their data structure and noted that a HashSet doesn't guarantee any traversal order, so old and new updates could easily get mixed up. But that shouldn't have been a problem, because once the updates were delivered, they would be removed from the Set. And all updates had to be delivered anyway, regardless of age.
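A tiny experiment illustrates that property; the update names here are made up for illustration.

import java.util.HashSet;
import java.util.Set;

public class IterationOrderDemo {
    public static void main(String[] args) {
        Set<String> updates = new HashSet<String>();
        updates.add("update-001-oldest");
        updates.add("update-002");
        updates.add("update-003-newest");
        // The traversal order depends on the hash codes and the table size,
        // not on insertion order, so old and new entries may come out interleaved.
        System.out.println(updates);
    }
}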



GOING DOWN



Then the central server instance stopped working with an OutOfMemoryError. The heap space of the Java virtual machine was used up by update command objects sitting in their HashSets, waiting to be executed and removed. There were clearly far too many update command objects for any reasonable explanation. The part of the system that generated the update commands was reviewed and double-checked. No errors related to the problem at hand were found.



The next step was a review of the algorithm for iterating, executing and removing the update commands. And there, right in the update command class, the cause was found: THE UPDATE COMMAND OBJECTS CALCULATED THEIR HASHCODE VALUE BASED ON THEIR DATA FIELDS, INCLUDING THE COMMAND'S EXECUTION DATE. Every time the update command was executed, this date field was updated to the most recent value. This caused the hashcode value of the object to change. As a result, the update command object couldn't be removed from the Set, because the HashSet implementation relies on the hashcode value to find its objects. You could ask the Set if the object was still contained and it would answer "no", yet it would still include the object in every loop over its content.
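A hypothetical reconstruction of the flaw shows how it plays out; the field names (documentId, executionDate) are assumptions, not taken from the actual system.

import java.util.Date;
import java.util.HashSet;
import java.util.Set;

class UpdateCommand {

    private final String documentId;
    private Date executionDate = new Date(0);   // mutable field, part of hashCode()

    UpdateCommand(String documentId) {
        this.documentId = documentId;
    }

    void execute() {
        // ... deliver the update ...
        this.executionDate = new Date();        // mutates a field used in hashCode()
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof UpdateCommand)) {
            return false;
        }
        UpdateCommand other = (UpdateCommand) o;
        return documentId.equals(other.documentId)
                && executionDate.equals(other.executionDate);
    }

    @Override
    public int hashCode() {
        return 31 * documentId.hashCode() + executionDate.hashCode();  // the bug
    }

    public static void main(String[] args) {
        Set<UpdateCommand> pending = new HashSet<UpdateCommand>();
        UpdateCommand command = new UpdateCommand("doc-42");
        pending.add(command);

        command.execute();                              // hash code changes here

        System.out.println(pending.contains(command));  // false
        System.out.println(pending.remove(command));    // false: cannot be removed
        System.out.println(pending.size());             // 1: it still sits in the set
    }
}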



THE CAUSE



The Sets with update commands for the clients only ever grew in size, because once an update command object had been executed, it could no longer be removed, even though it appeared absent. Whenever a client connected, it was served all update commands since the beginning, over and over again, in semi-random order. This explained why sometimes the most recent updates were delivered while older ones were still missing. It also explained why the bandwidth was never enough and why all clients lacked updates sooner or later.



The cost of this excessive update orgy was substantial: numerous clients had leeched all the (useless) data they could get until the connection was cut, day after day, over expensive long-distance calls. Yet they still lacked crucial updates, which caused additional harm and chaos. And all this damage could be traced back to a simple programming error:



NEVER INCLUDE MUTABLE DATA IN THE CALCULATION OF A HASH CODE.
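One way out, sketched under the same assumed field names: base equals() and hashCode() only on the immutable identity of the command, and keep mutable state like the execution date out of the calculation.

import java.util.Date;

class UpdateCommand {

    private final String documentId;   // immutable identity, safe for hashCode()
    private Date executionDate;        // mutable state, deliberately not hashed

    UpdateCommand(String documentId) {
        this.documentId = documentId;
    }

    void execute() {
        // ... deliver the update ...
        this.executionDate = new Date();   // harmless: not part of the hash code
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof UpdateCommand
                && documentId.equals(((UpdateCommand) o).documentId);
    }

    @Override
    public int hashCode() {
        return documentId.hashCode();      // stable for the lifetime of the object
    }
}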