Apr 11 2020

Name that Algorithm: #2

Veterans in the software industry recognize code as a liability. Code is not an asset! Maintaining 10 lines of code is easier than maintaining 100, and 100 lines are easier to maintain than 1,000. By reducing the size of the code, we can squeeze bugs out of their hiding places.

Consider a massive function with a reduce buried in the middle.
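The original snippet isn't reproduced here, so this is a hypothetical reconstruction of the shape being described: a function doing several things, with an index-building reduce buried in the middle. The names (`getSomeValue`, `authorId`) are assumptions.

```javascript
// Hypothetical sketch: a function with a reduce buried in the middle.
function getSomeValue(books, authorId) {
  // ...validation, fetching, other unrelated work...

  // The buried reduce: builds an author-to-book lookup table.
  const booksByAuthor = books.reduce((acc, book) => {
    acc[book.authorId] = book;
    return acc;
  }, {});

  // ...more work that uses the index...
  return booksByAuthor[authorId];
}

const books = [
  { id: 1, authorId: 'a1', title: 'First' },
  { id: 2, authorId: 'a2', title: 'Second' },
];
console.log(getSomeValue(books, 'a2').title); // → "Second"
```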

This is a tiny reduce, but there is still cognitive complexity. We need to stop and work out, step by step, what is being done in a procedural manner. I argue that this is not truly functional despite the use of reduce; there is obviously not a whole lot of function composition happening. Even if you can argue that it really is “functional”, it’s functional with bad taste. It’s okay though, I can rip on this code because I’m the one who wrote it ;).

Some improvements that can be made:

  1. Improve the variable naming.
  2. Find a function to better describe the job.

The above two steps should implicitly lead us to function composition. As for the code specifically, it looks like a lookup table is being created, which allows for constant lookup time on books by an author. Let’s start by moving the function out into its own named function.
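A sketch of that extraction might look like the following (again, the concrete names are assumptions, since the original snippet isn't shown):

```javascript
// The buried reduce, extracted into its own well-named helper.
function indexBooksByAuthor(books) {
  return books.reduce((booksByAuthor, book) => {
    booksByAuthor[book.authorId] = book;
    return booksByAuthor;
  }, {});
}

const index = indexBooksByAuthor([
  { authorId: 'a1', title: 'First' },
  { authorId: 'a2', title: 'Second' },
]);
console.log(index.a1.title); // → "First"
```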

This relocation reduces the cognitive complexity in getSomeValue. We no longer need to puzzle over what the reduce is doing; we can clearly see that books are being indexed by author. Next, let’s focus on finding a better Lodash function. Searching through the docs, I find _.keyBy promising.

We were able to take a function of four lines and turn it into a function of one line – a savings of 75%. The solution is also much more declarative, removing much of the cognitive complexity.

Let’s say the codebase grows. If we have ten resources, each with an indexed attribute, we could cut forty lines down to ten. The savings compound further as more logic is added and new abstractions are built on top.

For instance, maybe we want to filter out null authorIds in indexBooksByAuthor. We could create a function pipeline that first checks for undefined values and then applies the keyBy. We could then leverage this pipeline throughout all ten resources. How do we find these ten resources? Thankfully we renamed these functions to something useful! All we would do is fuzzy search for indexBy and all occurrences would appear like magic!

Now imagine a world where we did not do the refactor – we would have had ten sloppy reduce functions. Imagine the pain of tracking each one down. This creates a large risk of missing some of the reduces. Additionally, this hunt would have to occur whenever a change needs to be made. Need to add a null check? Time to look for all ten occurrences again. By not abstracting the function, technical debt surely compounds and inevitably leaves lots of room for bugs.

Hiding complexity in well-named functions gives little victories in readability and searchability. In turn, these little victories compound into a more maintainable system.