Thursday, July 17, 2014

#NoEstimates and Travel

Today I tweeted:

1970's: Trips require $$ estimate & traveler's checks
2014: Bring a credit/debit card & only convert what I use
#NoEstimates #ContinuousDelivery


The idea I was looking at is pretty simple: when you have continuous flow, you don't need to predict. It changes your behavior and allows you to hit your mark exactly instead of ending up with a lot of leftover money you need to re-exchange later.

Batch vs Continuous Flow

To me this echoed trends I've experienced in software development. As I've been able to approach a more continuous flow I have spent less energy, effort & time predicting what I'll do. I simply exchange work for features as I need them.

Unfortunately, there is still a lot of room for misinterpretation on Twitter, so I wanted to clear up some misunderstandings.

An example would be useful right about now:


Let us look at two sample situations (numbers adjusted for inflation)


Robert (1970's)

Let's say that in the 1970's Robert goes to Europe. Let's look at the numbers:
Bank Account: $20,000
Traveler's checks: $2,000
Expenses in Europe: $2,500


Marsha (2014)

Let's compare this to 2014 when Marsha decides to travel.
Bank Account: $4,000
Payment method: Debit Card
Expenses in Europe: $3,000

Abundance vs Flow

One misconception is that this only works if you are rich and will never run out of money. However, this clearly isn't the case with Robert & Marsha. Robert has much more money than Marsha, yet he's in trouble, stuck in Europe with no access to his money. He will tell you how he should have planned better. Maybe he didn't take restaurants into account properly, or taxis, or souvenirs. All of this could have been avoided if only he had spent more time planning and had better estimates.

But Marsha didn't have these issues. She had less money and she didn't plan her expenses at all. This doesn't mean she didn't have expenses or a limit on how much she could spend; it simply means she didn't have to go to the bank ahead of time and say "I would like $3,000 worth of traveler's checks for my trip."

This is a pretty well understood idea in the financial world. It is the difference between liquidity & solvency. Both matter.

If you are insolvent, it doesn't matter how good you are at estimating; you are broke.
If you have high liquidity, it also doesn't matter how good you are at estimating; you have access to your money.


Estimations vs Awareness

Estimation is the ability to predict what I'll do. Robert needs this to travel in the 1970's; Marsha doesn't need it in 2014.
However, this does not mean Marsha can just go all around Europe spending money like crazy. She needs to be aware of how much she is spending. She needs to understand the conversion rates and know when she is spending too much.
Her decisions matter.
She needs to be aware, but that doesn't mean she needs to be able to predict.


Making Smart Decisions

Marsha spent more than Robert. 
Does that mean she spent her money better or worse?
It's impossible to tell from the budget alone. Making smart choices is important. 
Sometimes people believe that estimates are the only way you can make smart choices.

This is crazy. 
People make great decisions every day without estimates.
Of course, there are also people who know exactly what is going to happen and do it anyway :-|

Dieting

When people are trying to lose weight they spend a lot of time planning meals and counting calories. They will tell you how important that is, but people who are thin and have been thin their whole lives don't tend to do this. They just eat sensibly; they know when they are full and stop eating.

I must admit I have a bit of resentment toward these people, as I have never been one of them myself. This jealousy helps me remember and sympathize with people who are struggling with estimates and refuse to believe they are unneeded or a poor use of time and energy.

Travel Tips

If you are traveling in the 1970's, here are some travel tips. When you enter a country, the immigration officer will check that you have enough money for the trip. One way to handle this is to take out $1,000 in traveler's checks and say that you lost them. Then you will get issued another $1,000 in checks. Be careful to mark the original checks, because if you spend them you will get charged with fraud, but ...
wait, where are you going?
 ... these are helpful tips 
... oh you don't live in the 1970's?

Well, maybe you'd like to hear some better ways of estimating?

Monday, June 30, 2014

Llewellyn’s strong-style pairing

Arlo Belshee once labeled my particular style of Pair Programming "Llewellyn's strong-style pairing"

The golden rule for this style of pairing is:

"For an idea to go from your head into the computer 
it MUST go through someone else's hands"

The Process

This style is actually very close to an actual navigator/driver situation in a car or on a boat, with all the high-level commands coming from the navigator and the lower-level implementation handled by the driver.

This style of programming is all about increasing communication and collaboration. Verbally communicating code and editor commands is a skill like anything else, but it is one that many people have not developed yet. Don't worry, it's pretty easy to gain, and most people pick up the basics in a few hours.


The Driver

As a driver (at the keyboard), this is a fairly simple and peaceful place to be. I have been the driver in languages and editors I was completely unfamiliar with, without problem or incident. However, there are two important things that are required for a happy driver experience.

"Trust your navigator"
When you are the driver, trust that your navigator knows what they are telling you. If you don't understand what they are telling you, ask questions; but if you don't understand why they are telling you something, don't worry about it until you've finished the method or section of code. The right time to discuss and challenge design decisions is after the solution is out of the navigator's head, or when the navigator is confused and unable to navigate.

"Become comfortable working with incomplete understanding"
Even if you trust the navigator, the navigator might not be comfortable navigating you in this strong style. To compensate, they might try to explain everything to you before you can actually start to code. This slows things down a lot and, depending on your level of knowledge, can take hours or even days before you can even start. Don't worry about knowing everything; you will learn as you go. You might not know the language, OS, editor, code, or even the problem space you are working in. It's ok, you will soon.


"What if I have an idea I want to implement?"
Great! Switch places and become the navigator.


The Navigator

As the navigator you have two main jobs.

  1. Give the next instruction to the driver the instant they are ready to implement it
  2. Talk in the highest level of abstraction the driver can understand

The navigator is essentially managing a ToDo list in their head. As you go through the code you keep adding things to the list, and as the driver is typing you keep removing the completed items. The key is that while you are handling and tracking 2-20 things at a time, the driver is only ever responsible for 1 thing. This is what lets the driver stay in a state of flow. This is the navigator's number one job: manage the big-picture details so the driver can stay focused on the code they are typing.

The second job of the navigator is to speak at the highest level of abstraction that the driver is able to digest at the moment. This will change by driver, of course, but it will also change during the day for the same driver. 
For example, a navigator might tell the driver "extract that method" and get a blank stare. The navigator should then restate that intention at a lower level of abstraction, such as "ctrl+alt+m" or even "highlight lines 14-20 and then press ctrl+alt+m". After doing that a few times, the navigator should be able to return to the higher level of "extract that method", but always realize that later the driver might be tired or just plain forgetful; adjust accordingly.

It is the navigator's responsibility to communicate in a meaningful way. This means you shouldn't be speaking above the driver's understanding. However, it is also the navigator's responsibility to be ever increasing the level of communication and understanding. Stay mindful of the driver and be constantly adjusting to them.

"Just do it"
Although you are the navigator, you are going to make mistakes and bad decisions. Correct them when you do, but don't sit around planning to try to avoid them. When someone has an idea, have them take the navigator role and go. Once it's out, look back with the benefit of hindsight and refactor or redo it so it's better.

The Mouse

When there are just two people, I find that often the navigator will end up with their hand on the mouse. This tends to work well when the activity is much more keyboard focused (like programming). It is horrible if you are doing something more mouse focused, such as graphics or installs.

Common problems with pairing 

The main reason I use this method of pairing is it solves many of the common problems I see occurring with pair programming. Here are a few of the problems and how this style of pairing addresses them.


1 person working and 1 watching

If your navigator isn't engaged then you really are not even pairing. Traditional pairing makes it unfortunately easy for the person not at the keyboard to zone out and disengage. This is literally impossible with the strong pairing style as the only way for that to happen is for both people to be doing nothing.

Fighting over the keyboard

Different people might have different ideas on how to implement something. While this still happens with strong pairing style it changes from wanting to grab the keyboard and shut out the other person to wanting to let go of the keyboard and communicate your idea to the other person. 

Expert/Novice Pairing

One part of pairing that is often highlighted as a problem is the Expert/Novice combination. You can imagine that watching someone fly through pages of code when you are unfamiliar with the language, editor, or even the business domain would be hard enough to follow, let alone contribute to. With the strong style of pairing, however, this becomes a very valuable combination for both the expert and the novice. In this style, the expert is almost always in the navigator position, having everything go through the novice. This is a very intense form of immersion learning. I am often surprised to find how quickly I can achieve basic fluency in a language or business domain, usually within a few hours or days of being the driver.
But what about the expert? The expert also benefits. First, they are not slowed down by the presence of the novice. This is a big deal, as it can be the main issue that prevents the two from working together and also prevents them from eventually becoming equals. The expert also benefits from being able to think at a higher level in the role of the navigator. Just as it is hard to look at a map and keep your eyes on the road as you drive a car, it is equally hard to look at the big-picture problem you are solving and type the individual letters needed to make compiling code. Finally, the expert benefits greatly from interacting with the 'beginner mindset'. You might have heard the saying "to really learn something, teach it", and some of my most profound insights have come when a novice has simply asked me "why?"

What are you thinking?

If the thinking is happening in the same person that is typing, then they usually are not talking. This means that you are supposed to guess what they are thinking through some sort of reverse engineering of the code that is appearing on the screen. By making the navigator speak out loud, you help ensure that everyone is on the same page. However, this is only the start of the benefit of talking out loud. A different part of the brain is used to talk than to type. When you say something out loud, a small amount of improvement occurs between your brain and your mouth (I often wish it was greater :-) There is another level of autocorrect that occurs when it is received by the other person (I frequently find myself thanking my driver for understanding what I meant instead of what I said), but there is also a sort of 'stupid trap' where the very bad or unformed ideas are not able to bridge the gap between the navigator and driver and get flushed out because of it. Finally, there is a checksum when the navigator sees the code end up on the screen. Because the navigator now has an expectation of what 'should' be appearing, they are in a position to actually review that it is what was wanted.

Just in time thinking

When the navigator gives a direction, they might not know the full path of where to go. In fact, the vast majority of the time I navigate, I do not know the path or even the final destination, only the next step. While the driver is implementing the current step, I have time to figure out the next step. In this manner the navigator can stay one step ahead of the driver the whole time.

Side Note: While this is very practical for production coding, it can sometimes lead to the misconception that the navigator knew what they were doing all the time but just didn't want to share it with the driver ahead of time.

Final thoughts

This method of pairing has been very powerful for me and is the main way I have been pair programming for the last 11 years. It is very effective, fast, and efficient. However, it can be very 'strong', as the name implies. Here are a few helpful tips I have picked up over the years to help soften it.

'ask for trust'

Sometimes the navigator and the driver will butt heads. I have often made the mistake of trying to argue the merits of my ideas when the actual problem is that the driver does not trust me. If you don't trust someone, all the logic in the world isn't going to help. A simple solution is to ask for a temporary window of trust. For example:
"I know this seems wrong, but could you please trust me for 4 minutes 
and then we will talk about the solution and remove it if you aren't happy"
After you get the code out there, you both will be in a place where you can talk about its merits. Of course, this doesn't mean you are right; you might find yourself deleting the code you just wrote. But it is faster to ask for trust, program for 4 minutes, and then delete it than to argue for 30 minutes to avoid ever having to write it the wrong way.

Take breaks to reflect and explain

Pomodoros (25 minutes of code, then 5 minutes walking around the block talking) can be a great way to lower the intensity of this style of pairing while increasing the shared insights and learning.
More importantly, always keep an eye out for the driver to make sure they get proper time to understand the high level picture as well. After you finish a section it can be good to stop and retrospect what just happened and ask questions.
If you are in a situation where the driver/navigator roles don't seem to change very often, it can also be good to force the roles to switch. In the short term this will slow production but it will make for a more well rounded team and have profound long term advantages.


Sunday, June 8, 2014

Duplication and Cohesion

A lot of the conversation around removing duplication goes something like:
"remove duplication"
The idea is there is a scale and your code should be on the low-duplication side of it.


Unfortunately, this is only half the story. There is a hidden, often forgotten, label that belongs on this scale: Cohesion


The rule you want to follow is this:

Choose the lowest duplication for the natural cohesion

Examples of low cohesion

My last name is 'Falco'. My sister's last name is also 'Falco'. This is duplication, but there is no cohesion between the names. If my sister marries and changes her name, my name should not change as well.

Because of the low cohesion between our two names, it should have high duplication.

Many unit test scenarios fall into this area, which is one reason you might end up with higher duplication in your test code.
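To make the names example concrete, here is a minimal sketch (the sister's first name and the record shapes are made up for illustration): two records happen to share a value, but merging that duplication would wrongly couple them.

```python
# Two records that happen to share a last name (duplication),
# but with no cohesion between them: a change to one should
# never ripple into the other.
me = {"first": "Llewellyn", "last": "Falco"}
sister = {"first": "Sue", "last": "Falco"}  # first name invented

# Tempting but wrong: extracting a shared FAMILY_NAME constant
# would couple the records, so a marriage would "update" both.

# The sister marries and changes her name; only her record changes.
sister["last"] = "Smith"

print(me["last"], sister["last"])
```

Because the cohesion is low, the duplication should stay.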

Examples of high cohesion

An advantage of working in many PHP systems is that no matter what I do to a page, the worst I can do is mess up that single page. PHP systems tend to have high duplication. The downside of this is when something changes, for example how sales tax is calculated on a page. Sales tax has high cohesion; I don't want it to be different for each page. This means that when I need to change it, I have to go to all the different pages that implement it and change them. The biggest problem is that this most likely means I will forget one of the pages.
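A sketch of the cohesive alternative, in Python rather than PHP (the rate and function names here are invented purely for illustration): because sales tax has high cohesion, the rule lives in exactly one place and every "page" calls it.

```python
SALES_TAX_RATE = 0.08  # invented rate, purely for illustration

def sales_tax(subtotal):
    """The single definition of the rule; every page calls this."""
    return round(subtotal * SALES_TAX_RATE, 2)

# Each "page" shares the rule instead of re-implementing it:
def checkout_total(subtotal):
    return subtotal + sales_tax(subtotal)

def invoice_total(subtotal):
    return subtotal + sales_tax(subtotal)

# Changing SALES_TAX_RATE now updates every page at once,
# with no page left behind.
print(checkout_total(100.0))
```

Remove the duplication here and forgetting a page becomes impossible; keep the duplication and every change is a scavenger hunt.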

Bob Martin has a nice blog on how to separate parts to remove duplication when there is high duplication but low cohesion.

Detecting Cohesion via Source Control

Think about the following example:
You are looking at your source control and discover that in January, five files all changed in a single checkin. Those same 5 files all changed together in February as well.
March - those 5 files changed together
April - those 5 files changed together
May - only 4 files changed together
June - those 5 files changed together

What happened in May?
Yes, someone introduced a bug. 

Think about that from a bug detection point of view. You didn't test, you didn't even know what the expected user behavior was, and you certainly didn't look at the code. You just detected a natural cohesion and noted that a change was missed in one place. This is why you want the minimal duplication for the naturally existing cohesion. If everything that needs to change together lives in only one place, you cannot forget the other places.
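This detection can be sketched in a few lines (the file names and commit history below are invented to mirror the January-June story): count how often each pair of files changes in the same checkin, and flag pairs that co-changed in every checkin but one.

```python
from collections import Counter
from itertools import combinations

# Hypothetical commit log: each entry is the set of files in one checkin.
commits = [
    {"a.py", "b.py", "c.py", "d.py", "e.py"},  # January
    {"a.py", "b.py", "c.py", "d.py", "e.py"},  # February
    {"a.py", "b.py", "c.py", "d.py", "e.py"},  # March
    {"a.py", "b.py", "c.py", "d.py", "e.py"},  # April
    {"a.py", "b.py", "c.py", "d.py"},          # May - one file missed?
    {"a.py", "b.py", "c.py", "d.py", "e.py"},  # June
]

# Count how often each pair of files changes together.
pair_counts = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        pair_counts[pair] += 1

# Pairs that co-changed in every checkin but one are suspicious:
# a natural cohesion where a change was missed exactly once.
total = len(commits)
suspects = [pair for pair, n in pair_counts.items() if n == total - 1]
print(suspects)
```

Every suspect pair involves the file skipped in May, which is exactly where you would go looking for the bug.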


TL;DR: If things that need to change together are duplicated, you have too much duplication. However, if you've removed duplication between similar things that don't change together, you will have problems when you change one of them.

Do you favor Removing Duplication or Expressive Code?

Some time ago Kent Beck observed that there was a pattern for simple design


  1. Runs all the tests
  2. No duplication
  3. Expresses developer intent
  4. Minimizes the number of classes and methods

There has been some debate over the ordering of #2 & #3. Of course, this is just an observation of a pattern ordered by personal preference, so it makes sense that you might have your own preference. But...
"What is your personal preference?"

It seems like that should be easy enough to determine, just ask yourself, but here's the rub:
cognitive scientists & economists have long observed that there is a great difference between

Stated values:  what we say.
Demonstrated values: what we actually do.   

If we just ask people, we will only get the stated values; to get what they actually prefer we need to see what they do in action. For that we need a test.


Removing Duplication Test

Take a look at the following:
Let's remove the duplication. Just circle the duplication that you would like to remove, then scroll down to see what your individual preferences are.










Answer #1

If your answer looked like this:


You would probably have written a piece of code like this:
    iLike("fishing", "biking", "playing", "swimming", "sleeping");

and you have demonstrated a preference for:  #3 Expressive Code over #2 No Duplication

Answer #2

If your answer looked like this:


You would probably have written a piece of code like this:
    iLike("fish", "bik", "play", "swimm", "sleep");

and you have demonstrated a preference for:  #2 No Duplication over #3  Expressive Code
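As a sketch in Python (the function names are mine, not from the post, and the "ing"-appending is imagined as living inside the deduplicated version), the two answers produce exactly the same list; the trade-off is purely readability:

```python
# Answer #1: whole words - expressive, but "ing" is duplicated.
def i_like_expressive():
    return ["fishing", "biking", "playing", "swimming", "sleeping"]

# Answer #2: the "ing" is factored out - no duplication, cryptic stems.
def i_like_deduplicated():
    return [stem + "ing" for stem in ["fish", "bik", "play", "swimm", "sleep"]]

# Both express the same content.
print(i_like_expressive() == i_like_deduplicated())
```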



Which did you prefer?



Tuesday, May 27, 2014

What's the impact of how long your tests take?


Preface: There has been a lot of talk lately about whether or not Test First is Dead. This is a complicated and tangled issue, so I want to simplify many aspects of this debate in this blog post to focus on a single aspect of TDD:
Test Run Time.
So for this blog, let's ignore the distinction between unit, integration, and acceptance tests, even manual tests. I'll lump them all together into a single box - 'tests' - so we can focus on the 2nd benefit of tests: Feedback.


How fast is fast enough for your tests to run? There is a lot of discussion. Minutes? Seconds? Fractions of a second?
Usually there is a very important step skipped in this discussion. People never seem to ask:

What does it matter how long they take?

Cycle Time

Have you ever needed to make 'just a small fix'? You change a line of code. Compile, save, deploy (maybe to your phone or web server), look at the result.

"Damn." That didn't actually fix it. You change the line again. Compile, save, deploy, look at the result... "Damn."

4-5 tries later and it's fixed. Less than 20 characters typed. Less than 60 seconds of work, and yet the clock says it's an hour after you started.

What is going on here?

Again, a lean graph is helpful in mapping out and understanding this system.

The problem is that for every 10 seconds of work you do, you have to wait 10 minutes to get your feedback. This is a highly ineffective way of working, which is why we don't do it that often.

Although this phenomenon is well known enough to inspire xkcd comics:
 xkcd compiling comic

To discuss this part, it is better to use the repeat notation to display the above lean graph.

Work to Wait Ratio

Now, we tend to want high feedback but we also want a good work to wait ratio. I find that in practice we want at least a 3:1 work to wait ratio.


That means if it takes you 10 minutes to get feedback (test it out) then you will probably end up coding in 30 minute or longer chunks before running the code. If compiling and running takes you 3 minutes you'll work in at least 9 minute chunks.
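The arithmetic behind those numbers can be sketched as a one-liner (the 3:1 ratio is the rule of thumb from above, not a law):

```python
WORK_TO_WAIT_RATIO = 3  # at least 3 units of work per unit of waiting

def minimum_work_chunk(feedback_minutes):
    """Smallest chunk of work people tolerate before seeking feedback."""
    return feedback_minutes * WORK_TO_WAIT_RATIO

print(minimum_work_chunk(10))  # 10-minute feedback -> 30-minute chunks
print(minimum_work_chunk(3))   # 3-minute feedback  ->  9-minute chunks
```

Run the same formula backwards and 20-second cycles demand test runs of 5 seconds or less.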

This allows a lot of time for mistakes to fester and complexity to compound.

A lot of good Test Driven Developers I know like to run their tests at least 3 times a minute. That means 20 seconds per cycle, which means an absolute max of 5 seconds per run to achieve the minimum work to wait ratio.

But this changes when you get to the 'very very quick' stage. If you are typing in Word, it underlines typos as you go. This might seem like a minor convenience over the 'Check Spelling' feature but, then again, when is the last time you remember running a spell check manually?

It's worth noting that I know many legacy systems where the compile time alone can take 3 hours. That means if I come back from lunch and program for 1 hour, I will not be able to see the effects of my changes before I go home for the night.

Discovery 

One last thing to consider is the aspect of discovery when creating. When you are trying to be creative and discover 'the right thing', the immediacy of feedback is very, very important. This is why many people find themselves preferring things like paper, whiteboards, and WYSIWYG editors when they are brainstorming.

This is part of why you are seeing more things like continuous test runners and website monitors that automatically refresh on file change. 

You'll find that the longer the cycle time, the more deliberate each action becomes. This drastically lowers the chance for a joyful discovery. Not many people played around with punch cards.

If you are interested in this aspect of feedback, I suggest you check out the talk [Inventing on Principle] by Bret Victor, who has thought about this more than anyone else I know of.





Monday, May 19, 2014

Test First vs Test After

There has been a lot of talk lately about whether or not Test First is Dead.
This is a complicated and tangled issue, so I want to simplify many aspects of this debate in this blog post to focus on a single aspect of TDD: time.

So for this blog, let's ignore the distinction between unit, integration, acceptance tests. I'll lump them all together into a single box  - 'tests'.

Let's focus on the speed aspects of Test First vs Test After.

Test After

To start, we'll look at the time it takes to write code using test after.
I'm going to use a lean graph to help map the differences between 'actual work' and 'non-work'. I get paid to write code, not tests, so I'll map code to work and tests to 'non-work'.
In this example, it took me 60 minutes to code a feature. Then more time to write the tests for that feature.

Now for the key question:

"Is it faster to not write the tests?"

In other words, which is true:
 a)  60  < 60 + X
 b)  60  > 60 + X

This is obviously  a)  60  < 60 + X, regardless of how long or quickly you can write the tests.
This doesn't mean that in the long term it isn't faster to write tests, but it does mean that in the short term, if you write tests after, it will always take you longer.

so what about...

Test First

When you have a test before you start, it can make it easier for you to write the code. This is because the tests can aid with specification & feedback on the task at hand. This saves some time. We'll assume for this example that it saves me 30 minutes while writing the code.

Now for the key question:

"Is it faster to not write the tests?"

In other words, which is true:
 a)  60  < 30 + X
 b)  60  > 30 + X

This is a more difficult question. It depends on how long it took to write the tests (what the value of X is). If X is less than 30 minutes it is faster. If X is longer than 30 minutes it is slower.

This means that the techniques, technologies & fluency of the programmer all affect the result of this equation. So it is not a clear cut answer & it will vary depending on the individual.


Conclusion

For me, it is often quicker to write the test. I have the advantage of years of fluency and tools like Approval Tests. However, the interesting point to me is that while Test After means writing tests takes more time,

Test First means writing tests may or may not save you time.


I believe this possibility is worth exploring.








Wednesday, January 30, 2013

Intention and Indebt


Today I have been reading a lot about technical debt, and I keep seeing terms like
“… at this point they choose to take on some technical debt…”

I have never seen a programmer make this choice.

Which is not to say I have never seen a programmer take on technical debt.
Allow me to explain with another metaphor:

Weight.

I have never said, “I think I will eat this piece of cake to gain some weight.”
I have eaten cake. I have not watched what I ate. I have not exercised afterward. As a result, I have gained weight. But it was never an intentional choice. It was just a side effect of my actions. It was NOT intentional.

Likewise, I have seen programmers:
  • Use bad names
  • Add lines to long methods
  • Add an additional If block
  • Add more methods to large classes
  • Comment out a section of code
  • Skip writing tests
  • Skip refactoring
  • Ignore extracting a common interface


But I have never seen a programmer  “take out a loan”. 

Technical debt isn’t like a mortgage, a large decision you make with a bunch of thought.  It’s more like a credit card or a bar tab. You just keep coding and coding, little by little, and then one day you realize you have a large amount of debt.

(Side note: I have quite often seen people realize that they are in debt and then actively decide not to pay off that debt; You could argue that this is a form of "intention" but deciding to keep your debt is not even close to deciding to take on debt.)

On the flip side, I see quite a lot of intention from people who stay fit and slim. They tend to actively watch what they eat, both the quantity and the type of food. They make a habit of exercise. As a result, they tend to think that everyone else does it too. After all, you chose to eat that ice cream, right? Yes, but I didn’t choose to get fatter; I actually didn’t consider my weight at all when I ate that ice cream.

And that’s the point: when we analyze why people choose technical debt, we tend to miss that the vast majority of people DON’T CHOOSE technical debt. It is a side effect of their actions, but never part of the decision.