Tuesday, November 19, 2019

Thoughts on CodeRetreat

Last weekend I attended a code retreat as part of the Global Day of Coderetreat. I love code retreats. I’ve been attending since 2010. I’ve organized and even facilitated some. But I was rather disappointed in this one. I wanted to write down the issues I saw, not as a criticism of code retreat, but in the hope of bringing back the original feel of the event and of growing more of them. As such, I’m using each issue to better understand what I dislike and to propose a change that would improve the experience.

What it is

CodeRetreat is a full day of practice. It focuses on TDD and design, and you work on the same problem over and over again in 40-minute sessions. Often there are small variations to help explore new areas of learning. There is often a small retrospective between sessions.

Issues:

Pairing

To be clear, I am a HUGE fan of pair programming. However, there is a skill to pair programming, and many people at a code retreat have never practiced it before. Having the 1st session default to the style where the driver (the person at the keyboard) is typing and thinking tends to lead to a session where the navigator (the person not at the keyboard) is just watching. It makes it hard to rotate, and it discourages trying new languages.
Solution: I would have the 1st session be strong-style pairing. The difference is that the person at the keyboard “is not thinking”. This makes it easy to work in a new language (you just type for the other person). It maximizes communication and helps people connect. After the 1st session, I would introduce the traditional style of pairing and let pairs choose whichever they prefer.

Retros

Retros are a good way to learn from experience, but there are two issues with the current style. The first is that we don’t really have a shared experience, so listening to each other talk is full of misunderstandings. The second is the deletion of code (more about that later), which amplifies the first. As a result it was hard to get value out of the retros, and they felt more like stand-up status meetings.
Solution: I would advise a two-phase retro. First, the teams spend 5-10 minutes retroing their experience by themselves; then they choose their best insight and share that single insight with the group. I would also include the language used and the number of tests written in this group share.
Example group share: “We worked in Java & wrote 9 tests. We were surprised by how hard it was to work around passing back booleans until we changed to passing the if/else blocks into the method.”
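To make that kind of insight concrete, here is a minimal sketch in Java of the refactoring that group share describes. This is my own reconstruction with made-up names and a simplified survival rule, not the pair's actual code:

  // A Game of Life cell, using a simplified survival rule for illustration.
  class Cell {
      private final int liveNeighbors;

      Cell(int liveNeighbors) { this.liveNeighbors = liveNeighbors; }

      // Before: a boolean comes back across the boundary and the caller writes the if/else.
      boolean willLive() {
          return liveNeighbors == 2 || liveNeighbors == 3;
      }

      // After: the caller passes the if/else branches into the method as two Runnables,
      // and the cell decides which one to run.
      void nextGeneration(Runnable ifAlive, Runnable ifDead) {
          if (willLive()) { ifAlive.run(); } else { ifDead.run(); }
      }
  }

  class Example {
      public static void main(String[] args) {
          new Cell(3).nextGeneration(
              () -> System.out.println("keep the cell"),
              () -> System.out.println("remove the cell"));
      }
  }

The second version is the “passing in the if/else blocks” idea: the boolean never crosses the boundary, so the caller has nothing to branch on.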

Deleting Code

Code retreat has a rule that you delete your code at the end of a session. This is a rule that gets violated all the time, but has a good intention. It’s designed to make it safe to experiment. I believe in that safety. But I also believe in the ability to review and learn from the code you wrote. Deleting it prevents reviewing and sharing.
Solution: First, wait until after the retro to delete the code. Second, make deleting optional for the group, but give anyone in the group the right to insist on deletion. In other words, unless the decision to keep the code is unanimous, the code gets deleted.

Mobbing

I’m surprised there isn’t at least one session of mobbing and an option for mobbing in most sessions, although I wouldn’t recommend it for the first session, as it cheats people out of a chance to struggle.
Solution: I would offer mobbing as an option.

TDD

Test driven development is still very new for many people. I’m surprised by the number of pairs that don’t write any tests even though it is stated as a constraint. I think you need sessions just to build this skill. Probably many sessions. In the retros it was common to hear that pairs hadn’t written any tests.
Solution: Slow down on the addition of constraints. Talk about the testing cycle. Ask about the tests. Give people paper so they can write the test down before the code. I would even make a place to post the papers to share.
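To make “talk about the testing cycle” concrete, a first test can be tiny. A sketch in Java with JUnit 5, using a hypothetical Cell API that doesn’t exist yet (writing just enough of it to pass is the next step, which is the whole point of the cycle):

  import org.junit.jupiter.api.Test;
  import static org.junit.jupiter.api.Assertions.assertFalse;

  class GameOfLifeTest {

      // Red: state one rule as a failing test before any production code exists.
      @Test
      void liveCellWithOneNeighborDies() {
          Cell cell = new Cell(true);                // a live cell
          assertFalse(cell.aliveNextGeneration(1));  // one live neighbor -> it dies
      }

      // Green: write just enough Cell code to make this pass, then refactor and repeat.
  }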

Constraints

Constraints can be a great way to explore new areas of design. But too many at once remove all the usefulness. Constraints are used a lot at code retreats, so I’m going to address them individually. By the 4th session we were doing Game of Life with no talking, no primitives across a boundary, and no if statements. We had not mastered any of these, and most people didn’t actually follow the restrictions. It also massively discourages using a new language, since it becomes an advanced design session.
Suggestion: make constraints optional and give a variety of choices. Encourage not using them if you are doing the session in a new language; the new language is the constraint.
Constraint - Primitives: This is open to a lot of misinterpretation. It’s designed to address the code smell of primitive obsession, which means you shouldn’t pass around numbers and strings but objects that have meaning: Money instead of a double, or Name instead of a string. However, there is an interesting question about constructors: is the code “new Money(100)” allowed? There is also an easy misinterpretation that the specific language definition of primitive is in play, which could turn this into just an autoboxing exercise in Java: use Boolean instead of boolean, Integer instead of int.
Suggestion: have examples and a slide explaining the constraint.
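Something like the following Java sketch could go on that slide. The names are illustrative, not from the event:

  // Primitive obsession: a bare double could be a price, a weight, a percentage...
  // Wrapping it in a small value object gives the number a meaning and a home for behavior.
  class Money {
      private final double amount;   // the primitive now lives only inside the object

      Money(double amount) { this.amount = amount; }

      Money plus(Money other) { return new Money(this.amount + other.amount); }
  }

  class Checkout {
      // Objects with meaning cross the boundary instead of raw doubles.
      Money total(Money price, Money shipping) {
          return price.plus(shipping);
      }

      Money example() {
          // The constructor question from above: "new Money(100)" still touches a
          // primitive, but only once, at the edge where the object is created.
          return total(new Money(100), new Money(15));
      }
  }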
Constraint - Muted ping-pong pair programming: This constraint removes communication from pair programming. It’s rather masochistic. I don’t understand why you would do this other than to show some of the pain, which can help learning, but it contributes to the general issue of making the day less fun. Since these events are usually on Saturdays, that is the opposite of what you should be striving for.
“The most important part of practice is making sure you want to show up for practice tomorrow”
Solution: remove this constraint entirely. At a minimum, make it an optional constraint rather than a required one.

New languages

One of the things I loved most about code retreat was trying new languages. This time that was hard to do because of the number of other constraints that were in play.

Fun

I also noticed that we ended with fewer people than we started with. This is amplified by the fact that my partner and I were the only two people who didn’t work for the company hosting the code retreat, and I live in a very programmer-heavy area. Given the number of programmers in the Bay Area, there should be multiple code retreats and all of them should be overflowing.
I have had sooooo much fun at code retreats. Practice should be fun. I feel that some of that has gotten lost in translation.
Solution: Bring back the focus on fun. Make it a stated objective.

Tuesday, April 2, 2019

The Problem with Hackathons

Today I went to TestBowl 2019 - Software Testing Competition.
I had some reservations going into it, but it was in my neighborhood and only 3 hours so I thought "why not".

It was disappointing, but the thing is, I've been to 40-50 practice events for programming. I've hosted quite a few of them myself. So why was this different?

This blog is an exploration into that question...

The Event

The event had about 20 to 30 people. I believe all of them were at the conference anyway (meaning I was the only local). They had dinner and did some questions that were billed as icebreakers but didn't involve actually talking to other people. There were some interesting trivia questions, though.
After that we broke into teams of 2 and were given a Test Target (a government website) and a Bug Tracking system (from the company that sponsored the event). This was a contest, so the points went to the most bugs submitted (this is a simplification, but basically true).

We used a mind map to log bugs as we went and only entered the results into the tracking system in the last 15 minutes.


We tested for around 90 minutes. Then it was over and I went home.

What was wrong

Testing

One very possible reason I didn't enjoy this very much is that it was just exploratory testing. While I admire exploratory testing, it doesn't bring me the joy that programming does. While I do not believe this to be the reason, I think it's important to acknowledge this bias in me as it could color my impressions.

Conversely, I do think it is this bias that is allowing me to see the issues in this event more clearly. It was lacking the "spoonful of sugar," which made the distasteful parts more salient.

Contest considered Harmful

The single most harmful thing about this event to me was that it was a contest. The reason I enjoy, attend, and participate in hands-on practice sessions is the learning. That's always where the value and focus are. The contest changes that focus and obscures the learning.

For example, there was the option to work alone. I think some, or even most, of the other teams worked separately. This is much worse for learning, but it is understandable that people would prioritize it when "the goal" is the number of bugs found in 2 hours.

Let's imagine working alone. It's rather hard to see what you would learn if you don't have anyone to learn from. But even with just 2 people there are fewer chances to learn tricks you can use later. Also, there weren't any constraints, which are usually helpful when learning because they force you to work in a new way and discover new techniques.

Alternative Motivations  

It's obvious that some of the motivation for the hosts was to get people to try out their software. This isn't inherently bad; learning new tools is often useful. However, because the contest was drawing attention away from learning, we used the tool without really learning it. I didn't come away with ways the tool could help me test. This is something that could be addressed better if the focus were solely on learning.

No Retrospective

Another downside of the contest is that there isn't much sharing between teams. There was sharing of the scores, but this didn't pass along learning; it just increased the focus on the scoring and the task. The competitive aspect tends to prevent sharing and reflection. The goal being "to win" instead of "to learn" also means that when the work is over you are tempted to leave, because it feels like the event is over. You don't stick around to learn because that wasn't the point.

No Space for Stretching

"Hard in training, easy in combat"
Another element that was missing was the space to try new things. Humans tend to be in a state of either practicing or performing. Part of fitting into the time limit was prioritizing getting everything done, which left little space for actual practice.

In conclusion...

The reason I enjoyed the other practice sessions is the hands-on learning. I hadn't realized how much effort had been put into making sure that was happening.

Monday, December 3, 2018

Safeguarding: A step-by-step guide

By: Llewellyn Falco, Josh Widzer, Jay Bazuzi

TL;DR:
Bugs happen. While you could simply fix them, you could instead take an extra step to prevent similar mistakes from occurring again. This 25-minute process will do that.

We will go into the philosophies and reasons for doing this in other articles.
1. When to do it
Do safeguarding right after you’ve fixed a bug. The same day or next is good. This is when it’s fresh in your mind and when improving the system still feels relevant.

The key roles that need to be in the room are the people who:
  1. understand what happened and why
  2. wrote the bug
  3. detected the bug
  4. fixed the bug
  5. can approve the time required to work on the fixes (e.g., the project manager)
  6. might resist the proposed remediation

2. Root Cause Analysis (RCA)
We are going to gather impartial observations about what happened.

Create a Google Doc that everyone can access, with the following tree:
  • What caused us to write the bug?
  • Why didn’t it get caught sooner?
  • What made it hard to fix?

Everybody is going to start adding nodes under these three headings. They will also add questions in response to the nodes. All of this is done at the same time by all attendees without any talking.

This section is timeboxed to 10 minutes.

3. Vote
We will do a version of dot voting. Everyone will vote on as many items as they want but no more than once per item. Voting is done by putting your initials at the front of any item.

Example:
  • What caused us to write the bug?
    • [JW, LF]  requirements were unclear
    • [LF] Name of a function lies about what it does.
  • Why didn’t it get caught right away?
    • No automated Tests
    • [JW] Hard to write automated tests for this section of the code
  • What caused debugging time / cost?
    • Logs were too verbose, so we didn’t see what was going wrong.
    • [JHB, JW] Hard to redeploy site

After voting, copy the top 3-4 items into a section labeled: Remediations
This section is timeboxed to 3 minutes.

4. Budget
Before we come up with solutions, start by asking: was the total impact of this bug small, medium, or large? Have everyone hold up 1, 2, or 3 fingers. Pick the most common answer. Then propose an initial time box:
  • Small = 1/2 person-day
  • Medium = 2 person-days
  • Large = person-sprint.
Since we execute solutions immediately, this is the point where we need the approval of the project manager.
Because we intend to do 3 solutions, each solution can only take ¼ of the total budget (to allow some slack). This means that each item will be budgeted to:
  • Small = 1 hour
  • Medium = 1/2 day
  • Large = 2.5 days
(These follow from the budget above, assuming an 8-hour day and a two-week sprint.)

This section is timeboxed to 2 minutes.
5. Identify Remediations
Next we are going to brainstorm improvements to our system to reduce the chances of this issue happening again. Going back to the Google Doc, everyone will silently add ideas to the second section that we copied from the top brainstorming section.
Solutions that require extra discipline are bad solutions. We are looking for ways to make success easier.
Remember that these are timeboxed solutions meant to improve our system and environment, not to solve everything. Small improvements often yield big returns, and if the problem still persists, we will get another chance to do safeguarding in the future.

Example

  • Remediations
    • Requirements were unclear
      • Bring users into our grooming sessions
      • Earlier usability tests
      • Do earlier demos
    • Hard to redeploy site
      • Write down checklist of deployment steps
      • Automate build/test/upload sequence
      • Get a second server to deploy blue/green
After 7 minutes, everyone votes again, just as we did in step 3.
This section is timeboxed to 10 minutes [7 brainstorming, 3 voting].
6. Add items to task board and do them!
Because safeguarding work already has time approved, start on it immediately.
Nothing we’ve done so far matters if we don’t implement any of the solutions. To make sure useful action comes from this exercise, immediately add the items to the task board (the budget was already approved in step 4) and start working on them. Remember that these items are timeboxed and are not meant to completely prevent the problems in the future, but rather to lessen the chances.
Final Notes
It is important to remember that safeguarding is a skill. The first time you do it, be patient and give yourself extra time, maybe an hour. Also, remember to practice it regularly, usually once per week, as you will get better at doing the process and finding good remediations.

Special Thanks to Arlo Belshee for creating Safeguarding.


Tuesday, August 15, 2017

Mindmap retrospective

One common way I do a quick retrospective after exercises, to "burn the fuel of experience," is an observation retro. I first learned to do these with post-it notes, grouping them into clusters based on similarity. I still like and use that method a lot, but I find myself doing a mind map version very often because of its ease and iterative nature. This is a short write-up on how to do one.

Mindmap from Agile2017 session "The ROI of Learning Hour"

  1. Open a mind map
    (I use MindMup)
  2. Label the middle (blue) node
  3. Collect observations from the audience 
  4. Add structure as needed

Collecting Observations


This is pretty simple. Ask for observations. When someone shouts one out, add it to the mind map. It's OK to rephrase; try to keep each one as short as possible (1-2 words), but add a whole sentence if needed. For example, someone might say "the chart with urgent things getting in the way of important things," which I would type up as "Important vs urgent".
Another side note: ask for "observations" rather than "learnings". This might seem small, but it can make a large difference in the amount of feedback you get. Learnings can be intimidating and make it seem like there are right and wrong answers.

Adding Structure

Anytime I saw 2 or more concepts that shared a common base or extended an idea, I would add that node and move things around the map. I highlighted these examples in yellow above. This does a couple of things:
  1. Calls out abstractions
  2. Triggers more observations
You might notice I also added "thresholds" even though there was only one idea under it. Or that I didn't add "small changes over time", but did extend the ideas of 300 pushups, micro habits & change blindness to it.

Abstractions also trigger variations. If we are looking at this blog post and someone points out the labels, I could abstract that to "fonts", in which case they might also point out the bold or normal fonts. But I could also abstract it to "formatting", in which case I might get color (black, blue), numbered lists, tabs, images, and text justification.
Either way, more of the experience is being inspected.

This process of adding structure to the observations is an interesting way of facilitating. It sort of reminds me of 'Training from the Back of the Room' (although I am clearly at the front of the room during this).


Sunday, July 9, 2017

On Investing

A while back I started really looking into the math of compound interest. I even made a video about it. All of this got me thinking about my own financial investments. While I've always been good at saving, I've never been that good at investing, so I did a little reading and then ran an experiment: I took my money, divided it into thirds, and tried out 3 investment ideas. I also set up a calendar alert for 1 year in the future to review the results. Today that calendar alert went off; here are the results.

Betterment

Betterment is a robot trader. The idea is to be like an index fund but a bit better. My results were the opposite: it was like an index fund but a bit worse. Still, this isn't to say it was bad, just that it always lagged a bit behind the index fund I bought.

Results: 11.4%


S&P 500 Index Fund

Vanguard's S&P 500 index fund was a solid choice. Like Betterment, it also has extremely low fees, and it consistently gave slightly better results.

While it seemed almost the same, I would like to point out that results vary with compounding, and the 1.5% difference would add up over 40 years. Let's take $1,000 for 40 years:
At 11.5% = $77,800
At 13% = $132,781
So 70% better over 40 years.
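Those figures fall straight out of the compound-interest formula, future value = principal × (1 + rate)^years. A quick Java sketch to reproduce them:

  // Reproduces the numbers above: $1,000 compounded annually for 40 years.
  class CompoundInterest {
      static double futureValue(double principal, double annualRate, int years) {
          return principal * Math.pow(1 + annualRate, years);
      }

      public static void main(String[] args) {
          System.out.printf("At 11.5%%: $%,.0f%n", futureValue(1000, 0.115, 40)); // ~$77,800
          System.out.printf("At 13.0%%: $%,.0f%n", futureValue(1000, 0.130, 40)); // ~$132,800
      }
  }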


Results: 13%

Stocks

The remainder I split equally among 5 companies. This requires a fair amount of explanation, so I'll start with the basics and go into detail afterwards. The main takeaway here is that it's very volatile, with swings of as much as 10% in a given week. Compared to either Betterment or Vanguard, this is a rather extreme change. I also feel there is just a bunch of outright luck involved and that I might regret this at any moment. However, so far it's been the best investment of the three.

Results: 32.6%

Stocks - My rationale/rationalization

I had a fairly simple investment philosophy: Invest long term in companies with smart people doing smart things.

As such I  bought 5 companies:
Facebook - Impressed by developer culture, CI practices & the hiring of Kent Beck
Google - Impressed by 20% time, Go, Kubernetes, AI and culture continuously refined by Larry Page
Amazon - Impressed by microservices, continuous focus on market growth, and AWS
Netflix - Impressed by Devops, open source, pivots and team cultures.
Tesla  - Impressed by the products and CI in cars (actually know very little about the company inside)

I only bought once. I didn't do any day trading. I didn't do any financial investigation. This might seem a bit irresponsible, but my theory is that it's all a bit of a gamble, and I'm more likely to overvalue my understanding than to gain real insight. I'm also not doing any market analysis of how the companies 'fit' into the bigger market. I'm simply trusting that smart people doing smart things are going to win.

I would also like to state that I think I might have just gotten lucky. It's easy to fall prey to survivorship bias and assume that success was somehow predestined.

The stocks make me a bit nervous, but I also realize that they have a much larger potential to generate real wealth; $1,000 for 40 years at 32.6% = $79,751,886

Monday, March 13, 2017

Why we did a speed meet at our conference and why you should too!


At European Testing Conference 2017, we had a full session devoted to a speed meet.

What is a speed meet? At its most basic it's talking to someone new for 5 minutes, then rotating and doing it again 9 times.

Here's what it looked like:


Mind Maps: What do you talk about?

Of course this raises the question: what do you talk about? To solve this, we took a suggestion from Jurgen Appelo and had everyone make a small mind map about themselves. When you sat down, you handed your map to the other person. There is a lot of information between the 2 mind maps, and people would easily find something they were interested in. And this is the rather amazing thing about geeks:

5 minutes to make small talk is terrifying for geeks.
Given a topic they care about, 5 hours isn't enough time.

Here's an example of one of the participants' mind maps:

Why do this?

Conferences are amazing places. They're a great opportunity to mix and talk with many people you wouldn't normally get a chance to interact with. However, if you are new to a conference, this can be an overwhelming and terrifying prospect. While most people are friendly after you meet them, strangers never seem that way. We wanted to make it easier to have a good 'hallway track'. After talking to 9 people, everyone had found at least 1 person they liked. The conference became a lot more friendly. We also heard more things like:

"Kara! Have you met Matt?"


Lunch

Lunch time can be especially uncomfortable if you don't know anyone at a conference. Finding a place when every table is full of strangers already talking? Often we just try to find a place to hide away and eat quietly. This is why we did the speed meet on the morning of the first day of the conference. Lunch was right afterwards, and it was nice to know at least 1 person to eat with.
lunch should be friendly, not scary

Details:

Just do it

Structure and lack of choice are your friends here. Notice that while we normally had 3 tracks, we only had 1 during the speed meet. We didn't want to encourage people to skip it. We also spoke to the speakers to encourage them to participate. It can be a special treat for a newbie to get a chance to speak 1-on-1 with a presenter.
We also didn't do it as an 'optional' morning session. Those sessions usually have a very low percentage of the conference attending. For example, many conferences have a lean coffee morning session, but for a 1,000-person conference it isn't unusual to have only 20-30 people at these.


Homework

We gave people multiple chances to create their mind maps beforehand:

  • Emailed the day before conference
  • Mentioned at Speakers dinner
  • Mentioned in opening slides for the conference
Nonetheless, there were still a bunch of people who put theirs together as the session started. That's OK; it's meant to be quick and easy. We provided lots of paper and pens.

Rotations

I highly suggest a few (4-5) practice rounds of moving 1 seat to the left. It's amazing, but if everyone waits for the seat next to them to become empty (times 150 people), it can take a few minutes to move everyone. If everyone stands, moves & sits at the same time, it takes 3 seconds.

Early

This sets the tone for the conference. Do it early, not at the end of the conference.



How did it work out?

Excellent! It can be hard to judge the effectiveness of an activity. We did a retrospective and got many notes about people liking it, but is liking it the same as it being good? Maybe they just remembered it because it was different?
I had a bit of an advantage, as this was the second year for ETC and we could compare it to last year. I also have all the other conferences I attend to compare it with.
However, the biggest indicator for me was the party the first night. While it's hard to articulate, it just felt friendlier. People moved between tables more, talked more. The whole atmosphere felt warmer. 

10 / 10 Would repeat!