Velocity Tracking
Velocity tracking is a part of agile development I often see skipped. I'm not quite sure why. I find it to be an incredibly valuable tool for the little amount of effort it takes. I also caution that it's a rather dangerous tool that can turn an effective engineering team into a factory production line. As Peppy says, "Use [velocity] wisely."
Velocity tracking is the practice of measuring how much work (story points) was completed (by the team and individually) over time (an iteration). You then use these measurements 1) to catch concerns that impacted the iteration and 2) to approximate how much work will be done next iteration.
I'll start with the method for tracking, calculating, and charting velocity, then move into the analysis.
As mentioned previously, I recommend using a spreadsheet to keep track of velocity: it's simple, extensible, and no-frills. I made it even simpler by giving you a Google Sheet template to copy. In the sheet, I use five example iterations demonstrating members joining and leaving the team. On the first iteration (1/24/2022), the team consists of Bob, Susan, and Karen. On the second iteration (1/31/2022), Phil joins the team. On the fourth iteration (2/14/2022), Susan leaves the team.
I would highly recommend you review each of the formulas to understand the "magic" so you can know what to do to customize it or fix it if things get messed up. If things get too messed up, you can always refer back to the template.
K1 ($K$1) contains the number of weeks in your iteration. We multiply that by the number of workdays in a week (5) and assume that a typical engineer will only be able to devote 80% (0.8) of their time to iteration tasks.
This formula is only on the first iteration because we don't know what the expected velocity should be: the expected velocity is usually that team member's previous actual velocity, which doesn't exist yet. You should also use this formula when a new member is added to your team, which I've shown in B9.
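To make the "magic" concrete, here's the same first-iteration calculation expressed outside the spreadsheet. This is a minimal sketch in Python; the function name and the 2-week example are mine, while the 5 workdays per week and the 80% focus factor come straight from the template:

```python
# Expected velocity for a member with no history (first iteration, or a new hire).
# The constants mirror the template: 5 workdays per week, 80% focus on iteration tasks.
WORKDAYS_PER_WEEK = 5
FOCUS_FACTOR = 0.8

def initial_expected_velocity(iteration_weeks: int) -> float:
    """Story points a member can reasonably commit to with no prior velocity data."""
    return iteration_weeks * WORKDAYS_PER_WEEK * FOCUS_FACTOR

print(initial_expected_velocity(2))  # 2-week iteration -> 8.0
```

This assumes a story point is roughly equivalent to a dev day, which is how the template seeds expectations before real data exists.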
There are a lot of colors on the sheet. Some of it (like the colors set in row 1) are just there to highlight some important setup items. The colors in columns B, D, and E are conditionally formatted, meaning they change color based on some formula. You should understand these formulas just like the cell formulas listed above. To see these formulas, in the Google Sheets menu bar, click on Format then Conditional formatting. Click on any cell that has conditionally formatted colors and the right-side drawer should display the different rules.
All the rules apply to rows 3 through 1001. This is because 3 is where our non-header rows start, and 1001 is as far down as the sheet goes.
Many columns have a rule of =TRUE=TRUE at the bottom of all the other rules. This is simply an "else" rule that says, "if no other rule applies, set the cell to this color." It's a way to ensure the color formatting applies even if a new row is added or a row is accidentally formatted.
If the cell is empty, it'll appear red to indicate it shouldn't be empty.
Otherwise, it's grey, letting you know you don't need to directly input anything into this cell (it's a calculated cell).
Start by changing the hyperlink on Task Board (A1) to your actual task board.
Change the Iteration length (K1) to the number of weeks in each of your iterations.
The first iteration is different from the others since there is no previous velocity to use for the expected velocity (column B). The second iteration is the pattern you'll copy for all future iterations. All the other iterations in the template are purely there as examples.
Rename Bob, Susan, and Karen to be your actual team members for the first iteration only.
If you have fewer than three team members, delete the extra rows (right-click on a cell in the row and click Delete row). You'll see several cells in columns B and D show a #REF error, which you can easily fix.
If you have more than three team members, add a new row (right-click on a cell below the row you want to add and click + Insert 1 row above). In column A, add the member's name. Copy columns B through E from an existing row (make sure to click and drag to copy them all together, as column C is hidden but still necessary to copy) and paste them into the new row. The paste should automatically update the formulas to match what is needed for that row. Repeat until you've added rows for all your additional teammates.
Update the Iteration Start Date to match your first iteration's start date.
Repeat the above steps for the next iteration, which contains a different formula in column B than the first iteration's formula for calculating the expected velocity.
Delete all other iterations' rows.
If you ever delete a row, you'll notice that the rows below it have #REF errors in columns B and D. This is because the formulas in those rows depended on the row you deleted, and the breakage cascades all the way down the sheet. It's also quite simple to fix:
Click and drag to select columns B through D of the row above the first #REF error. Copy those cells.
Click and drag to select columns B through D of the row that has the first #REF error. Hold down Shift and Command ⌘ (on a Mac) or Shift and Ctrl (on Windows), then press the down arrow on your keyboard. This will select all the cells in those columns down to the bottom of the sheet.
With all the appropriate cells selected, paste what you had copied. This should resolve the errors.
An iteration off day is a way to account for interruptions to the iteration that shouldn't impact the velocity. These can be many different events: PTO days, holidays, hackathons, company off-sites/activities, major release days, etc. They shouldn't be regular, recurring interruptions. Anything that is part of the regular expectations for an employee and takes away from their iteration tasks should impact their velocity. This isn't a punishment. It's a way to ensure realistic expected velocities for the next iteration. In short, if you feel that the current iteration's velocity, for one reason or another, will not be a good reflection of the next iteration's velocity for a team member, you can buffer it using this field.
To use the column, simply put in the number of days (not story points: days) that were interrupted. It's best to keep this as a whole number, but it's not required.
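As an illustration of how off days buffer the numbers, here's one plausible adjustment in Python. The proportional-reduction formula is an assumption on my part (the template's own formula is in the sheet, so check it there), and the function name is hypothetical:

```python
# One way to buffer a member's expected velocity for iteration off days.
# Assumption: off days reduce capacity proportionally to the iteration's workdays.
def adjusted_expected_velocity(expected: float, off_days: float,
                               iteration_weeks: int,
                               workdays_per_week: int = 5) -> float:
    total_days = iteration_weeks * workdays_per_week
    return expected * (total_days - off_days) / total_days

# A member expected to complete 8 points in a 2-week iteration, with 2 days of PTO:
print(adjusted_expected_velocity(8, 2, 2))  # -> 6.4
```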
You should now be ready to use the spreadsheet for your iteration planning. Let's talk about how to do that.
During each iteration planning meeting, you'll need to add new rows to the sheet for each team member for the iteration you are reviewing.
Click and drag to select columns A through E of the last iteration's rows. Copy these cells.
Paste into column A of the empty row right after the last filled in row. All formulas should automatically increment as needed.
Double-click on the Iteration Start Date of the first newly copied row and select the date that is this new iteration's start date. Copy and paste this value in all new rows.
Next, you'll need to update Task 1 through Task 6 and Iteration Off Days based on what is updated in your task board during the iteration planning meeting.
Start with a specific team member's new row for the iteration.
When a task has its completion history updated, you should update Task 1's cell with the story points fulfilled. To do this, multiply the story points of the task by the completion percentage delta between this iteration and the last. Repeat for each task the team member has worked on since the last iteration, updating Task 2 onward.
Example 1: say Bob has a task estimated at 4 story points. The task previously had a completion history of 10->40, and Bob says he is now 90% complete, so the completion history has been updated to 10->40->90. The delta is 50% (90-40=50). Task 1 should have the formula =4*0.5. In other words, Bob completed 2 story points' worth of the task this iteration.
Example 2: say Bob has another task estimated at 3 story points. He just started it this iteration and finished it, so the completion history is set to 100. The delta is 100% (100-0=100). Task 2 should have the formula =3*1.0, or for simplicity just input 3.
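Both examples boil down to one small calculation. A sketch in Python (the function name is mine, not part of the template):

```python
# Story points completed this iteration for one task, per the delta method above:
# the task's estimate times the change in completion percentage.
def points_completed(story_points: float, prev_pct: float, curr_pct: float) -> float:
    return story_points * (curr_pct - prev_pct) / 100

print(points_completed(4, 40, 90))   # Example 1: 4 * 0.5 -> 2.0
print(points_completed(3, 0, 100))   # Example 2: 3 * 1.0 -> 3.0
```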
With all the busy work out of the way, it's time to reap the rewards. At the bottom of the Google Sheet there is a Team Velocity sheet and one Velocity sheet per team member. These sheets should already be set up to handle the example team members and new iterations as they are added.
Each sheet has a Velocity Per Iteration chart, which indicates how many story points were completed in any given iteration. The Team Velocity sheet also has an Average Team Member Velocity Per Iteration chart, which shows the average story points completed by the team members in any given iteration.
There are no "good" or "bad" numbers. Instead, use the charts to look for trends. If you see a slow increase or decrease, what do you think is causing that? If you see a sudden drop or spike, why is that? There is also a 4-iteration moving average plotline (an average of the previous 4 iterations), which doesn't have enough data in the example to be useful, but does help find variance after many other iterations are added.
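If you're curious how the trend line is computed, a moving average over a 4-iteration window looks roughly like this (a sketch; one common variant averages the current iteration together with the three before it, which is what I've assumed here):

```python
# 4-iteration moving average, like the trend line on the velocity charts.
# Each point averages the current iteration with the three before it;
# early iterations get None until there's enough history.
def moving_average(velocities: list[float], window: int = 4) -> list[float]:
    out = []
    for i in range(len(velocities)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(velocities[i + 1 - window:i + 1]) / window)
    return out

print(moving_average([10, 12, 8, 14, 11]))  # [None, None, None, 11.0, 11.25]
```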
You'll note that the example team member velocity charts are tied to the example team members. That's easy to fix. Double-click on any one of the sheet tabs to rename it appropriately (e.g. rename "Bob's Velocity" to "<name of your team member>'s Velocity"). Then in B1 of that tab, change the name to the team member you want it to track. If you have more than three team members, simply right-click on any of the existing sheet tabs and Duplicate to make another copy. If you have fewer than three, right-click and Delete the tabs you don't need. Done!
It'll happen. Almost every iteration for at least one engineer, it'll happen. Someone is going to believe the amount of effort they invested that week is not conveyed in the velocity. "That task took way longer than I thought it would," or, "I had to backtrack halfway through the task because I realized I was going the wrong direction", or, "I spent all week just trying to get my code past the reviewer," or some other rationale for why they are unhappy with their resulting velocity.
Avoid the urge to simply adjust the actual velocity numbers to match the person's desired results. Otherwise, you're completely defeating the point of the data and learnings from velocity tracking and you might as well not use it. Instead, you should consider these moments in the context of the iteration, in the definition of "velocity", and in how we plan for tasks. Here are a few examples of how you can "fix" disagreeable velocities without breaking the system:
"The task was over/underestimated." Check the cases below first, but if it's truly a matter of poor estimation, then easy-peasy, update the amount of Dev Days/Story Points on the task and update your task's completed story points in the Velocity sheet with the new estimate in mind.
"I found a bug that I fixed while working on the task." This happens quite often and you don't want to discourage your engineers from fixing broken code as it's found. Even if they already fixed the bug, go ahead and create a task for the bug, outlining what the issue was. It's important to track bugs in their own tickets. Add that bug to the current iteration and fill out its completion history and story point estimate retroactively, then update the velocity sheet with the extra work the engineer put into resolving it.
"I refactored some old code while working on the task." The codebase is never going to be perfectly clean, and although you want to encourage clean code, you also want to discourage unplanned refactoring of already released code. No matter how lean and clean your code is, altering old code can have dangerous side effects and should be appropriately accounted for. Discuss as a team if the refactor was well warranted. My recommendation is that an unexpected refactor should only be performed when the refactor fixes a bug, when it significantly improves performance, when it addresses a security concern, or when the change was necessary to cleanly fulfill the given task. If it was warranted, then either consider the original task to be underestimated or create a new task for the refactor as already explained above. If it wasn't warranted, then the extra time spent on the refactor halted the planned work of the individual and should impact their velocity.
"There was scope added to the task." Feature creep should generally not be encouraged. Much like the refactor situation, if the feature wasn't accounted for in UX or technical designs or if it was included as part of your acceptance criteria or business/user cases then it may lead to more issues than good. That said, things are always caught during the implementation phase as that's where you go from theoretical to practical. Changes in scope should always be discussed as a team or at least with leadership. They should also either be separated out into new tasks or in some circumstances just increase the estimate of the task.
"I started with one solution, realized it wasn't going to work and had to pivot." Let it impact the velocity. For new employees, a new coding domain, or inexperienced engineers, this is the price of ramping up on the project. A dip in velocity shows that there's struggle and learning happening, which is good! Don't let it go too far though. Some members may feel that their own trudging through the problem is a sign of grit when it may actually be stubbornness or pride. Encourage pair programming or rubber-ducking. Collaboration is just as important to build up and unify your team as it is for preventing incorrect solutions.
"This task is slowing me down because it's so draining." Back when I was an engineer, there were a few quarters where we had some really painful tasks to get through, and we actually switched from estimating their story points to estimating their level of pain, which helped us empathize with team members who had to deal with daunting tasks. Some people deal with stress differently than others. My wife and I frequently use the spoons metaphor ("I had to deal with X, and it took all my spoons for the day.") as a means for her to express that something has drained all her energy for the day. Some tasks are monsters, and even though you spend only a couple of hours on them, you are done for the rest of the day. I highly recommend calling out such tasks as early as possible and giving them some extra story points to account for that. If you are seeing more than one of these per iteration, I'd highly recommend you uncover what's causing them, as it's a sign that your code environment is not conducive to happy employees.
"There were a lot of changes I had to make when submitting my code for review." You should account for about 10% of a task's time for PR/MR/code review. Sometimes, tasks take longer than 10% to get deployed. I recommend letting this slowdown show in your velocity. You don't want your review process to be continually bogged down by excuses. Instead, spend the appropriate time to fix what is slowing down the process.
Perhaps your deployment process is time-consuming. Create a task (or user story) to fix this! Your deployment process should be trivial and automated.
Perhaps the reviewers are making a lot of suggestions regarding code cleanliness. That's the job of a linter. If you're not using one, create a task to get one set up and run as part of your automated deployment checks. If the one you're using isn't sufficient, then either fix it so it is or stop requesting those changes as part of code reviews. It's inexcusable to have a coding standard that cannot be codified into a linter, as it will not scale with your team.
Perhaps the code submitted was just too much and too complex. Encourage the developer to break up their code into smaller chunks and to submit more regular code reviews. It's likely that they spent too long in one area before getting feedback and have to do some unexpected backtracking. If the code is too tangled to submit smaller code reviews, then you should read up on how to write clean and lean code.
Now that you have a handle on how to use velocity tracking, let's step back a bit and discuss the implications of it. There are a lot of different opinions about velocity out there. I also happen to have a lot of opinions about those opinions. For example, a lot of people recommend not tracking individual team members' velocity, noting that velocity is about the "team's effort". I think you miss an incredibly important data point that could help the team and that team member if you don't track individuals' velocity.
My greatest concern with velocity tracking (which I think is the common source of hesitation) is that velocity can easily be abused: to compare engineers to their teammates, to compare engineers to their historical selves, and to compare engineers to your expectations. It is all too easy to fall into believing that higher velocities mean better performance.
If you see a significant decrease or increase in velocity, it's important to ask why. It could be that your developer is dealing with an overly complex and frustrating issue that you otherwise wouldn't be aware of. It could be that the team isn't good at estimating certain types of tasks. It could be that a task wasn't appropriately scoped. Use the data to instigate critical discussions.
Do not use a drop in velocity as a sign for the engineer(s) to pick up the pace. The reason for the drop is always more complicated than laziness. Different engineers have different velocities. Lower velocities can be because the engineer is working on more complex tasks, underestimating story points, or has a lack of experience with appropriate scoping and planning of requirements. Additionally, many of your most helpful engineers tend to do a lot of glue work, which isn't accounted for in the velocity.
If you feel there are potentially personal matters that explain the drop, talk with the individual privately and be clear that you're there to listen and help instead of to criticize. Velocity should not be tied to performance or professional development goals. Smart engineers will stop using it as an effective tool and start using it to maximize their perceived value.
If you're still convinced an engineer is simply being lazy, then you've got a management problem that should be addressed immediately and appropriately. First, you should acknowledge that there's a solid chance that you're wrong. "Lazy" is likely the wrong word. The situation is probably a lot more complex than that. You're going to want to privately talk with that individual to discuss what you are observing and to listen closely to why they believe it is that way. You'll probably want to read Radical Candor prior to the conversation to understand how to keep it effective rather than defensive. At the very least, read about what could be causing a poor growth trajectory.
If you're still not convinced that there are underlying reasons you can help your team member address after talking with them, or if you are convinced that their justifications are misleading excuses, then I recommend working on a performance improvement plan (PIP) with them, which is the first step on a road to employment termination. Once you start down this road, know that most non-top-performer employees will take it as a defeat and will likely not recover, as a PIP is less about helping an employee improve and more a means of documenting that their termination was due to consistently poor performance.
Whatever you do, do not equate low velocity with laziness or poor performance. You should have indicators other than a low velocity that lead you to such a conclusion.
One final caveat on velocity tracking. You may see a subtle but steady decrease in overall velocity. There are many possible reasons for this, but it may be your best indicator that you need to raise a red flag and reevaluate your software practices.
The strongest advice I can give you, however, is to adopt the full Circle of Life, including, and most especially, the technical practices. Far too many teams have adopted the outer business ring and have found themselves in the trap that Martin Fowler has labeled “Flaccid Scrum.” The symptoms of this disease are a slow decline from high productivity in the early days of a project to very low productivity as the project wears on. The reason for this loss of productivity is the corruption and degradation of the code itself. It turns out that the business practices of Agile are a highly efficient way to make a very large mess. Furthermore, if you don’t take care of the cleanliness of the structure you are building, then the mess will slow you way down.
Robert Martin (Uncle Bob). Clean Agile.