Two Common Mistakes In Understanding Sprint
And a Helpful Mental Model to Get It Right
About the Author

Kiryl Baranoshnik is an experienced Agile and Lean practitioner. He works as an Agile Coach at a large bank in the Netherlands and shares his knowledge publicly as an Agile trainer.

There are two mistakes in understanding the concept of a sprint (or Agile iteration) that I see quite often. Wherever they appear, these two mistakes completely negate the benefits that the sprint brings. They reduce the transparency and predictability of delivery and add unnecessary procedural overhead. Those two common mistakes are:

  1. Making a sprint longer. This mistake is typical when a team is falling behind schedule but still wants to complete the entire planned scope before declaring the end of the sprint.
  2. Cloning (digital) cards or tickets from one sprint to another. This happens whenever a team has incomplete work items at the end of a sprint but clones them into subsequent sprints instead of carrying them over. The original items get closed as if they were done.
Over the years, I've employed a mental model that helps me combat these mistakes, and I'd like to share it with you.

The concept of a timebox can be traced back as far as the late 1980s. As it was adopted by Extreme Programming (under the name 'iteration') and Scrum (as 'sprint'), it gained even more popularity and became an essential part of applied Agile [1]. The power of this practice lies in the idea that instead of fixating on scope, a team must focus on delivering as much value as possible within a limited amount of time. However, this essential meaning often gets overlooked. The very name of the concept, a timebox, hints at how you should treat it: as a time span and as a bucket (a 'box') at the same time. Let's explore how these two aspects get misinterpreted and which thinking tool can help you avoid the misinterpretation.
Mistake: Making the Sprint Longer to Accommodate Scope
A team has planned 10 user stories for a two-week sprint. On the last day, 9 of the 10 are done, and the team is fairly certain the remaining user story should take about 2 days to complete. The team insists on extending the sprint. They eventually finish the story in 4 days, which results in an actual sprint duration of 14 days. The same keeps happening in the following sprints.
When a team is slipping on schedule, the intuitive response is to make the sprint longer.
Schedule slippages are common in software development, to the point that they may be considered inherent and unavoidable. When a slippage occurs, people have an intuitive desire to extend the sprint in order to deliver the full planned scope. The team, its management, or the product owner thinks that completing the full scope is what defines the end of an iteration.

This mistake has its roots in traditional management, which revolves around scope commitments as the primary tool for execution control. As soon as the scope for a project is defined and estimated, a commitment to delivering that scope is formed. The key focus is now fulfilling the commitment, i.e. delivering the scope before the estimated deadline, and if that's not possible, the easiest fix is to adjust the schedule. I'm oversimplifying here, of course, as there are still options for de-scoping and playing with team size. Nonetheless, in this paradigm completing the full scope is considered the baseline to adjust from. And that's exactly what the concept of a sprint (aka iteration or timebox) tries to turn upside down. This is aptly illustrated in the well-known diagram:
In traditional software development, the scope is usually predefined and protected (fixed), while time, cost, and sometimes, as a consequence, quality are negotiable (variable) as circumstances change. In Agile, time, cost, and quality are fixed instead, and adjusting scope becomes the main lever for reaching success. Adapted from DSDM Philosophy and Fundamentals [2].
The adverse effects of changing iteration length at will are quite dramatic.

First, the whole process turns into chaos and throws the team off pace. The activities that are supposed to be regular and automatic now require case-by-case scheduling. This creates unnecessary procedural overhead. The team quickly gets tired of it and slips into an ad-hoc process, and ad hoc in reality means no process at all.

Second, any kind of planning based on historical data becomes over-complicated and unreliable. One of the most popular metrics for generating historical data in Agile teams is velocity. Just as in physics it is a measure of distance covered per unit of time, in software development it is the amount of work completed per unit of time (the definition of 'amount of work' may vary from team to team). Now imagine your unit of time is variable. The very definition of velocity no longer makes sense.
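To make this concrete, here is a minimal sketch, with made-up story-point numbers, of what happens to velocity once the sprint length starts varying:

```python
# Hypothetical story-point totals; velocity only stays meaningful while
# the unit of time (the sprint) is constant.

fixed_length_sprints = [21, 18, 24, 20]  # points per fixed 10-day sprint

# With a constant sprint length, the numbers are directly comparable
# and average cleanly into a forecast:
avg_velocity = sum(fixed_length_sprints) / len(fixed_length_sprints)
print(f"Average velocity: {avg_velocity} points/sprint")

# With variable lengths, "points per sprint" compares unlike units:
variable_sprints = [(21, 10), (26, 14), (24, 12)]  # (points, actual days)
for points, days in variable_sprints:
    print(f"{points} points in {days} days = {points / days:.2f} points/day")
# You are forced to normalize per day, which is no longer a per-sprint
# velocity, and forecasting the next sprint's capacity becomes guesswork.
```

Once you normalize per day, you've silently swapped your unit of time, and every past sprint becomes a special case instead of a data point.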
Think Instead: Sprint is a Unit of Time
Instead of thinking about how long it will take to complete a certain scope, think about how much you can complete within a certain time span.
Let's take an astronomical hour as an example. You can't change the duration of an hour; an hour is an hour by definition. A sprint should be regarded in the same way. Consider this analogy from everyday life: if you need to mow a lawn but have only managed to do half of it in an hour, that's your result, no more and no less. You may keep on mowing, but all additional work happens during the next hour. You don't extend the hour. What you do instead is adjust your plans and priorities for the next hour and reflect on why you didn't finish on the first attempt. You also get an understanding of your velocity: 0.5 lawns/hour. That's exactly what a sprint is. It is a unit of time, not the amount of work you plan for it.
Mistake: Cloning Incomplete Work Items Instead of Carrying Them Over
A team has completed a sprint but "User Story X" is still not finished. The team wants to visualize the progress they've made on this story. On their task board they have a card for it. They rename it to "User Story X Part 1" and move it to Done. They also create a card named "User Story X Part 2" and put it in the next sprint. They don't manage to finish the story in the next sprint either. So they create "User Story X Part 3" and now move "Part 2" to Done. This exercise repeats itself a few more times.
A team wants to make it explicit that they've completed part of the job, so they clone the original item, often multiple times across several consecutive sprints.
I've seen this one quite a lot, usually with digital task trackers (the much-hated Jira being one of them, which is quite surprising because Jira natively supports the "correct" way of carrying items over), probably because the copy operation is so cheap there. Whenever a team is not able to complete a story, they clone it into the next iteration. The original story becomes "part N" and gets closed, and the clone is now "part N+1".

There are a number of possible reasons why this happens. The most obvious is probably the team's desire to reward themselves for performing work; otherwise, they are left with a feeling of unfairness. Another typical reason is cargo-culting the golden rule of fitting a user story into a single sprint. I've seen both in my career. Instead of focusing on the root cause, the team simply makes up an artificial completed item.

Such a way of tracking work has numerous issues.

First of all, it never actually shows real progress. Whether a user story is done or not can only be evaluated with the ultimate objective criterion: is it live or not. In Scrum, you can strengthen (or weaken) this criterion by means of the Definition of Done. This aligns with one of the Agile principles: working software is the primary measure of progress [3]. By cutting a user story into parts as shown in the example, you are doing something as nonsensical as saying, "I've mowed half of the lawn, although I still don't know how big it is." How do you know then that it's a half and not a quarter?

Second, such tracking removes the incentive to slice user stories vertically into smaller pieces. There's no pressure to write a story you can truly complete in one sprint, because you are guaranteed to have a completed item at the end of each and every sprint.

Then, if you use a digital tool for tracking, it messes up your data. There's no easy way to calculate true lead time or true velocity when you keep adding entities for the same piece of work. The data on the items doesn't show true numbers, and if you want true numbers, you need to aggregate them from multiple sources.
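As an illustration, here is a small sketch, with hypothetical tickets and dates, of how cloning splits one piece of work across several entities and distorts lead time:

```python
from datetime import date

# One carried-over item: a single entity, so the true lead time falls
# out of its own timestamps.
story = {"name": "User Story X",
         "created": date(2023, 1, 2), "closed": date(2023, 2, 10)}
true_lead_time = (story["closed"] - story["created"]).days  # 39 days

# The same work tracked as clones: three entities, each with its own dates.
clones = [
    {"name": "User Story X Part 1",
     "created": date(2023, 1, 2), "closed": date(2023, 1, 13)},
    {"name": "User Story X Part 2",
     "created": date(2023, 1, 16), "closed": date(2023, 1, 27)},
    {"name": "User Story X Part 3",
     "created": date(2023, 1, 30), "closed": date(2023, 2, 10)},
]
per_ticket = [(c["closed"] - c["created"]).days for c in clones]
print(per_ticket)  # three short, healthy-looking lead times: fiction

# Recovering the true number requires knowing which tickets belong
# together, i.e. manually aggregating from multiple entities:
recovered = (clones[-1]["closed"] - clones[0]["created"]).days
print(recovered)
```

Each clone reports a short, flattering lead time; the real figure only reappears after you reconnect the clones by hand, which no tracker will do for you automatically.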

And lastly, continuously cloning items simply wastes your time on unnecessary activities. I've seen teams waste an entire sprint review just creating new clones in TFS for the next sprint.
Think Instead: Sprint is a Bucket
Moving the work item's token sets the right incentive and makes progress transparent.
Instead, think of a sprint as a bucket and of work items as stones. You can fit only so many stones in a bucket. Completing a work item means removing a stone from the bucket. At the end of a sprint you simply pour all the remaining stones from the old bucket into a new bucket. The same happens to the work items: they get carried over to the next sprint's backlog without change (and without re-estimation, as you may have guessed [4]).
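To illustrate the bucket model, here is a minimal sketch, using a hypothetical item structure rather than any particular tool's API, of closing a sprint by carrying incomplete items over unchanged:

```python
def close_sprint(current_sprint, next_sprint):
    """Pour incomplete items into the next bucket: same items, same
    estimates, no clones and no re-estimation."""
    for item in list(current_sprint["items"]):
        if not item["done"]:
            current_sprint["items"].remove(item)
            next_sprint["items"].append(item)  # the very same object

sprint_1 = {"name": "Sprint 1", "items": [
    {"title": "Story A", "points": 3, "done": True},
    {"title": "Story B", "points": 5, "done": False},
]}
sprint_2 = {"name": "Sprint 2", "items": []}

close_sprint(sprint_1, sprint_2)
# "Story B" now lives in Sprint 2 with its original estimate intact,
# while Sprint 1 keeps only the work that actually got done.
```

The key design choice is that the item is moved, not copied: its identity, history, and estimate travel with it, so lead time and velocity remain computable from a single entity.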
References
[1] Agile Alliance. (n.d.). Timebox. [online] Available at: https://www.agilealliance.org/glossary/timebox/.
[2] Agile Business Consortium. (2014). The DSDM Agile Project Framework (2014 Onwards). Philosophy and Fundamentals. [online] Available at: https://www.agilebusiness.org/content/philosophy-and-fundamentals.
[3] Agile Manifesto. (2001). Principles behind the Agile Manifesto. [online] Available at: http://agilemanifesto.org/principles.html.
[4] Cohn, M. (2007). To Re-estimate or not; that is the question. [online] Mountain Goat Software. Available at: https://www.mountaingoatsoftware.com/blog/to-re-estimate-or-not-that-is-the-question.