GreatVines Development Methodology
Axioms of Software Development
The GreatVines development methodology follows from a few high-level axioms, draws from multiple software development and project management methodologies, and makes use of modern tools for collaboration.
The axioms:
- Requirements Change
- Priorities Change
- Delivering Useful, Working Software Frequently Is Our Primary Goal
- Delivering Updates to our Software is Cheap
The first two items, changing requirements and priorities, are related, and mean that we don’t expect to be able to predict the future more than a few months out. We generally have a good sense of which new features customers want to make use of next month, but not of those that will be most useful next year. Additionally, as we deliver new features to customers and get feedback from them, requirements for enhancements to those features become apparent.
Item 3, delivering useful, working software frequently, stands on its own. These first three items reflect what we think are the most important principles from the Agile Manifesto.
Item 4, that the delivery of updates is cheap, follows from the fact that we are a software-as-a-service company. The days of shrink-wrapped software are gone, and updates to both our web-based and mobile applications are delivered without end-user intervention. Because delivering updates is cheap and requirements change, we are able to deliver multiple iterations of new features, and adjust requirements as we discover exactly how customers are using our software and how they would like to use it.
Therefore, we aim to deliver useful, working software frequently under the assumptions that requirements and priorities change regularly, and that delivering updates to our software does not impose a large cost on either GreatVines or our customers.
Process
We generally organize releases into three- to four-week cycles, similar to Scrum sprints, but try to keep the next two to three releases planned as well, so we’re looking two to three months down the road. As in Scrum and Kanban, we start from a backlog of features that we would like to implement. We bring together team members from development, support, implementation, and sales when prioritizing the tasks for future releases.
Each release typically contains a combination of small tasks, which may already describe technical requirements and planned implementation details, and larger tasks whose requirements need to be further elaborated.
In preparing the bigger tasks for an upcoming release, we work to ensure the requirements are clearly defined so development can proceed. Features that require a new user interface or significant changes to an existing interface are mocked up. There are often a few iterations of mockups, as questions are raised and addressed and the requirements and mockups are updated.
Depending on the feature, we occasionally share the mockups with existing or potential customers, and solicit their input before starting development, but we generally prefer to implement a working interface, then adapt it as we get feedback from real use.
As the requirements are fleshed out, we break them down into development tasks, referencing the mockups and relevant requirements. Each task is estimated by the developer who will be responsible for writing the code.
The tasks within a release are prioritized so that, if there is any slippage, the highest-priority tasks are completed first. The assignment of prioritized tasks to developers ensures that each developer knows which task they should be working on at any time, and helps collaboration by letting all members of the team know who is working on what and how the priorities are defined. As in Kanban, it also limits the amount of work in progress.
During development, we try to ensure the quality of the software through developer-written tests and peer review. For both our web-based application, which runs on the Force.com platform and is written in Apex, and our mobile app, written in JavaScript, we make extensive use of unit tests. Although we don’t follow a test-driven development process per se, we try to take a test-first approach with bugs; for new features, we typically write tests either in parallel with the implementation code or subsequent to it.
We have found that testable code is better code. If it’s hard to test, it’s probably poorly designed, and needs to be decomposed.
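As a concrete illustration, here is a minimal sketch of the kind of Jasmine unit test we write on the JavaScript side; the totalCases function and the depletion data are hypothetical and are defined inline so the spec is self-contained.

```javascript
// Hypothetical function under test, defined inline so this sketch stands alone.
function totalCases(depletions) {
  return depletions.reduce(function(sum, d) { return sum + d.cases; }, 0);
}

describe('totalCases', function() {
  it('sums case quantities across depletion records', function() {
    var depletions = [
      { account: 'Acme Liquors', cases: 3 },
      { account: 'Main St Wines', cases: 2 }
    ];
    expect(totalCases(depletions)).toEqual(5);
  });

  it('returns zero when there are no depletions', function() {
    // For a reported bug, a spec like this is written first, then the fix makes it pass.
    expect(totalCases([])).toEqual(0);
  });
});
```

Writing the failing spec first for a reported bug also leaves us with a regression test once the fix lands.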
During peer review, both the tests and the implementation code are reviewed. We look for areas in which the design of the software can be improved, and we ensure that the code follows our internal coding conventions to ease future maintenance.
With our mobile app, we have started writing automated functional tests as well as unit tests. The goal is to have the tests fully specify the functionality of the application.
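To give a flavor of these functional tests, here is a minimal sketch in the style of jasmine-jquery, using a DOM fixture; the renderAccountList function, markup, and selectors are hypothetical stand-ins rather than code from our app.

```javascript
// Hypothetical rendering function, included so the sketch is self-contained.
function renderAccountList($container, accounts) {
  accounts.forEach(function(account) {
    $container.append($('<li class="account"></li>').text(account.name));
  });
}

describe('account list rendering', function() {
  beforeEach(function() {
    // setFixtures comes from jasmine-jquery; it injects markup for the spec to work against.
    setFixtures('<ul id="account-list"></ul>');
  });

  it('renders one list item per account', function() {
    renderAccountList($('#account-list'), [
      { name: 'Acme Liquors' },
      { name: 'Main St Wines' }
    ]);
    expect($('#account-list .account').length).toEqual(2);
    expect($('#account-list .account').first()).toHaveText('Acme Liquors');
  });
});
```

Tests at this level exercise the interface the way a user sees it, which is what allows them to serve as a specification of the application’s behavior.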
When a new version has been released, our release notes are updated.
Tools
There are a couple of important tools we use to support our development process. We use LiquidPlanner as our project management tool. Each release is organized as a package. We have our web-based application and mobile applications organized as separate projects, broken down into broad features using folders within the projects. We pull tasks from both projects into a release package.
Mockups for new interfaces that will be built as part of the release are done at the wireframe level, similar to what one would produce in Balsamiq Mockups. (Our mockups are actually often done as Google Drawings.)
The use of ranged estimates in LiquidPlanner allows us to organize releases with a fair amount of confidence in being able to hit delivery dates. Estimating software development tasks is a continuing challenge, but we find the confidence-interval-based approach superior to the point estimates often used in project management tools and the story-point estimates used in Scrum. In the backlog, we often put wide estimates on broadly defined tasks, when the value of the information provided by more granular requirements and estimates is not yet clear. When organizing a release, we break down big tasks into smaller ones, generally around one to six hours. Smaller tasks are easier to estimate accurately, and estimates improve with practice.
LiquidPlanner also serves as a collaboration tool, minimizing project management overhead. As discussed above, the priority of tasks is unambiguous. We keep estimates of remaining work on tasks up to date, and LiquidPlanner automatically updates the schedule. Discussion about task details happens within LiquidPlanner, so if there’s a change to or clarification of requirements, there’s one place to look. (When real-time discussion is required, we generally use Google Hangouts, then record the result of the conversation in LiquidPlanner.) Using LiquidPlanner mostly eliminates the need for “what’s the status?” discussions.
Within LiquidPlanner, we also track the status of code reviews and whether the code for each task has been merged.
We use git and GitHub for version control. Our use of GitHub follows how most open source projects are organized. We have a GreatVines organization, which contains the primary repo, from which we package our software. Each developer has a fork of the repo. Developers create feature branches for individual LiquidPlanner tasks and open a pull request against the greatvines repo when the code is ready to be reviewed. The pull request is noted in LiquidPlanner, and the task is moved to a ready-to-review package, which puts the task on hold (not scheduled for further development).
Peer review is done within GitHub, using inline comments on the open pull request. If further work is needed, the task is moved out of the ready-to-review folder in LiquidPlanner so it’s scheduled for additional work. When the pull request is merged, the task is marked done.
Future Improvements
There are a couple of areas in which we are planning improvements to our development process around testing. We are using Jasmine and jasmine-jquery to do some functional testing of our mobile app, but that testing is not exhaustive, and we don’t have anything similar in place for our Visualforce interfaces. Therefore, we augment our automated testing with manual testing. We would like to add more robust automated functional/acceptance testing, perhaps using a browser automation tool like Selenium or CasperJS.
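To sketch what that might look like with CasperJS, here is a minimal acceptance test; the URL, credentials, and selectors are placeholders rather than our actual application.

```javascript
// CasperJS acceptance-test sketch. The URL, form fields, and selectors below are
// placeholders for illustration; they do not refer to the actual GreatVines app.
casper.test.begin('user can log in and see the dashboard', 2, function suite(test) {
  casper.start('https://app.example.com/login', function() {
    // Fill and submit the (hypothetical) login form.
    this.fill('form#login', {
      username: 'demo@example.com',
      password: 'not-a-real-password'
    }, true);
  });

  casper.waitForSelector('#dashboard', function() {
    test.assertExists('#dashboard', 'dashboard container is present after login');
    test.assertSelectorHasText('h1', 'Dashboard', 'page heading is rendered');
  });

  casper.run(function() {
    test.done();
  });
});
```

A script like this is run with the casperjs test command, which also makes it a natural candidate for continuous integration.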
As we further automate testing, we also plan to introduce continuous integration. Travis CI looks promising here, with its tight GitHub integration and its extensive use among open source projects.
There are a couple of areas in which we might experiment with changes to our process in the future. One is in choosing which features should go into each release. We currently use a consensus-building approach in which we informally consider the value of possible enhancements to our customers. I’d like to investigate whether there might be gains to be had from more formal Decision Analysis practices.
We might also experiment with Scrum-style user stories for documenting requirements. For the most part, our lack of a formal requirements documentation structure has not been a problem, and we are able to turn requirements into technical designs easily. But in cases where requirements are very broadly defined, the “As a user, I want” structure may prove valuable. Using a standard structure may also ease onboarding of new employees and coordination with any contract or outsourced developers with whom we work.