Mastering The Metal Toad Project Life Cycle

We're in the middle of interviews for our open project manager position and we've been talking to a lot of great candidates, all with diverse project process backgrounds at various companies. Not to toot our own horn too much, but one thing that I've realized as a result is just how great our project life cycle model is! We've made great strides in the last several years, and at this point we have it dialed-in for the type of projects that come through our door.

Without further ado, in the spirit of open-sourcing not only our software contributions but also our processes, here's what we do and why it works:

A Wild Project Lead Appears!

When a sales lead is vetted, our strategy team wrangles as many details from the potential client as possible, and also suggests a big range of new ideas to consider. This helps us figure out what on earth it is they want us to do and what problems technology can actually solve for them. Sometimes it's crystal clear documentation, feature lists, and designs or wireframes. Other times it's a 5 item bullet list scribbled on a napkin. Regardless, when our sales team says it's time to estimate, we jump to it!

Our production team tries to reduce the sales workload on developers as much as possible by turning the sales info into a line-item spreadsheet with relevant details to hand off to a developer. This leaves our developers to focus on the big unknowns and to research specific solutions, instead of counting templates or re-estimating things like server setup that happen on every project. To help with accuracy, we even have an à la carte menu of regularly estimated items that we can predict accurately, especially for Drupal 7 projects. Often an estimate spreadsheet reaches our estimating developers needing only some thought, an API review, or a bit of test code.
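To make that hand-off a little more concrete, here's a minimal sketch (in Python, with invented item names and hour figures, not our actual menu or rates) of the kind of line-item structure the estimate spreadsheet captures. The low-high ranges are explained below.

    # Hypothetical sketch of a line-item estimate; names and numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class LineItem:
        name: str
        low_hours: float      # absolute best case
        high_hours: float     # if everything blows up along the way
        notes: str = ""       # caveats, risks, and unknowns for the sales team

    # Regularly estimated "a la carte" items we can predict reliably
    a_la_carte = [
        LineItem("Server setup and deployment config", 4, 8, "happens on every project"),
        LineItem("Base Drupal 7 install and contrib modules", 6, 10),
    ]

    # The big unknowns, left for a developer to research
    custom_items = [
        LineItem("Third-party API integration", 16, 40, "needs API review / test code"),
        LineItem("Custom page templates", 12, 24),
    ]

    all_items = a_la_carte + custom_items
    low = sum(item.low_hours for item in all_items)
    high = sum(item.high_hours for item in all_items)
    print(f"Estimate range: {low:.0f}-{high:.0f} hours")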

Completed estimates make their way back to the sales team, along with any caveats, risks, and unknowns. Our developers almost always go with ranged low-high estimates, with low being the absolute best case and high being what happens if everything blows up along the way. As a result, it's on the sales team to understand the overall risk of the project and determine which estimates to go with to close the sale. We send our proposals to clients with the detailed estimate spreadsheet so they can understand the in-depth look we take at their project even before we've sold the work. We also provide a couple of important caveats to clients:

  • Hey awesome potential client, the estimate and proposal are based on our best understanding of your needs and the project scope. The more info you can provide us pre-sale, the better we'll be able to estimate and avoid surprising you down the road. (I could write an entire post on this one… in fact, I probably will!)
  • Set aside a 10% change order budget on top of what you're committing to by signing the proposal. Yes, we're aware that you know exactly what you want, but you may not have managed to get all of it into our brains via text in the RFP yet, so it's completely possible that we missed something.
  • Oh, and that change order budget? It's also for the inevitable things you'll want to add along the way. Trust us, it's going to happen. We're good at managing scope for everyone's benefit, so expect us to push back with a "yes, we can do that, but it's not part of the project scope… yet" message. However, if you have some funds set aside, when you say "We want to add animated monkeys swinging from the site header and will use some change order budget to do so" we'll say "Hell yes monkeys!" and make it happen.

We often come in at the higher end of competitive bids for two reasons: 1) We provide a superior level of customer service and project management, and 2) we take the time to estimate down to minute details on a project and capture everything we possibly can in the initial estimate. We've lost more than one sale after an initial presentation and estimate, only to have the client come back to us later for help saving a project that went off the rails with a "cheaper" shop.

Shut Up and Take My Money!

We closed a sale. Woo hoo! Now comes more fun stuff. Usually things kick off with some discovery, wireframes, designs, and technical architecture planning. We like to make sure we're involved in the design process when possible, because while designers love to shoot for the moon, they're not always particularly scope-conscious or may not understand the complexities of development required for their designs. We can usually suggest simple design changes to save clients buckets of money (or in some cases, just rein things in to match the scope) without compromising the overall design aesthetic.

While designers are designing, we're spec writing. We produce a very robust tech spec that essentially says a) what we're going to build, b) how it's going to function, and c) the basics of how we're going to build it. This can't be completed until designs are final, because the designs inform it to a great degree. We don't touch development beyond perhaps a base site install until we have client approval on the tech spec; it's always cheaper to redo text in a Word document than it is to redo code.

Then comes a critical step - the project plan! Without this document, we're powerless. It's the single-most enabling document a Metal Toad project manager has in their arsenal. The PM works with the developer(s) assigned to the project to re-estimate the entire project now knowing all the details, which is often much more than what was known when we sold the project. The new estimates help us confirm several things:

  • Are we still in scope compared to what was in the RFP and our proposal? If not, where did all these new features come from and how do we deal with them? Can we swap some lower-priority tasks out for future-phase development or fit them in at the end of the project if there's budget left and we can beat our estimates? Do we provide a contract addendum? If we're under the original estimate, sweet! Let's see where the project ends up and hopefully we can credit some time to the client for additional work.
  • Do we have the right development resources on the project? We need to make sure developer skills match the estimates, since much can change from frontend to backend during the creative phases. Or, if required skillsets changed since the proposal, we can make a decision internally to eat the overage or reassign the project to a different resource.
  • Did we screw up anything in the initial sales estimation process? If so, what's the solution?

Once the project plan is reconciled, we're good to go. Project managers create tickets relating to each specific task for development, tie those tickets to the project plan estimates, and track numbers like hawks throughout the course of development to project the potential over/under on specific tasks and on the project as a whole.
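If you're curious what "tracking numbers like hawks" can look like in practice, here's a hypothetical sketch of projecting the over/under per task and for the project as a whole. The task names, hours, and naive straight-line projection are assumptions for illustration, not our actual tooling:

    # Tickets tied to project-plan estimates, with a simple burn projection per task.
    tasks = [
        # (task, planned_hours, logged_hours, fraction_complete)
        ("Content types & fields", 24, 20, 0.9),
        ("Homepage theming",       32, 30, 0.6),
        ("API integration",        40, 10, 0.3),
    ]

    print(f"{'Task':<25}{'Plan':>6}{'Proj.':>8}{'Over/Under':>12}")
    total_plan = total_projected = 0.0
    for name, plan, logged, done in tasks:
        projected = logged / done if done else float(plan)  # straight-line projection
        total_plan += plan
        total_projected += projected
        print(f"{name:<25}{plan:>6.0f}{projected:>8.1f}{plan - projected:>12.1f}")

    print(f"{'Project total':<25}{total_plan:>6.0f}{total_projected:>8.1f}"
          f"{total_plan - total_projected:>12.1f}")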

That Development Stuff

Then we do the actual development. There's some voodoo magic that happens in our developer pit and a website is born. Bam! Easy. We turn around projects fast. To give you a reference point, we are somewhere between a snake and a mongoose… and a panther.

Q to the A

"Ewwww… quality assurance is kind of boring and tedious!" WRONG. It's incredibly boring and tedious. But it's crucial. We've improved a lot in this arena process-wise, and there's always further to go with improvements when it comes to QA process. We have a thorough process and testing plan that results in three rounds of multi-dimensional QA.

The test plan dimensions are:

  • Browsers/Devices - We test across all supported browsers and devices, and we test as many of the unsupported ones as possible too! We don't want to be caught with our pants down when a client's CEO looks at a site in IE6 and declares it totally broken. Yes, it's totally broken, but we want to know that it's broken before they do.
  • Pages/Templates/Layouts - We test all unique pages/layouts on the site to make sure they are functioning properly and are styled correctly. We also check default styles for the site so that future pages created will also look as expected.
  • Content and Images - We test all pages and content types for image dimension enforcements by uploading huge images, small images, wide images, narrow images, zombie images, etc. We test content in the same way - short articles, long articles, list pages with no content, list pages with tons of content, site blocks with no content, site blocks with lots of content, etc.
  • Code - We do code review to check for proper coding practices and code integrity. Senior devs review junior dev code. Junior devs review senior code. It's all good as long as we're improving coding skills at an organizational level.

The QA Rounds are:

  • Alpha - The development is feature-complete, but untested. Time for alpha QA, usually by the developers who worked on the project. They get to hacking their own work apart, and once they complete self-QA, it's on to the next round.
  • Beta - We have arrived at a truly completed website. The developers have pronounced it done (but not like dinner), so here's where we go about knocking them down a bit by poking it some more until it breaks. Usually the project managers are heavily involved in beta QA. We've been with the project from the start and hopefully know the client's needs as well as they do, or in some cases even better.
  • Client QA - After beta we've ideally resolved all major bugs and feel comfortable turning the site over to the client for review. Tight project schedules often result in overlapping beta and client QA, but we can always dream of handing off a perfect site. Clients will report "bugs" back to us, which are often a combination of legitimate bugs and feature requests for things outside the project scope. This is exacerbated when new stakeholders on the client side who haven't been a part of the project from the start suddenly want to have a say in things. The good news is that by now we know roughly where the final project budget will stand, and we're already talking phase 2 and what comes after launch with the client, so it makes for easy creation of a phase 2 feature list.

Considering that QA can be never-ending and there is always another bug to be found if you look hard enough, we don't cross-test every test plan dimension over every round of QA. Instead, our goal is to get through all pages, content, and images in all browsers by the time we get through all rounds of QA. If we catch regressions along the way, we go back and retest for those specific regressions across browsers/devices. Just like our development, QA is all about progressive improvement. Once a dropdown menu works in one browser we'll test it in the rest, but it'll never be an efficient use of time to confirm something is broken in 18 different browsers.
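One way to picture that cross-testing goal is as a coverage matrix over the dimensions above. This is a simplified, hypothetical sketch (invented page and browser names, not our real test plan):

    # Each (page, browser) cell gets checked off in some QA round; a regression
    # sends its page back across all browsers for a targeted retest.
    from itertools import product

    pages = ["Homepage", "Article", "Landing page", "Search results"]
    browsers = ["Chrome", "Firefox", "Safari", "IE9"]

    coverage = {cell: None for cell in product(pages, browsers)}  # round that covered it

    def mark_tested(page, browser, qa_round):
        coverage[(page, browser)] = qa_round

    def untested():
        """Cells still needing a pass before the QA rounds are done."""
        return [cell for cell, qa_round in coverage.items() if qa_round is None]

    def retest_regression(page):
        """A regression on one page sends that page back across all browsers."""
        for browser in browsers:
            coverage[(page, browser)] = None

    mark_tested("Homepage", "Chrome", "alpha")
    mark_tested("Homepage", "IE9", "beta")
    retest_regression("Homepage")
    print(f"{len(untested())} of {len(coverage)} page/browser cells still to test")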

Go Time

It's time to launch. Deploy! Deploy! Deploy! We push the big red launch button, and either the site is live or things have blown up. No sweat!

So How Bad Did We Eff Up? (AKA: What Did We Learn?)

Well, we got a site launched, so it can't have been THAT bad. Regardless, we schedule a retrospective for every project so that hopefully we can learn something from it. Common retrospective discussion covers:

  • What went well? Is it part of our process or a result of our process? If not, should we institutionalize it?
  • What went poorly? Whose fault is it? Lots of finger pointing and yelling… OR NOT. Everyone at Metal Toad is skilled but humble, and owning up to mistakes is something that's well-received, as long as you learned something and are able to improve in the future as a result. We want to learn where our processes fell short, broke down, or didn't exist in the first place.
  • Is the client happy? Why/why not? What can we do about that?
  • How does the budget look? Did we manage to make money?
  • How accurate were our estimates (both sales and project plan)? Which specific tasks were mis-estimated and why? How can we better estimate similar work in the future?

Once the project is complete, we also grade ourselves (both project managers and developers) individually across a variety of metrics including client satisfaction, budget, scope & timeline management, responsiveness to clients and effective communication, project quality, code quality, estimation, and more. We self-grade on those metrics and then other internal project stakeholders also grade us. The idea is that the two grades should roughly align, and if they don't, we have a discussion about where perceptions differ and why.
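As a rough illustration of how that comparison might work, here's a hedged sketch with invented metrics, scores, and threshold; the real grading conversation is more nuanced than a single cutoff:

    # Flag metrics where self-grades and stakeholder grades diverge enough to discuss.
    self_grades = {"client satisfaction": 4, "budget": 3, "code quality": 5, "estimation": 4}
    peer_grades = {"client satisfaction": 4, "budget": 2, "code quality": 4, "estimation": 2}

    THRESHOLD = 2  # how far apart grades can be before we sit down and talk about it

    for metric, mine in self_grades.items():
        theirs = peer_grades[metric]
        if abs(mine - theirs) >= THRESHOLD:
            print(f"Discuss '{metric}': self-grade {mine} vs. stakeholder grade {theirs}")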

This all comes together to form a complete picture of where we can improve on both a personal and company level. It also results in kudos to the team for a job well done and areas where individuals excelled. Then it's on to the next project, which should flow even more seamlessly than the last, because we're always trying to get better!

Impressive, Right?

So there you go. Nothing totally revolutionary, but the subtle details and a great deal of team experience are the crucial parts. 60 percent of the time it works every time. Your mileage may vary, especially if you don't add our Metal Toad secret sauce to the equation. Now go give it a try!
