DevOps QA: The Many Hats of a Quality Assurance Engineer
Part 2 in a series on the different roles I end up carrying out as a Quality Assurance Engineer. This segment: DevOps QA. What it means, how I got into it, and which practices have proven successful.
Note: This is the second post in a series about the different roles I end up carrying out as a Quality Assurance Engineer. You can check out the first post here, where I talk about wearing my Tester hat!
So, what does my DevOps role look like? Let's start by defining it! There are plenty of lengthier explanations out there, but this is my TL;DR version: DevOps is a set of practices and processes that promote collaboration between roles in order to create sustainable ways of building, deploying, and maintaining code. (Small point here: it's technically not a role, even though the term is commonly used that way, e.g. DevOps Engineer.)
I tried on the DevOps hat not too long after being hired as a QA Engineer. Getting our projects and teams set up with Continuous Integration / Continuous Deployment (CI / CD) was a big priority for Dylan, our Director of Engineering. Dylan took a few days "off" and spent the time heads-down setting up a Bamboo server (Bamboo is Atlassian's CI / CD product). Once the setup was complete, we had a swarm day where several of us got together and configured all of our current repos to build and deploy via Bamboo.
In addition to being part of the initial setup, I created documentation for setting up and maintaining our project plans in Bamboo. The UI and process can be somewhat unintuitive, and the documentation has allowed other people to set up new projects with relative ease. I was also part of the team that upgraded Bamboo to a new major release (including the long-awaited configuration-as-code feature!). With CI / CD in place, every commit gets a build check, so issues can be caught well before the merge or deploy step. It also makes it easier to troubleshoot when something does go wrong during a deploy, and to roll back a failed deploy if needed.
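One nice side effect of having everything run through Bamboo is that build results become scriptable. As an illustration (not part of our actual setup), here's a minimal Python sketch that checks a plan's most recent build via Bamboo's REST API - the server URL and plan key are placeholders, and the endpoint path and JSON field names are worth verifying against your own Bamboo version:

```python
#!/usr/bin/env python3
"""Query the latest Bamboo build result for a plan over the REST API.

Sketch only: the endpoint shape and JSON field names match Bamboo's
documented REST API as I understand it, but versions differ -- verify
against your own server. The URL and plan key below are made up.
"""
import sys

import requests

BAMBOO_URL = "https://bamboo.example.com"  # hypothetical server
PLAN_KEY = "PROJ-PLAN"                     # hypothetical project-plan key


def latest_build_state(base_url: str, plan_key: str, user: str, password: str) -> str:
    """Return the state of the most recent build, e.g. "Successful" or "Failed"."""
    resp = requests.get(
        f"{base_url}/rest/api/latest/result/{plan_key}/latest.json",
        auth=(user, password),  # newer Bamboo versions also accept access tokens
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("state", "Unknown")


if __name__ == "__main__":
    if len(sys.argv) != 3:
        sys.exit("usage: bamboo_check.py <user> <password>")
    state = latest_build_state(BAMBOO_URL, PLAN_KEY, sys.argv[1], sys.argv[2])
    print(f"{PLAN_KEY}: {state}")
    sys.exit(0 if state == "Successful" else 1)
```

A script like this can feed a dashboard or a chat notification, which beats clicking through the UI when you're trying to figure out whether a deploy is safe.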
Metal Toad implemented Agile development teams right about the time I started as a QA Engineer, and establishing a standard development workflow for teams was one of the first improvements I made. Before I started, there was no formal workflow for quality assurance steps. Testing was done on a per-project basis - usually concentrating on the high-profile (read: on fire) projects. Code review was done maybe once a week, using a tool that let us look back through our history of commits per project. It was tedious, and offered only a very shallow review opportunity. These reviews also never resulted in code improvements - we didn't regularly use pull requests (PRs) for merging in work, so these sessions were really discussion only. Within a sprint, there was no way to tell when something was ready for a QA check or for code review. We added pull requests to our process, but engineers communicated review readiness through Slack messages. It was disorganized, and it made trying to follow best practices through QA and code review stressful.
The solution was actually a pretty simple one - creating a workflow for our sprint boards in Jira (which is quite time-consuming, but not overly complex once you get used to how Jira works). I set up a column for each ticket status - To Do, In Progress, QA, Code Review, PO Review, Done. I led team discussions about how to use this workflow day-to-day, and we established a process around when to move tickets between columns and who was accountable for each column. We agreed that comments in Jira were our source of truth - conversations in person, on Slack, or over email had to be distilled onto tickets to ensure a consistent shared understanding. We also integrated GitHub into Jira - Git commits and PR titles include the Jira ticket number, so commits, PRs, and deploys are reflected on each ticket. This workflow allows every member of the team to easily see the status of work and understand their responsibilities in getting the work to Done.
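For that integration to pay off, the ticket number actually has to make it into every commit. We relied on convention, but if you want a guardrail, a commit-msg hook is one lightweight option. Here's a minimal sketch in Python - the "MT" project key is hypothetical, and this is an illustration rather than our exact tooling:

```python
#!/usr/bin/env python3
"""Git commit-msg hook: require a Jira issue key in every commit message.

Illustrative sketch, not our exact tooling. Save as .git/hooks/commit-msg
and mark it executable; git then runs it with the path to the message file.
"""
import re
import sys

# Jira issue keys look like PROJECT-123; the "MT-123" example below is made up.
TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")


def main() -> int:
    # Git passes the commit message file path as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        message = f.read()
    if TICKET_PATTERN.search(message):
        return 0  # key found; allow the commit
    sys.stderr.write(
        "Commit rejected: include a Jira issue key (e.g. MT-123) so the\n"
        "commit is linked to its ticket by the Jira / GitHub integration.\n"
    )
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

A hook that rejects a commit outright is opinionated; a gentler variant could just print a warning, but either way the convention stops depending on memory alone.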
More recently, I helped spearhead our move to centralized static code analysis. I worked with one of our mobile engineers to research various tools and build a decision matrix, weighing cost, run environment, inline comments, language support, and documentation. Our preference was for the analysis to run through GitHub, to cut down on the sweat and tears of installing a tool on the Bamboo server. We also strongly preferred inline comments - that is, analysis feedback directly on the PR, rather than having to sign into a separate UI. Language support differences were negligible, except for one tool that excluded CSS / SASS analysis. Documentation in tech is always hit or miss, but most of the tools we looked at had decent enough docs for our purposes. Of all the tools we researched, Codacy offered the best balance across those criteria. As of this post, we've got about 20 active projects set up with static code analysis, basic internal documentation for use and feedback, and another check in our workflow for adhering to best practices in our code.
Practices and processes that foster collaboration and communication. DevOps. Every workflow and tool I've implemented has been with that in mind: to reduce overhead and increase understanding; to make it as easy as possible to create and maintain our code; to establish transparent accountability and responsibility. These are things we will constantly review, reflect on, and potentially iterate on - but now we have a solid foundation to build from.
How did you start implementing DevOps practices at your company? What are some lessons you learned, or some things you want to try next? Share your experience in a comment!