Categorising the Periodic Table of Testing to help scope projects

Why categorise?

There are several reasons, some more personal than others.  Entering my 52nd year of visiting Planet Earth, my mind needs all the help it can get.  I’m not saying I’m over the hill, but hills can be steep!  I’ve noticed the odd occasion where words escape me, so best not to take chances on forgetting vital information.

The Periodic Table of Testing has elements.  But not all elements are created equal!  An element can be useful for one project yet inappropriate for another, for lots of different reasons, so it won’t always come into consideration when thinking about project scope or test strategies, though it could depending on the project and context.  Things like accessibility, test data and access, however, are areas that must be primary considerations whenever you start something new.  There are others that should be considered, and that’s why I wanted to categorise the table: both to help start those conversations and to act as a bit of a guide without dictating.

I’ve always believed that mnemonics and heuristics are excellent helpers for tasks in general, and more so for repetitive ones.  Things we do over and over again have an annoying tendency to become brittle: we forget one little step or thing, and the omission quickly becomes ingrained as if we always did it that way.  Having useful reminders around is helpful, especially ones that look cool when put up on a wall.

What’s in each category?

So below is the first draft of the Scoping Categories, which I have classed as considerations.  It’s important to say that everything done on both the table and these categories is my opinion from my context.  I don’t expect everyone, or indeed anyone, to share my opinions so, as with the table itself, you may look at this and think, ‘that’s so wrong’, and you would be right, in your context!  While I feel this can be a useful reminder and tool, I believe its power is as a guideline and a reminder that helps start conversations by prompting questions such as:

  • What does Security mean to us in this situation?
  • How can we achieve living documentation for our project?
  • Are our risks defined enough to prioritise testing?
  • How can we use automation for testing?
  • Do Digital technologies apply?
  • How do I start testing for accessibility?

Scoping Categories for the Periodic Table of Testing

I’ve used a Must, Should, Could category definition in an attempt to reinforce the thought that these are considerations, not instructions.  While I have considered each item carefully, and where I think it fits, I don’t think it would be useful to go through them all and explain why they are where they are.  For one thing you would be bored to death, and this blog piece would be far too long!  But I do think I should try to explain a little why I believe you must consider those under the Must category.  I’ll (try to) make it brief, I promise.

Accessibility – Number one consideration from design principles to user base and the most overlooked.  Of course it’s there!

End to end – Focused testing is vital but if you don’t test end to end (whatever that means to your project) you can’t be confident everything ties together.

Discovery – It’s important in any plan or strategy to remember you will discover things that change your plans (put in the John Lennon quote, life is what happens when you’re busy making other plans, for talk).

Exploring – I extolled the virtues of exploratory testing in this post so won’t go on again but this also covers general exploring of the system you will be testing.  Things that were not apparent from your up front information will be discovered. 

User Access / Permissions / Roles – If you think about how users will be managed you can’t help but think of the different states for your testing.  This will also give you an early indication of whether Personas will be useful.

Security – Linked to the above, even purely internal applications have potential security issues that need considering.

Analytics – From usage statistics to usability analysis and many others, understanding how your system is being used and how it is working will help direct your testing focus and efforts.

Living Documentation – Let’s face it, on one hand you have to document, on the other automated tests can be really valuable.  Done well, living documentation gives you the best of both: lightweight but always up-to-date documentation, and focused automated tests on the most valuable functions your system performs.  Win-win!

Unit Tests – Almost goes without saying.

Testability – Of course!  Even in the most technical builds we must ask, how will we test this?  Harnesses, scripts to reset data, fake services: it doesn’t matter what you use or how you apply it, it has to be testable or you have no idea what you’re releasing.  (in the talk point to Ash’s blog and other good sources)

Test Data – In context (yes, I said it again) possibly one of the most vital ingredients for testing.  Whether it’s unusual, short or long names (link to a cool article), different statuses or variants in information, test data can be your best friend.  Just remember to scramble it if it’s a copy of live!

Monitors / Logs / Audit trails – We all love a good audit… don’t we?  Having logs and audit trails makes audits easy, but having access to what your system is up to can also help identify the strangest of behaviours.  Logs have so much value: find them, understand them and make sure your team gives them the priority (in context!) they need.

Risk Based Testing – Aligning test plans to risk is a way of ensuring your testing is adding the most value to the project at any given time.
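To make the Living Documentation point above a little more concrete, here is a throwaway sketch in Python.  Everything in it, the interest function, the figures and the generator, is invented for illustration; the idea is simply that test docstrings written as behaviour statements can be harvested into an always up-to-date document.

```python
import inspect
import sys

# Illustrative system under test: a hypothetical interest calculator.
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Return one month's simple interest on a balance."""
    return round(balance * annual_rate / 12, 2)

# Focused automated tests on the most valuable behaviour.
# Each docstring is written as a statement of behaviour, not test mechanics.
def test_interest_accrues_monthly():
    """A 100,000 balance at 3% accrues 250.00 interest per month."""
    assert monthly_interest(100_000, 0.03) == 250.00

def test_zero_rate_accrues_nothing():
    """A 0% rate accrues no interest, whatever the balance."""
    assert monthly_interest(100_000, 0.0) == 0.0

def living_documentation(module) -> str:
    """Collect test docstrings into a markdown page: the 'living' document."""
    lines = ["# System behaviour (generated from tests)", ""]
    for name, fn in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("test_") and fn.__doc__:
            fn()  # run the test; a failing test means the documentation is stale
            lines.append(f"- {fn.__doc__.strip()}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(living_documentation(sys.modules[__name__]))
```

Regenerate the page on every build and the documentation can never quietly drift away from what the system actually does.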

I excluded personal elements and methodologies on the basis that those relate to ways of working rather than tools to use in plans or strategies.  I’m happy to hear a different opinion though.  For those who are eagle eyed, yes, the ‘first’ draft is version 1.1; I made a number of formatting changes and tweaks while writing up, and the numbering is for my own tracking.

Note: The categorisation above is based on the latest version of the table, 1.7, which is below and on the Current Periodic Table of Testing and Archive.

Periodic Table of Testing version 1.7

Introducing a 7th Thinking Hat

While I make absolutely no claims to be anywhere near the level of a genius like Edward de Bono, I’ve found adding my own ‘hat’ to the Six Thinking Hats (http://www.debonogroup.com/six_thinking_hats.php) useful in making the technique more relatable to modern applications.

Introducing, the 'Hard Hat' 

De Bono’s 6 hats is something I return to regularly. Some time ago I added a 7th ‘hat’ that I feel brings an element of the modern digital/mobile world to it.

A purple hard hat to represent where the work/workloads are.  For example: How are memory and CPU usage affected?  What puts the most work on the system and needs to be monitored?  I believe this helps me think about things in a more digital/SCAMI technologies way when using this technique.

I did try researching if anyone else had added their own hats and there are some variations with a gold hat for customers and a grey hat for consequences/cycles but nothing I've found like my purple one that pulls the digital/mobile world into the technique.

I've always found the mind map below useful when applying six hats and have added the purple hat. I hope Paul doesn't mind! I also hope someone else might find this useful. If you do please let me know. Thanks.

Session Based Testing, Exploratory Testing and my Questions technique

SB – Session Based Testing - Technique Element
Sub section – Approaches



Since Jonathan and James Bach (satisfice.com/sbtm) documented their Session Based Test Management approach, combining exploratory testing, risk management and ‘management’ oversight, a lot has been written about it.  Hopefully by now most people know the benefits of exploratory testing and some of the various methods of recording that activity.  In this article I hope to share a brief overview so we are on the same page, a list of the main benefits and some minor drawbacks, and finally the question technique I apply in my day to day work.

Overview:
The Session Based Testing approach was developed for a project to allow their test team to ‘organise their work without obstructing the flexibility and serendipity that makes exploratory testing useful’.  They needed a way to keep track of what was happening, so they could in turn report back to a ‘demanding client’, while ensuring the time spent created the biggest return on investment. 

Essentially this is structured exploratory testing to help organise thoughts, capture questions and insights, and allow rapid feedback.  Key elements of this approach include:
  • Each session is chartered (associated with a specific mission)
  • Uninterrupted (as much as is possible)
  • A template is used to record the details of the mission and findings
  • Reviewable (a ‘report’ is produced to document findings and questions, and the tester is ‘debriefed’)
  • Time-boxed (with flexibility, but generally no more than 2 hours)


In my opinion, there are a number of flexible points in the approach, and tips worth being aware of, especially if you’re doing this for the first time:
  • I don’t think it matters if you call it a charter, mission or focus, as long as you generally stick to your subject; picking one term might help with consistency when sharing.
  • Interruptions should be avoided if possible.  On occasion I’ve shut down Outlook and put my headphones on for these types of sessions.  At one time I even had a red flag on my desk which indicated do not disturb unless it was urgent. 
  • There are templates available or you can create your own like I did.  Again it’s useful for consistency to stick with one you’re happy with.
  • Reviewable.  A lot of focus is on ‘management’ reviews but team, peer or even self is fine, as long as what you find generates actionable insights rather than getting filed away never to add value.
  • Time-boxed.  If you start small with something very specific that’s a good way to get a feel for this technique and learn to focus.  I can sometimes be like the dogs in ‘Up’ and be distracted by squirrels!  Learn to note where the squirrels are and why you need to look at them later.



Question technique and template:
I admit that I often use this as a mental reminder rather than something to populate, as my preference is to speak to a developer on my team immediately after a session to investigate or question.  (I don’t raise bugs, I describe behaviour; in writing this, that’s probably what my next post will be on.  I’ll add a link to it here when done.)  Only if this isn’t possible due to availability will I actually fill things in from the notes I have taken during sessions.  For me, this is a disposable document with a short shelf life, used to capture, discuss, resolve (or not) and, most importantly, discard.

I’ve reproduced the template in bullet form rather than embed a PDF or Word document; that way I hope it will be easier for you to take away and make your own.  When you get to the questions you might find, as I quite often do, that you remove some before you start as not applicable, or that some are still blank when you’ve finished.  It’s supposed to be flexible like that, but you should take a moment to understand why they are not populated or applicable to the session, as that may prompt some other thoughts.

The template:
  • The Basics: Date; App/function under test (brief description); any other useful information depending on your context
  • Any dependencies vital to the testing (connections, files, data, hardware etc. this helps make sure you have them before you start)
  • Any information that is useful, such as material/learnings from previous sessions, personas to use, environments, tools etc.
  • Test strategy (a consideration of techniques you might use as a flexible plan is often more useful than no plan, but don’t be afraid to improvise as that’s half the fun and discovery may make your plan obsolete quite quickly)
  • *Metrics (see rant at the end of this post)
  • The questions: (with a brief reasoning for them)

  • What do I expect? (even if it is something brand new I always have some expectations)
  • What do I assume? (sets a context that I can query as I go)
  • Are there any risks I should be aware of? (to execution, the system; helps anyone else reading have context)
  • What do I notice? (behaviour; usability)
  • What do I suspect? (things that I feel, not always based on facts, but that I don’t want to lose)
  • What am I puzzled by? (behaviour that doesn’t feel right)
  • What am I afraid of? (high priority concerns about the item under test)
  • What do I appreciate/like? (always good to have some positive feedback)
  • Debrief (originally between the tester and a manager; there’s a checklist of questions on satisfice.com/sbtm.  My version is more often a conversation with the developer about questions or queries, but it can also be with the product owner or a stakeholder depending on what I find/context.  I’m not saying don’t do this, rather do it only where it’s going to add value.)
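For anyone who prefers something concrete, here is one possible way to hold the template as a disposable record in Python.  This is purely a sketch of my own, with invented field and class names, not any official SBTM format.

```python
from dataclasses import dataclass, field

# A disposable session record mirroring the template above.
# Field names are illustrative, not part of any official SBTM format.
@dataclass
class SessionRecord:
    date: str
    charter: str                     # mission / focus for the session
    dependencies: list = field(default_factory=list)  # data, connections, hardware
    timebox_minutes: int = 120       # generally no more than 2 hours
    # The questions, left blank until (or unless) the session answers them.
    questions: dict = field(default_factory=lambda: {
        "What do I expect?": "",
        "What do I assume?": "",
        "Are there any risks I should be aware of?": "",
        "What do I notice?": "",
        "What do I suspect?": "",
        "What am I puzzled by?": "",
        "What am I afraid of?": "",
        "What do I appreciate/like?": "",
    })

    def unanswered(self) -> list:
        """Questions left blank, worth a moment's thought at debrief."""
        return [q for q, a in self.questions.items() if not a.strip()]

session = SessionRecord(date="2017-05-09", charter="Explore account summary screen")
session.questions["What do I notice?"] = "Balance formatting differs between tabs."
print(f"{len(session.unanswered())} of {len(session.questions)} questions unanswered")
```

The unanswered list is just a nudge for the debrief conversation; remember the whole record is meant to be thrown away once it has done its job.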

This post is getting a bit longer than I’d hoped but I feel it’s important to summarise the benefits and possible drawbacks of using this method so there’s a balanced view.

Pro: Allows control to be added to an ‘uncontrolled process’
Con: Can be harder to replicate findings as full details are not captured

Pro: Makes exploratory testing auditable
Con: As a ‘new’ technique it has to be learnt

Pro: Testing can start almost immediately
Con: Recording exploratory testing (rather than brief notes) can break focus / concentration if you're more worried about doing it

Pro: Makes exploratory testing measurable through metrics gathered
Con: Time is required to analyse and report on metrics

Pro: Flexible process that can be tailored to multiple situations
Con: Time is required to discuss/give feedback to potentially the ‘wrong’ person

Pro: Biggest and most important issues often found first

Pro: Can help explain what testers do to clients, stakeholders and the uninformed

Given all the above, if you have to justify exploratory testing (notwithstanding that you should be looking for a new job!), then using session based test management, either in its original form or some hybrid version, could be the convincer you’re looking for.  ‘Management’ will generally only see the Pro’s above, which cover a lot of the things ‘they’ will worry about.  But seriously, look for a new job!




*Metrics: I personally don’t think these are useful for virtually anything (oh, more controversy!), but if you absolutely have to report back to someone, for example a manager who knows little or nothing about what testing really is, here are some examples: time spent (start/end times) actually executing testing; time blocked; time recording findings; actionable insights; questions/queries; potential problems; bugs; obstacles; and screenshots or some other method of recording to help show and replicate any issues.

Using Personas and the Relationships Between PToT Elements


PS Personas – Technique Element  
Sub section – Approaches

There are lots of relationships we have to consider in testing.  In this post, I’ll briefly discuss those relationships and how the Periodic Table of Testing can be used to map them, then share a real-life example of how using the personas ‘thought technique’ led to using other elements on the table.

Any idea, technique or approach can only take you so far without some view of the things surrounding it.  Even a hermit (a person wanting no contact with others), no matter how isolated, has relationships that need to be considered, such as their surrounding environment.

Understanding relationships can often be instrumental in identifying appropriate scope that helps ensure we deliver quality in our products.  The Periodic Table of Testing is exactly the same.  A Technique Element can have a relationship to a Testing Element and in turn a Testing Element can lead to (have a relationship with) a Technical (or any other) Element.

Real example:
Below is an example of how a Technique Element led our team to a Testing Element that helped describe our relationship with our Customers.  By creating Customer Tours or Journeys we could mimic the Customers’ behaviour, particularly when using Personas to navigate the system in a particular way.  Those Tours and Journeys then lent themselves to a Technical implementation through automated tests and the creation of Living Documentation.


Working on a project to create a customer portal for accessing mortgage account information was a great opportunity to introduce personas.  I’d read a lot about personas, but the main takeaway for me was how they could be used to highlight key differentiators.  For our project, the key differentiator was the account’s status at the point of use.  Some of what I’ve read and seen recommends highly detailed and complicated persona outlines.  For me, a lot of that detail was superfluous and didn’t add any real value; for projects lasting years it could hold some worth, but it distracted from the main point.

Back to the project and our main differentiator.  Mortgage accounts have several status variations, including being up to date, in arrears, with an arrangement, in litigation, in possession and so on.  We used different personas to represent those different states.

With input from the team we even used the names of the personas to represent variations in surnames, to see how they would be displayed in the UI.  And so Sally Steadman, Adam Thompson-Pritchard and Olivier O’Connell, amongst others, were ‘born’.  While the personas had genders, ages and key personality traits, their development didn’t go much further as the status of their accounts was the key differentiator required.  Once we had the personas and a shared understanding of what each one meant, we expanded the idea to other elements.  As well as creating customer journeys for them and noting the different information and help items they would see, we wrote feature files for them that became both our automated testing and, in turn, our living documentation.

Thanks to our shared understanding we were able to create a ‘Preview’ version of the site backed by fake services.  This meant you could register and sign in as one of the personas and explore or complete user journeys just as the Customer would.  We also used these to execute our automated UI tests, giving us stable responses.  Cool stuff, I thought!
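To give a flavour of what those persona-driven journeys might look like in code, here is a hedged sketch in Python.  The persona names and account statuses come from this post, but the fake service, the help items and the checks are entirely invented for illustration.

```python
# Personas keyed by the differentiator that mattered: account status.
# The names are from the post; everything else below is illustrative.
PERSONAS = {
    "Sally Steadman": "up to date",
    "Adam Thompson-Pritchard": "in arrears",
    "Olivier O'Connell": "with an arrangement",
}

# Help items each status should surface, as a fake service might return them.
HELP_ITEMS = {
    "up to date": ["overpayment calculator"],
    "in arrears": ["money advice", "payment plan request"],
    "with an arrangement": ["arrangement summary"],
}

def fake_portal_sign_in(persona: str) -> dict:
    """Stand-in for the 'Preview' site's fake services: stable, canned responses."""
    status = PERSONAS[persona]
    return {"name": persona, "status": status, "help": HELP_ITEMS[status]}

def check_persona_journey(persona: str) -> None:
    """One journey per persona: sign in, confirm the status-specific content."""
    page = fake_portal_sign_in(persona)
    assert page["name"] == persona                  # surname rendered as entered
    assert page["help"] == HELP_ITEMS[page["status"]]

for persona in PERSONAS:
    check_persona_journey(persona)
print(f"{len(PERSONAS)} persona journeys checked")
```

Because the fake service always answers the same way, the same journeys can drive automated UI checks without flaky data getting in the way.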

However we conduct testing, and wherever our starting point, the relationships between different techniques and methods must be considered in our quest to investigate and add value to the best of our abilities.

References:
Generic Testing Personas: http://katrinatester.blogspot.co.uk/2015/01/generic-testing-personas.html (great example of minimal personas)

Things I learned at the North West Exploratory Workshop on Testing




Over the weekend of March 18/19 I attended NWEWT2 at the former Atlantic Tower Hotel, now the Mercure, in Liverpool.  Following on from its inaugural outing last year, this deep dive conference focused on the theme of ‘Growing Testers’ with two primary questions:

  • As a Tester, how do you grow to keep up with the current trends in testing and development?
  • As someone responsible for leading Testers, how do you help Testers grow?

Each attendee gave a talk on their thoughts, experience, models and ideas for growing testers.  The audience varied from those who regularly present at conferences to those with only a few years’ experience in software testing.  Following each talk it was ‘open season’.  The facilitator would note who had questions using a card system I hadn’t encountered before: green was a question following a talk, yellow a follow-up question during a ‘green’ thread, and red if you had to say something immediately.  Fortunately there was only one of those, and despite being taxing on our hard working facilitator it worked really well, not only ensuring conversations flowed with no interruptions but also making sure everyone’s points were heard.  It also encouraged us to go deep into subjects and challenge each other’s ideas, cordially of course.

My presentation focused on the creation and ongoing development of Test Xchange, our internal testing community, over the last year or so.  I offered our emergent model of a risk-based, participant-built backlog and regular agenda items of lightning talks, discussions and testing challenges.  Given I heard about this opportunity at fairly short notice, I must confess to a bit of a ‘cheat’ here, as most of the material came from a scheduled talk I am giving at Test Atelier in Leeds on the 9th of May.  So that was lucky!  (See related blog for more.)

My main goals, besides sharing the great work our testing community has done in knowledge sharing, were to gain an understanding of how others were approaching tester development and to bring back ideas we could use.  With coffee helping abate yawns from my 6am start driving from Yorkshire to Liverpool, ready for a 9am start, we began.  So, what were the things I learnt over the two days?

How does your garden grow?
There are so many ways to grow, gain knowledge and share information about testing.  The Testing Community is wide and diverse and there are many conferences, events and meet ups happening all round the country.  Online resources such as the Ministry of Testing are hubs of information not to mention the many talented individuals writing blogs.  Online magazines pass on knowledge and ideas while there are many excellent books on the subject.  YouTube is also a brilliant resource to watch the most ‘famous’ industry thought leaders share their views.  No matter your learning style there’s something for you out there.  All you have to do is find the time! 

I’m forever growing bubbles!
The Testing Community wants to help educators pass on testing skills.  We all live in bubbles.  Family bubbles, social bubbles, community bubbles etc.  At the moment there’s an education bubble around software development that doesn’t have much room for the Testing Profession.  Course literature and Computer Science curriculums have little reference to testing as a discipline.  There were some passionate opinions expressed about reaching out to schools and universities, offering the Testing Community’s services to pass on knowledge of the ‘real world’ and testing skills, philosophies and passions.  The four-hour tester (http://www.fourhourtester.net/) is a project focused on simple exercises that teach some of the skills and thought processes needed by testers.  I particularly like the ‘Mary had a little lamb’ heuristic.  Give it a try.  The weekend testing community (www.weekendtesting.com) recently undertook a similar activity to create a testing syllabus, also well worth a read.  The question now is, am I brave enough to stand up in front of impressionable younglings and promote our profession?  I’ll get back to you on that one… What is for sure is that more effort is needed to get out of our bubbles and into others.

How did I get here?
Very few people plan to be a tester.  Going round the table, and aptly demonstrated by ‘Bullseye or The Testing Wheel’ (presentation available on slideshare.net by Ash Winter), career models are perceived as linear but very rarely are.  Despite our best plans, be it through assignments, job changes or sideways moves, most people swirl around quite a bit on their journey.  My own path here has included roles such as logistics manager, auditor, director, business intelligence manager and test team lead, to name a few.  While I had done some version of testing in other roles, it was being asked to perform testing on a software system that piqued my interest.  When I found out such an interesting and fun activity was a real full-time job, I was hooked.  What I didn’t realise is that there are so many similar stories of people moving in varied directions on their road to becoming testers.  Maybe, like the saying about love, you don’t find testing, it finds you?

Mums have the best sayings!
You can’t stick your apples on other people’s trees!  Jit Gosai (test engineer at the BBC) talked to us about how he saw that Mark Zuckerberg had floated Facebook and made billions at 27.  Jit was 27.  Did that make him a failure?  Of course not, but he vowed to do more, learn more and share what he found with others.  Later he was frustrated that his sharing wasn’t landing how he’d like.  His mum told him about apples: you can influence, you can give knowledge, you can lead, but you can’t make your ideas be someone else’s.  That doesn’t mean you should stop trying to share with those around you, no matter how annoying they are!

So those are some, but far from all, of the things I learnt over an extremely enjoyable conference.  I’ve a decent-sized list of people, models and sites I want to look into; another decent list of ideas for discussions, challenges and things we can do at Test Xchange; even some blog ideas.  And maybe, if I’m brave enough and survive Test Atelier, some more talks too.

Leeds Testing Atelier May 2017

The 4th edition of the free test conference held in Leeds took place at Wharf Chambers on the 9th of May.  The website is here.  The two-track conference had speakers, workshops and panels and was well attended by both developers and testers.  Some of the conference was filmed, so hopefully some of the sessions will be available at some point.

Below is the schedule and notes on the sessions I attended and a little feedback gathered from discussions. 

Track 1 – Hipsters

09.30 – What not to do, a guided tour of unit testing – Colin Ameigh
Colin took us on a journey through unit tests, using PHP, as the base of the testing pyramid.  In his experience a lot of unit tests were rotten, not maintained, or not even unit tests at all.  He pointed to the absence of the single responsibility principle as the main culprit.  As a minimum, tests should run everywhere and in isolation.  Another insight was that where TDD was used, the third step, ‘refactor’, was often missed, meaning over time the tests themselves lost their value.

10.00 – Testing the waters – Rosie Dent-Brown
While I didn’t attend, there was a ‘buzz’ around Rosie’s use of Agile at home and her assigning ‘roles’ to those around her!  Sounds fascinating!

11.00 – Colleagues to Community – Ady Stokes
My talk on our journey building a test community included some of the test challenges we had done, sharing our experience to hopefully inspire others and encourage them to come and help us grow.  The title has a link to the deck on SlideShare, and here's the link to the talk on YouTube.

13.00 – TDD using Excel – Dave Turner

14.30 – Panel – Generalising Specialists
This was an interesting discussion based on both T-shaped and Pi ("π") shaped testers.  While the discussion covered many things, below I’ve tried to summarise the key points made:
  • Generalising skills can make you more valuable to a company
  • Generalisation shouldn’t be at the expense of your ‘deep’ skill
  • The role of ‘pure’ specialist is still valuable (E.g. Penetration; Performance; UX/UI; Accessibility)
  • ‘Pure’ specialist as a consultant or service was expressed as the most powerful use



Slightly away from the core topic, but I believe still valuable to share, was the discussion of the value a tester adds to a team or project when involved through the whole process, from ideas to delivery.  The statement ‘testers are the glue that binds the stages together’ was expressed.  I thought this was an interesting metaphor, along with ‘testers can also be the conscience of the team’, making me think about my role in a different way.

15.30 – Defend the Indefensible & PowerPoint Karaoke
This was an interesting idea to say the least.  An ‘indefensible’ statement was put up on the screen and the victim, sorry, volunteer had 30 seconds to defend it.  As an example, one of the statements was: ‘Testing is dead and pointless as everything valuable can be checked by automation and users!’

Track 2 – Nerds
09.30 – Docker as a tool for testers – Serena Wadsworth

10.00 – Power of pairing – Lee Grubb
Lee offered us his experience of the power of pairing.  He went through the traditional techniques of set-up and the Driver/Navigator roles.  After explaining the benefits through some of his experiences, he described some other types of pairing, including Strong Pairing, where the navigator explains their ideas and the driver interprets.  Understanding what you’re thinking so well that you can bring it to life through someone else struck me as a powerful tool.  Although there are lots of styles and combinations (he mentioned dev/dev, dev/tester and dev/DBA amongst others), his final piece of advice was not to limit yourself and to experiment with what works for you.

11.00 – Testing is DevOps – Alex

13.00 – Testing without Testing – Algirdas Rabikauskas, Kristina Valiune and Peter Ferguson
This workshop sought to show us some of the exercises they had done in their peer community.  The time was split into two challenges, the first being to identify an object through clues, or ‘understand requirements’.  This took the form of being given a single word and the premise that a ‘rock star’ had a requirement/rider for ‘something’.  You had three minutes to come up with questions and one minute with their agent, who could only answer yes or no.  Our word was ‘stick’, and after a couple of rounds where we established it was made of wood, 30 cm long, narrow and cylindrical, we finally reached drum stick.
The second challenge was spot the difference with a twist.  There were three duplicated images: a city bridge, a paragraph of text and a dice grid.  For two of those, the second image had been inverted or turned upside down.  We chose a method of peer review (code review), assessing an image individually then passing it on.  After the disappointing news that we had missed one difference, we mobbed the remaining image until we found the final item.
In summary, Algirdas said these games were helpful in reducing assumptions, thinking critically about problems, and developing team techniques and bonding.  Having found the activities both challenging and fun, and having enjoyed the interactions with my new-found team, I’d have to agree with those comments.

14.30 – Panel – Continuous Delivery

15.30 – Games including Dysfunctional Scrum, DevOps ball game, TestSphere 

The Periodic Table of Testing, an introduction and history

Firstly, thank you for taking the time to read my blog.  If you have any comments, feedback or questions I'm eager to hear them, so please get in touch.

I'll be using the blog to share my thoughts on testing, feedback on events I attend and to share the things I find most useful. 


But the primary reason for the blog is to document my investigations and journeys through the world of testing using my Periodic Table of Testing.



Periodic Table of Testing, a representation of the elements of testing in the style of the periodic table

Over time I hope to navigate through the table as I ask myself: do I understand what this is, how it works, and how/when to implement it in the projects I work on?  After all, theory and ideas are all well and good, but if you can't then apply them, what good are they?


The table is an ongoing work in progress and I expect it to change over time.  It could grow, have elements removed or even have new sections added.  For example, I'm not sure if interpersonal skills should have their own column or section; there are elements for them in the table, but they are so important perhaps I should highlight them?


The table takes its inspiration from many sources.  I originally created the Periodic Table of Data back in 2011 while working in a Business Intelligence role as a way to see how new data could fit into our existing framework.  It was also a way to understand what we already had. 


The idea was published in the Testing Planet in March 2012 and the article is available on the Ministry of Testing website.



Periodic Table of Data, a representation of the elements of data in the style of the periodic table

While I've been playing around with this idea for some years, there have been a number of recent influences I'd like to highlight for spurring me on to finally publish.  I attended NWEWT, the North West Exploratory Workshop on Testing, in March 2017.  The workshop was on growing testers, and I thought this could in some small way help testers navigate the world of testing, or even be used as a visual heuristic of considerations for projects.  Another contributor was Ash Winter's Wheel of Testing and how he used it as a tool for the testers he managed.


There are so many more influences I'd be here all day, but a quick mention to Chris Pearson and Andy Lawrence at Computershare, for supporting my crazy ideas to do stuff and for beginning my education on all things Agile respectively.


So that feels long enough for an introduction, please leave any comments or thoughts below.  Thank you.