Books community Conferences

5 books / 5 slides / 5 minutes

At the last Brisbane Software Testers meetup I volunteered to do a 5 minute lightning talk. Since I’ve read a lot of books lately, I thought I would share what I had read, along with some key snippets, and set myself a challenge of talking about 5 books using 5 slides in 5 minutes.

Unfortunately some of the other volunteers for lightning talks withdrew, so I had a longer window and ended up talking for much longer (including some bonus slides about Think Like A Freak).

I am keen to try this again using 5 books I have since read to see if it’s actually possible to communicate this amount of information. My slides are below and are also available in PDF format (accessible).

5 books - 5 slides - 5 minutes (slides 1-5)


community Conferences Software Testing

Free yourself from your filters

One of the most interesting articles I have read recently was ‘It’s time to engineer some filter failure’ by Jon Udell:

“The problem isn’t information overload, Clay Shirky famously said, it’s filter failure. Lately, though, I’m more worried about filter success. Increasingly my filters are being defined for me by systems that watch my behavior and suggest More Like This. More things to read, people to follow, songs to hear. These filters do a great job of hiding things that are dissimilar and surprising. But that’s the very definition of information! Formally it’s the one thing that’s not like the others, the one that surprises you.”

Our sophisticated community based filters have created echo chambers around the software testing profession.

“An echo chamber is a situation in which information, ideas, or beliefs are amplified or reinforced by transmission and repetition inside an “enclosed” system, often drowning out different or competing views.” ~ Wikipedia

I’ve seen a few echo chambers evolve:

  • The context driven testing echo chamber where the thoughts of a couple of the leaders are amplified and reinforced by the followers (eg. checking isn’t testing)
  • The broader software testing echo chamber where testers define themselves as testers and are only interested in hearing things from other testers (eg. developers are evil and can’t test)
  • The agile echo chamber where anything agile is good and anything waterfall is bad (eg. if you’re not doing continuous delivery you’re not agile)

So how do we break free of these echo chambers we’ve built using our sophisticated filters? We break those filters!

Jon has some great suggestions in his article (eg. dump all your regular news sources and view the world through a different lens for a week) and I have some specific to software testing:

  • attend a user group or meetup that isn’t about software testing – maybe a programming user group or one for business analysts: I attend programming user groups here in Brisbane;
  • learn to program, or manage a project, or write CSS;
  • attend a conference that isn’t about context driven testing: I’m attending two conferences this year, neither of which is a context driven testing conference (ANZTB Sydney and JSConf Melbourne);
  • follow people on Twitter who you don’t agree with;
  • read blogs by people who you don’t agree with or who take different approaches;
  • don’t immediately agree with (or retweet, or ‘like’) something a ‘leader’ says until you’ve validated that it actually makes sense and you agree with it;
  • don’t be afraid to change your mind about something and publicize that you’ve changed your mind; and
  • avoid ‘daily me’ apps like the plague.

You’ll soon be able to break yourself free from your filters and start thinking for yourself. Good luck.


Software testing conferences 2014

It’s about that time of year where I start to think about which testing conference I would like to attend next year. I use my professional development budget to attend one conference a year, which can be almost anywhere in the world. I will propose a talk at my chosen conference; having it accepted would be a bonus. Here’s a short list of conferences I am considering for next year:


  • CAST 2014: August 11-13, NYC, USA – 1 day tutorials/2 days conference – one stream – bonus points for being in NYC – submissions close January 10, 2014
  • CITCON Oceania 2014: February 21-22, Auckland, NZ – 2 days unconference – multiple groups – been to one before which was good, but I personally don’t like the unconference format with no set speakers
  • Let’s Test Oz 2014: September 15-17, Blue Mountains, Australia – 3 days conference – multiple streams – concerned it will be dominated by certain opinionated keynote speakers and not be practical enough for my liking; location is an issue for me traveling from Queensland – submissions close January 15, 2014
  • Magma Conf 2014: June 4-6, Manzanillo, Colima, Mexico – 3 days conference – not exactly a testing conference but a web development conference with talks about testing – submissions close December 30, 2013
  • EuroSTAR 2014: November 24-27, Dublin, Ireland – 4 days conference/workshops – would be cool to visit Ireland – submissions close February 14, 2014
  • QCon Conferences 2014: London, NYC, San Francisco – various dates – more focused on software development/agile but has testing content

Still To Be Announced

  • Google Test Automation Conference (GTAC) 2014: Date TBC, Location TBC – usually 2 days – attendance by application/invitation – I went to this last year in NYC and it was very smooth, would go again
  • Selenium Conference 2014: Date TBC, Location TBC – usually 2 days conference + 1 day workshop – I attended in 2012 in SF and it was quite good, but obviously just about WebDriver and not testing generally. Not sure if it will be in Europe in 2014 as it was in 2012
  • STP Conference 2014: Date TBC (usually October), Location TBC – 4 days – submissions close April 4, 2014

Are there any software testing conferences that you thoroughly recommend for next year?


I will be attending the ANZTB Test 2013 Conference in Canberra

I will be attending the ANZTB Test 2013: Advancing Testing Expertise Conference in Canberra, Australia on Thursday 6 June.

I am looking forward to hearing about different testing topics, including WCAG accessibility testing, as well as meeting fellow Australian testers.

Tickets are cheap at $300 and full details are available online.



GTAC 2013 Day Two Doggy Bag

The main theme for today’s talks was Android UI automation with various approaches demonstrated.

Jonathan Lipps from Sauce Labs

Mark Trostler from Google started with a technical talk on JavaScript testability. He emphasized using interfaces over implementations which means you can change the implementation whilst still testing the interface. He concluded by emphasizing writing tests first naturally results in testable code.

Thomas Knych, Stefan Ramsauer and Valera Zakharov from Google gave a highly entertaining presentation about Android testing at scale. This was one of my favorite talks of the conference. They highlighted that insistence on automated testing using real devices is inefficient and problematic, and that you should first run a majority of tests on emulators which finds a majority of the bugs. This is something I have been saying for a long time and it was refreshing to hear it from a Google Android team. Ways to speed up Android emulators include using snapshots for fast restores, as well as using x86 accelerated AVDs. Interestingly, the Google Android team ran 82 million Android automated tests using emulators in March alone (there are approx 2.5 million seconds in March) with only 0.15% of tests being categorized as flaky. This is partly due to using a Google only automated testing tool for Android called Espresso. Another key takeaway was if you are using physical devices then don’t glue them to a wall or whiteboard. The devices get hot, melt the glue and get damaged as they hit the floor.

Guang Zhu (朱光) and Adam Momtaz, also from Google, talked about some historical approaches to Android automation (instrumentation, image recognition and hierarchy viewer) and how to use features in newer Android API versions (16+) to automate tests reliably.

Jonathan Lipps from Sauce Labs demonstrated the very impressive tool Appium, which enables iOS and Android automation using WebDriver bindings, allowing you to use your language of choice with the promise of writing once and running across the two platforms. This isn’t exactly true, as the selectors will be different, but these can be defined in a module so your test code stays readable. Jonathan explained the philosophy behind the tool and even gave a quick demo running against the new FirefoxOS to demonstrate its flexibility. One limitation mentioned is that you can only run one iOS emulator per physical Apple Mac, which limits continuous integration scalability. Overall it is a very impressive, polished tool.
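The selector-module idea Jonathan described can be sketched as follows. This is a minimal illustration in Ruby; the module, element names and locator values are hypothetical, not part of Appium’s API:

```ruby
# Hypothetical selector module: keeps platform-specific locators out of the
# test code, so the same test reads identically for iOS and Android.
module LoginSelectors
  SELECTORS = {
    ios:     { username: 'textField_username', password: 'textField_password' },
    android: { username: 'edit_username',      password: 'edit_password' }
  }.freeze

  def self.for(platform, element)
    SELECTORS.fetch(platform).fetch(element)
  end
end

# A test would then look something like (driver being an Appium session):
#   driver.find_element(:name, LoginSelectors.for(:ios, :username))
```

The test body stays platform-agnostic; only the selector module knows which platform it is serving.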

Eduardo Bravo from the Google+ team gave an interesting lightning talk about hands-on experience testing Google+ apps across Android and iOS. They use KIF for iOS testing. Eduardo was quote-worthy, with such gems as “flaky tests are worse than no tests” and “don’t give devs a reason not to write tests”. The hermetic theme recurred, with an ongoing endeavor to reduce flakiness by using hermetic environments with known canned responses to make tests deterministic. A very enjoyable talk.
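The canned-response idea reduces to something very simple: every request the app makes during a test gets a known, fixed answer, so the outcome depends only on the code under test. A toy sketch (the class and data are mine, not Google’s tooling):

```ruby
# A hermetic stub backend: every request gets a known canned response,
# so repeated test runs always see identical data.
class CannedBackend
  def initialize(responses)
    @responses = responses
  end

  def get(path)
    @responses.fetch(path) { raise "no canned response for #{path}" }
  end
end

backend = CannedBackend.new('/profile' => { name: 'Eduardo', circles: 3 })
backend.get('/profile') # same answer, run after run
```

Failing loudly on an unexpected request (rather than silently hitting a live backend) is what keeps the environment hermetic.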

Valera Zakharov from the Google Android dev team discussed an internal tool Espresso which makes Android tests much more efficient and reliable, and with less boilerplate code. My only complaint: don’t demo an awesome tool that isn’t open source and available for others to use.

Michael Klepikov from Google talked about using the upcoming ChromeDriver 2 server to access performance metrics from the Chrome Developer Tools, and demonstrated some fancy-looking results generated with it. I don’t believe you need ChromeDriver 2 to do this, though; the W3C Navigation Timing spec provides performance metrics right now.
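The Navigation Timing metrics are exposed in the browser as `window.performance.timing` (attributes like `navigationStart` and `loadEventEnd` are epoch milliseconds), so once fetched – say via WebDriver’s `execute_script` – the page load time is simple arithmetic. A sketch, with an illustrative timing hash standing in for real browser data:

```ruby
# timing is the browser's window.performance.timing hash, e.g. fetched with:
#   timing = driver.execute_script('return window.performance.timing')
def page_load_ms(timing)
  timing['loadEventEnd'] - timing['navigationStart']
end

timing = { 'navigationStart' => 1_366_000_000_000, 'loadEventEnd' => 1_366_000_001_850 }
page_load_ms(timing) # => 1850
```

Other spec attributes (`domContentLoadedEventEnd`, `responseStart`, etc.) can be differenced the same way to break the load down into phases.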

Yvette Nameth and Brendan Dhein from the Google Maps team discussed the challenge of testing large Google Maps datasets, demonstrating a risk-based approach: eg. ensuring the Eiffel Tower is accurate is important, but the accuracy of your Gran’s farm is not.

Celal Ziftci and Vivek Ramavajjala from the University of California, San Diego presented findings from their work at Google on automatically finding culprits in failing builds. This was a highly interesting talk about creating a tool that analyzes the multiple changesets in a build and works out which is most suspicious using a couple of heuristics: the number of files changed and their distance from the repository root. The tool originally took 6 hours to perform an analysis, but they reduced this to 2-3 minutes using extensive caching. The tool they developed allows extensible heuristics, enabling additional intelligence such as keyword analysis.
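A toy version of such a culprit finder – my own sketch of the two heuristics mentioned, not their tool, with arbitrary weights – could score each changeset and pick the most suspicious:

```ruby
# Rank changesets in a broken build by suspiciousness.
# Heuristics (per the talk): more files changed, and paths closer to the
# repository root, are more suspicious. The weights here are made up.
def suspiciousness(changeset)
  shallowest = changeset[:files].map { |f| f.count('/') }.min
  changeset[:files].size + 10.0 / (shallowest + 1)
end

def most_suspicious(changesets)
  changesets.max_by { |cs| suspiciousness(cs) }
end

changesets = [
  { id: 'a1', files: ['core/build.xml', 'core/deps.txt'] },        # shallow, 2 files
  { id: 'b2', files: ['app/ui/widgets/button.java'] }              # deep, 1 file
]
most_suspicious(changesets)[:id] # => "a1"
```

An extensible design would accept a list of heuristic lambdas and sum their scores, which is roughly how keyword analysis could be bolted on later.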

Katerina Goseva-Popstojanova talked about academic analysis of software product line quality. She highlighted that open source software projects are the Promised Land for academia in that the code is fully accessible and can be used for academic analysis and research.

Claudio Criscione from Google discussed Cross Site Scripting (XSS) vulnerabilities and some automated solutions to checking for these.

During the afternoon I went for a tour of the Google New York City office here in Chelsea. All I can say is wow. The view from the 11th floor roof top balcony was very nice too (see pics below).

Google NYC Balcony

Google NYC View

A very enjoyable and smooth conference and well done to all involved organizing it.


GTAC 2013 Day One Doggy Bag

I am currently in New York City for the Google Test Automation Conference (GTAC) at the Google NYC office in Chelsea (the second largest Google office worldwide).

Google NYC Office

Here’s my key take-home notes from today’s session:

  • Ari Shamash from Google talked about the persistent issue of non-deterministic (flaky) automated tests and how Google use hermetic environments to highlight these tests. This involves creating 5-20 instances of an application and running tests repeatedly to identify inconsistent results.
  • James Waldrop from Twitter discussed their ongoing effort to eliminate the fail whale through performance testing. He discussed production testing techniques: canaries (a small subset of users is given new functionality), dark traffic (use the existing app but send some traffic to the new version and throw away the response), and tap compare (comparing dark traffic responses to actual responses). He then talked about his homegrown performance tool Iago (commonly called Lago because of how the capital I renders in sans-serif fonts).
  • Malini Das and David Burns from Mozilla discussed automated testing of the FirefoxOS mobile operating system and how it uses WebDriver extensively to test the inner context (content) and outer context (chrome) of FirefoxOS. They have a neat Panda Board (headless devices) device pool, which can cause non-deterministic test failures due to hardware failure. One key point was how important volume/soak testing is, as people don’t turn off their phones – they expect them to run without rebooting them or turning them off.
  • Igor Dorovskikh and Kaustubh Gawande from Expedia discussed Expedia’s approach to test driven continuous delivery. Interestingly, they use Ruby for their automated integration and acceptance tests even though the programmers write their web application in Java. Having a green build light is critical to them, which means a failed build rolls back automatically after 10 minutes, giving someone 10 minutes to check in a fix. To enable this, they have created a build coach role which is shared amongst the team; even project managers and directors can take on this role to keep the build green. They also stated that running mobile web app tests on real devices and emulators (using WebDriver) has been beneficial, as well as standard browser user agent emulation to get around issues with multiple windows for features like Facebook authentication.
  • David Röthlisberger from YouView demonstrated automated set-top box testing, which uses a video capture comparison tool that compares captured frames against expected images – similar to Sikuli. These images are stored in a library so they can be updated should the application change in look and feel.
  • Ken Kania from Google discussed ChromeDriver 2.0 and its advanced support for mobile Chrome browsers.
  • Vojta Jina from Google demonstrated the Karma JS test runner (formerly, and contentiously, known as Testacular) and how it runs JavaScript tests in real browsers as you write code. Neat stuff.
  • Simon Stewart from Facebook talked about Android application testing at Facebook. Originally Facebook used Web Views in Android & iOS which enabled frequent deployment but resulted in a terrible user experience. They have since started developing native applications for each feature. Interestingly, every feature team has responsibility for all platforms: web, mobile web, Android and iOS. This enables feature parity across platforms. Facebook use their own build tool BUCK which enables faster builds. Simon also pointed out that engineers are entirely responsible for testing at Facebook: they have no test team, no QA department or testers employed. Some engineers are passionate about testing, like some others are passionate about Databases. Dogfooding is very common amongst engineers which results in edge cases being discovered before being released to Production. A highly entertaining talk.
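The flakiness detection Ari described – run the same test many times in a hermetic environment and look for inconsistent results – reduces to a small loop. A sketch; `flaky?` is my name for it, not Google’s:

```ruby
# Run a test block N times; a deterministic test returns the same result
# every run, while a flaky one produces a mix of results.
def flaky?(runs: 20)
  results = Array.new(runs) { yield }
  results.uniq.size > 1
end

counter = 0
flaky?(runs: 20) { true }                  # => false (same result every run)
flaky?(runs: 20) { (counter += 1).even? }  # => true  (result alternates)
```

The hermetic environment is what makes this loop meaningful: if the backends are canned, any remaining inconsistency lives in the test or the code under test.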

Google really know how to run a conference. It’s hands-down the smoothest one I’ve attended, from the sign-in process to the schedule being adhered to. They even have stenographers and sign language interpreters.

Google Stage

Oh, and NYC is great. I went to the top of the Empire State Building yesterday: the view to lower Manhattan was amazing.

Lower Manhattan from the ESB

Conferences Watir

On why I left the Watir project

“You will find that it is necessary to let things go; simply for the reason that they are heavy. So let them go, let go of them. I tie no weights to my ankles.”

~ C. Joybell C.

A number of people have asked me about why I left the Watir Project last year, and up until now I haven’t been comfortable explaining why. But that was then and this is now.

There were two reasons why I left the Watir Project. The first is a particular member of the Watir team who likes to call himself the ‘Project Director’. I co-organized a conference in Austin with this person last year, for which I organized a Minesweeper Contest that was advertised as part of the conference. I wrote a presentation on my robot, which I developed with a colleague here in Brisbane, and I even had some entries from other attendees. I rehearsed the presentation here in Brisbane, and my colleagues and I were excited for me to be presenting it in Austin.

Whilst I made it clear numerous times that I wanted to present this, the co-organizer provided me no opportunity to do so. He ‘directed’ the schedule, and when it came to the end of the conference and there wasn’t any time left, he said it was my fault for not proposing it as an ‘open space’ topic, even though it was a long-advertised component of the conference, and he gave me no opportunity whatsoever to present it.

I was so embarrassed I still haven’t told anybody here in Australia that I didn’t actually present my Minesweeper Robot in Austin. When people asked me how it went, I had to lie and tell them it went well because I was so embarrassed. All because of one person controlling the agenda.

I hate being on bad terms with someone; it’s just not who I am. So I recently made an effort to contact this person to discuss the situation and see whether a year has made him willing to talk about it and how we can move forward. He rudely dismissed me and didn’t want to talk to me, so I take it he isn’t. That’s why I am finally comfortable writing this post.

Letting things go

“Some people believe holding on and hanging in there are signs of great strength. However, there are times when it takes much more strength to know when to let go and then do it.”

~ Ann Landers

The second reason I left Watir was that I believe things have a time and place, and when that time and place is up, it’s time to let go. Like your favorite pair of jeans that you wear until they are faded almost to white and have holes in the crotch, it’s time to let them go.

The same applies to open source projects. You can’t keep contributing to an open source project forever. New, more enthusiastic people come along, and as hard as it is, you need to let go and let them take over the reins. You must. That’s how you avoid having an open source ‘Project Director’ who hasn’t sent a project-related email or written a blog post or a line of code for almost a year.

I’ve let the Watir project go, I’ve let this person go, and I am a much happier person for it.

As a bonus, I presented my Minesweeper presentation locally here in Brisbane and it was very well received. Austin missed out.

Automated Testing Conferences Test Automation

Going to GTAC in NYC

I’m excited to be able to attend my first ever GTAC in NYC in April.