Agile Testing Days Berlin, The Second Day


Key Note Lisa Crispin – Are Agile Testers Different?

Visited and written by: Andreas & Thomas

The answer to the question from the title of this keynote is: Yes! As she already did in part during yesterday's tutorials, Lisa again emphasized how important it is that there are no silos in agile teams. Instead it is all about solving the given task (developing quality software) in a collaborative mode. To do so, (agile) testers are as important as (agile) developers and all other potential members of the team.

Lisa Crispin: “We (testers) don’t break code, it comes to us already broken!” (originally by Cem Kaner)

All in all Lisa’s keynote was very inspiring, as she put a strong focus on agile values and ideas and on how they help achieve more success in software projects and, in the end, also more fun.

Luckily other conference attendees are blogging as well, so we can keep it a bit shorter here – this will be a huge blog post anyway – and link to some posts on Lisa’s keynote.

Open Source Agile Testing by Péter Vajda

Visited and written by: Andreas

This session should have been called “How not to implement Scrum”. Péter Vajda presented his experiences as an agile coach in the Nokia Maemo project, the Linux-based operating system for the mobile devices of the future, like the well-known and valued internet tablet N900.

Two years ago 200 employees were part of the project; by now it has grown to nearly 1000 people and 50 teams (which implies a team size of 20). Since the gap between open source communities and a mobile device manufacturer has to be bridged somehow, it was decided to go agile: introduce Scrum, use XP practices and apply Lean thinking.

Besides some well-known facts (Scrum teams should be co-located), some interesting ones were shared. For example, they realized in Sprint 20 that unit tests alone do not suffice; automated acceptance tests would have been nice but did not exist. What to do? Take a break and create acceptance tests for all functionality from the last 20 sprints, or leave that functionality uncovered? The compromise – a “refactoring backlog item” – was sound in content, but if you have such “system user stories” you should always make clear what their value is. And the value here is not the test itself, but the confidence and flexibility you gain to integrate new features more easily.

The demo, which showed how to test a Qt calculator with Cucumber (the real new toy / gadget could not be disclosed, unfortunately), finally brought some new insight: with Cucumber you can write behaviour tests that not only support the “given / when / then” format, but also a postfixed table with additional test data. I was already exposed to this idea at Agile 2009 in Chicago, though there in the context of FitNesse: one problem with given-when-then… the answer… “Other Examples Include”. I think that this is really a great idea to describe behaviour in the appropriate context and provide additional test data / examples.
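In Cucumber's Gherkin DSL, this "scenario plus postfixed data table" idea is what scenario outlines with an `Examples:` table provide. A hypothetical sketch for the calculator demo (the step wording is invented, not taken from the talk):

```gherkin
Feature: Calculator addition

  Scenario Outline: adding two numbers
    Given I have entered <a> into the calculator
    And I have entered <b> into the calculator
    When I press add
    Then the result should be <sum>

    Examples:
      | a  | b  | sum |
      | 2  | 3  | 5   |
      | 10 | -4 | 6   |
```

The scenario describes the behaviour once; the table supplies the additional examples.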

Agile Testing: A Report from the Frontline by Joke Hettema and Ralph van Roosmalen

Visited and written by: Thomas

Joke starts her presentation with a short introduction to why the company she works for has switched to agile methods for its projects. A common pattern repeats itself here: traditional methods could no longer provide the required visibility as the projects became more complex. To solve this, Scrum was introduced as an agile method and mixed teams were set up with developers, testers and architects in one Scrum team. Up to here, not too much new, I would say.

With respect to testers and testing, some patterns from the tutorials and Lisa’s keynote are repeated: no explicit test teams anymore, shared responsibilities for developing and testing the software. But one special thing here is setting up a “Scrum of Scrums” only for the testers of all the different teams. Of course information sharing is important, but for me this goes in the wrong direction, as it again draws a borderline between testers and developers instead of bringing them together. This borderline gets even bigger through special sprint planning meetings for testers only, where testing tasks are planned independently from the (other) development tasks. After a question from one visitor it also turns out that developers only write unit tests; for all further testing they rely on the testers to implement it. Again this sounds as if there is still quite a borderline between testers and developers inside the teams.

Then Joke describes the problem of “mini waterfalls”. These still happen too often for them because developers are not checking in their work frequently enough, piling up bigger chunks of work that are then committed in one go. On this topic I can only recommend Fabian’s very good post on committing frequently that he wrote some weeks ago.

When it comes to sprint planning there is again a level of indirection I personally do not like: only a “functional specialist” talks with the product owner and then talks to the team, instead of having the team talk directly with the product owner. A similar question came up yesterday in Elisabeth’s tutorial, and her statement was clearly that as many team members as possible should be “allowed” to join the sprint planning meeting. In addition, I personally doubt that it is a good idea to have domain knowledge bundled only in a specific role. But we are slowly leaving the area of agile testing, so back to topic.

Joke continues by showing the documents that are produced and the tools that are used for testing, which – again – does not make me feel too agile: test strategy, risk matrix, test approach document, checklist, Session Tester and bug reporting software. With respect to the “Definition of Done” she points out that it is very often the “task” of the testers to tell the rest of the team that the Definition of Done is violated, while the rest of the team argues against this. It also seems that the trust in the Definition of Done is limited, as there are additional sprints dedicated only to release testing, in which a lot of manual tests are executed. This sounds a bit like classical release testing, which would also fit the fact that there is a dedicated test manager role.

All in all I think the presentation is a good example that – at least in this case – there is still some way to go at the “frontline” to get rid of the existing QA methods and move further in the direction of agile testing.

BDD approaches for Web Development by Thomas Lundström

Visited and written by: Andreas

At first Thomas Lundström laid some groundwork for BDD and set the context for the following live demo. This first part of the presentation took a little too long for my taste; at a conference on agile testing you could expect everybody to have a rough understanding of what behaviour-driven development is. And some parts were explained differently from what I consider correct, for example the “golden triangle”, the fusion of requirements, tests and … done?! To the best of my knowledge, and this is also how Gojko Adzic explains it in his book “Bridging the Communication Gap”, executable specifications unite requirement, test and examples (to explain the requirements). The presented combination of Cucumber and Webrat, Ruby-based tools, was capable of testing a simple guest book web app. Ok, user stories are called features in Cucumber (or rather in Gherkin, the DSL), and acceptance tests are not called acceptance tests but scenarios, but these are just minor issues. Webrat does not support Ajax, so it is really only usable for simpler use cases, but Cucumber also supports Selenium and HtmlUnit.

A very important point was underlined by Lundström, and I don’t want to miss amplifying it here as well. It is important that automated acceptance tests / executable specifications describe the rules that should be tested – including the context and prerequisites, but without the concrete steps / the script for how to get there. Creating the context is what the fixture should do. Lundström gave the following example and differentiated between the imperative style (the script) and the declarative style. By restricting the test to the logic that should be tested, the declarative style is much more robust. An additional advantage, in my opinion, is that it opens the possibility to automatically refactor the fixture (the binding between the test and the code under test) if it is written in the same programming language as the system under test – an impossible thing when the context-creating logic is part of the test script.

Imperative Style

Story: Animal Submission
  As a Zoologist
  I want to add a new animal to the site
  So that I can share my animal knowledge with the community
  Scenario: successful submission
  Given I'm on the animal creation page
  When I fill in Name with 'Alligator'
  And select Phylum as 'Chordata'
  And fill in Animal Class with 'Sauropsida'
  And fill in Order with 'Crocodilia'
  And fill in Family with 'Alligatoridae'
  And fill in Genus with 'Alligator'
  And check Lay Eggs
  And click the Create button
  Then I should see the notice 'Thank you for your animal submission!'
  And the page should include the animal's name, phylum, animal class, order, family, and genus

Declarative Style

Story: Animal Submission
  As a Zoologist
  I want to add a new animal to the site
  So that I can share my animal knowledge with the community
  Scenario: successful submission
  Given I'm on the animal creation page
  When I add a new animal
  Then I should see the page for my newly created animal
  And the notice 'Thank you for your animal submission!'

Source: Imperative vs Declarative Scenarios in User Stories
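To make the difference concrete: the declarative step “When I add a new animal” delegates all context creation to a fixture. A minimal sketch of such a fixture helper in plain Ruby (no Cucumber dependency; all names and default values are illustrative, not from the talk):

```ruby
# The fixture owns the "how": it knows what a valid animal looks like.
# Scenarios only override the attributes they actually care about.
Animal = Struct.new(:name, :phylum, :animal_class, :order, :family, :genus)

def create_animal(overrides = {})
  defaults = { name: 'Alligator', phylum: 'Chordata',
               animal_class: 'Sauropsida', order: 'Crocodilia',
               family: 'Alligatoridae', genus: 'Alligator' }
  attrs = defaults.merge(overrides)
  Animal.new(*attrs.values_at(:name, :phylum, :animal_class,
                              :order, :family, :genus))
end

# A declarative step definition would simply call the fixture:
animal = create_animal(name: 'Crocodile')
puts animal.name    # the scenario then only checks the visible outcome
```

Because the form-filling details live in the fixture, a UI change touches one helper instead of every scenario – and since the helper is ordinary Ruby, it can be refactored with the usual tools.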

Introduction to Robot Framework by Pekka Klärck

Visited and written by: Thomas

Even though Andreas told me a thousand times before the session that I already know everything about Robot Framework and there is no need for me to visit this track, I could not resist seeing Pekka in action and getting some first-hand information on the latest features in Robot. Pekka did not disappoint me.

First of all a few basic facts: keyword-driven, based on Python, supports Java using Jython, and supports other languages via an XML-RPC remote interface. Robot is open source under the Apache 2.0 licence. The tool evolved out of a thesis work that was started at Nokia Networks and is still supported today by Nokia Siemens Networks. But I think this is enough repetition of the basic stuff, as the slides are available here anyway :-).

The presentation continues with the explanation of keywords. Of course Andreas is right that there is not too much new for me here, as I already dream in keywords. It goes on with a very short comparison with FitNesse, and Pekka points out that in Robot everything is a keyword, which makes things very easy to understand compared to FitNesse, where there are different kinds of fixtures one has to understand. The example that is shown afterwards looks quite familiar, as Elisabeth used it as well in her tutorial yesterday. Now Pekka emphasizes a great strength of the Robot Framework: combining lower-level keywords into more powerful higher-level keywords. A feature that I personally see as a very big strength of the tool, too. By the way, it is quite fascinating following the presentation on some Unix machine where all the menus and related items are completely in Finnish.

Pekka Klärck: “I have some confidence that really every tester – even without the slightest programming skills – can combine existing keywords to build new testcases.”
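Such keyword composition might look roughly like this in Robot's table syntax (a hypothetical sketch; the keyword and locator names are invented, only `Input Text` and `Click Button` resemble keywords from Robot's Selenium library):

```robotframework
*** Keywords ***
Log In As
    [Arguments]    ${user}    ${password}
    Open Login Page
    Input Text      username_field    ${user}
    Input Text      password_field    ${password}
    Click Button    Log In

*** Test Cases ***
Valid Login
    Log In As    alice    secret
    Page Should Contain    Welcome, alice
```

The test case reads like a specification; the low-level clicking and typing is hidden one level down, which is exactly what lets non-programmers compose new tests from existing keywords.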

Pekka continues by starting Robot from a shell. It is pointed out that this is especially important for integrating Robot with CI environments, which is really an easy task with Robot. Then the reporting and statistics capabilities of the tool are shown, especially the tagging. This builds a bridge to yesterday’s tutorial, where the question came up of how to prevent the build from failing due to already implemented tests for which no functionality exists yet. Tagging is one half of the answer, together with the possibility to define that tests marked with certain tags are not critical. Thus, when those tests fail, the build is not marked as broken – which is not only a desired effect in ATDD, but rather a mandatory feature any agile testing tool has to support.
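As a rough command-line sketch (the runner was called `pybot` in Robot versions of that era; tag and directory names are invented, so check the options of your version):

```shell
# Run everything under tests/, but mark tests tagged 'not-ready'
# as non-critical: their failures appear in the report and log
# without turning the overall run – and thus the CI build – red.
pybot --noncritical not-ready --outputdir results tests/
```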

The graphical editor RIDE has also made great progress. It is now possible to complete keywords using Ctrl-Space: RIDE then shows all keywords starting with the given characters, together with their documentation. This looks very promising, and I personally think that the “final” version of RIDE will be another big boost to the usability of the Robot Framework. The same is true for the new possibility of “executable specifications”, which will still be worth a blog post of its own.

After the presentation and during lunch there was some time for a quick chat with Pekka, which I was very happy about, as we worked at the same company at the time Robot was developed and Pekka visited us a lot to promote the tool. It also showed me again that Robot is being developed very actively, and I am looking forward to the things happening with RIDE and the ATDD feature (and everything else).

Key Note Elisabeth Hendrickson – Agile Testing, Uncertainty, Risks and Why It All Works

Visited and written by: Andreas & Thomas

First of all Elisabeth really has a very good and active presentation style. Her presentation was mainly about what she calls the “7 Key Principles of Agile Testing”. Those are:

  • ATDD
  • TDD
  • Exploratory Testing
  • Automated System Tests
  • Automated Unit Tests
  • Collective Test Ownership
  • Continuous Integration

Anthony Gardiner has already written more on those in his blog post on this keynote.

Elisabeth Hendrickson: “The only way to tell if you are agile is if you are successfully delivering software.”
Elisabeth Hendrickson: “Exploratory Testing is a discipline. It is not just blindly banging on the keyboard.”

I am not sure whether or not it is some kind of sign that I initially wrote “keyword” instead of “keyboard” in the quote above ;-). But back to the keynote: it continues with CI and an explanation that tests are artefacts that belong to the project. I fully support this, and I find it hard to understand that there are still projects that do not keep their tests and test documents under version control the same way the code is.

Agile Quality Management – Axiom or Oxymoron? by David Evans

Visited and written by: Andreas

David Evans presented some concerns of traditional testers who are moving into an agile environment and put them into a different light. For that he used an interesting structure: every concern was formulated from the pessimistic perspective (as an oxymoron) and from the optimistic perspective (as an axiom). Which of the two is the right one I leave as homework to the reader.

  1. Developers Test Their Own Code
    • Oxymoron: Marking your own homework
    • Axiom: Unit tests are the first of many quality strategies
  2. Developers and Testers are co-dependent
    • Oxymoron: Independence is fundamental to quality governance
    • Axiom: the whole team shares the goal of creating quality software
  3. Developers know the acceptance tests (they are defined and agreed up-front)
    • Oxymoron: they will only write code to make the tests pass
    • Axiom: they won’t write code that makes the tests fail
  4. No Signed-Off Requirements (.. seem to change with the weather)
    • Oxymoron: I can’t possibly certify that the requirements have been met
    • Axiom: Each requirement is defined by its acceptance tests: these are negotiated with and confirmed by the customer
  5. No Test Plan (IEEE 829 Test Plans are in short supply on agile projects)
    • Oxymoron: Failing to plan is planning to fail! How will we know what to test?
    • Axiom: The Test Plan is implied in the product backlog: everything we build, we test (don’t write tests whose failures you don’t care about!)
  6. No Test Manager (or Test Management Tool) (Test Manager is not a role that exists in Agile literature)
    • Oxymoron: so how can we possibly manage testing?
    • Axiom: Testers == team; tests == specifications; … they don’t need separate management

After that he offered some recommendations for quality principles, but no big surprises were hidden there (fast feedback, shared “done” understanding, etc.). The right testing attitude was expressed, for example, in these quotes:

“Defects are evidence of missing tests”

“Any test worth writing is worth running all the time”

That means: when a test is “red” and the customer does not care, delete the test. It was not worth writing in the first place, since apparently it was testing behaviour that was neither needed nor wanted.

Quality and Short Release Cycles by Lior Friedman

Visited and written by: Thomas

Lior starts his presentation with the statement: it is not the task of a developer to just create code, but to create functional, quality software. I can only say “yes”, but why does this have to be emphasized so often? It seems crystal clear to me that developers want to create value for the customer and are not programming for the sake of programming (that can be done in one’s spare time ;-)). For me this is all about a professional approach to software development.

It continues with a discussion about time versus quality and the fact that in traditional projects there must always be compromises if the amount of resources is fixed. With agile methods, time should be saved by producing better quality at the same time.

I think for the rest I can keep it short, as there was not really much new. The presentation discusses the advantages of short release cycles: better quality due to CI, fewer bugs, better efficiency, more trust from the customer side and less risk. To sum up: the reasons why we are doing agile software development anyway.

“Testify” – One-button Test-Driven Development tooling & setup by Mike Scott

Visited and written by: Andreas

In the beginning there was some overlap with the earlier presentation by his SQS colleague David Evans. But the live demonstration of “Testify” started quickly: a tool to create a fully configured TDD/ATDD (acceptance TDD) project setup with all bells and whistles, either in .NET or Java. This SQS-internal tool is open source from today on. Thanks, Scott!

  • .Net integrated tools: nant, nunit, fit, fitnesse, richnesse, selenium, ncover, nsis, fx-cop, cruisecontrol
  • Java: here the tool belt looked similar, but I was too slow to note it down

Essentially, Testify is a carefully crafted project in which some files are turned into Velocity templates so that the project name (the only parameter the Testify wizard accepts) can be substituted into certain files and filenames. Personally I would have chosen Maven archetypes for that, because technically an archetype does exactly this (and a little bit more) – but ok, then you don’t have a UI for it. If you don’t have any tool preferences or other constraints, you should invest two hours in the generated project and see if it fits your needs.

Test Driven TDD – TDD quality improvement based on test design technique and other testing theory by Wonil Kwon

Visited and written by: Thomas

Sorry, but a summary does not really make much sense here, as the whole presentation was very low-level and there was nothing really new. Furthermore, the examples did not make much sense either. Sorry, Wonil.

Key Note Tom Gilb – Agile Inspections

Visited by: Andreas & Thomas, Written by: Thomas

Yiihaaa, it is 6pm and Tom starts his keynote. So far I never had the opportunity to see Tom live, and half an hour later it seems as if this is not really what some visitors had been expecting, as agile appears to be no topic at all in his presentation. Quite the opposite: at the end of his talk he fires big cannons at agile, saying “yeah, it’s nice you guys do your standup meetings and have fun and everything, but no one needs those in real projects. There you need measurable results!”

Tom Gilb: “It is your fault if you accept garbage in!” (about testers accepting bad requirement documents)

And that is the key message of Tom’s talk: how can I determine the number of potential defects in requirement specifications and minimize them? Because for every two defects in the requirement specification there will be one bug later on in the software. At this point Andreas was sitting next to me, already a little agonised because he was missing the agile background, and at the same time a bit dazzled by the slides. But I have to say: I really liked it, and therefore I want to write a bit more about it :).

Tom Gilb: “Garbage in guarantees some garbage out no matter how clever you are.”

Ok, so requirements are the topic. No “executable specifications”, no fancy automated tests, but a way to find potential defects in requirement documents. I think it is quite obvious that Tom wanted to be – and succeeded in being – very provocative with his statements, especially at a conference focusing on “Agile Testing”. But I am a big fan of looking at things from different angles, and I think Tom’s approach can very well be seen as something to be used in addition; it does not really contradict the agile testing practices discussed so far, which come into play at a later stage in the project anyway.

Tom Gilb: “How lucky do you feel today, punk?” (About the 50:50 chance that a bug in the requirements results in a bug in the final software.)

The discussed method is analysing specification documents with respect to the following three rules:

  • Clear enough to test against.
  • Unambiguous with respect to the intended readers.
  • Contains no design.

With this approach you can now analyse some pages of a given requirement specification and get “the number of defects per page” as a measurable quantity. For this it is enough to have a group of stakeholders analyse the document for about 10 minutes. I skip some of the details here, but in the end you get a number indicating the defect count, which is then scaled by an experience factor of Tom’s; one page of requirement specifications contains roughly 300 words on average. (Tom had some even more extreme examples with up to 600 defects on a page containing only 300 words.) This seems to be a normal value when looking at specifications that have not been written with any special focus on the rules shown beforehand. With 60-100 defects on one page this would already mean 600-1000 defects in a specification of only 10 pages. With 50% of them ending up as real bugs in your system that must be found and fixed, this shows how dangerous this is for projects … and how expensive.
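The arithmetic behind that warning is simple enough to sketch (a plain Ruby back-of-the-envelope calculation; the figures are Tom's rules of thumb from the talk, not measurements of any concrete project):

```ruby
# Tom's rule-of-thumb figures for unchecked specifications.
defects_per_page = 60..100   # typical defect density per ~300-word page
pages            = 10
bug_probability  = 0.5       # roughly every second spec defect becomes a bug

low  = defects_per_page.min * pages   # lower bound of spec defects
high = defects_per_page.max * pages   # upper bound of spec defects

puts "Spec defects:  #{low}-#{high}"
puts "Expected bugs: #{(low * bug_probability).round}-#{(high * bug_probability).round}"
```

Even the optimistic end of the range predicts hundreds of bugs traceable to a ten-page specification, which is the whole point of inspecting the document before any code exists.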

I really think this is a very good approach to checking requirement documents (also together with a customer), and it is worth trying out ourselves. Of course Tom also has a solution for this problem: describing requirements in a more formalized way. For more information just use Google to find Tom’s book. And of course there will be other ways of solving this once the problem is identified. In the end this means that we will get excellent input for our “executable specifications”, and then I think it is also ok for Tom that we have some fun using our agile methods and everything … as deep down he can surely see some value in this as well ;-).

Chill out/Dinner Buffet – Oktoberfest

Visited and written by: Andreas & Thomas

The evening event had an Oktoberfest theme. Well, we are both not the biggest fans of this, and the loud music made it very hard to continue any meaningful discussion. But anyway, it was fun and of course a nice idea.

Later in the evening we move to the hotel bar to still find some people for discussions and to finish this “monster baby” of a blog entry. This is of course worth a quick agile retrospective, and we decide to use a different approach tomorrow: publishing smaller blog posts more frequently. This minimizes the risk of producing something that in the end no one needs because it does not get ready in time for the end of the conference :).


