
Developer, Testing, Release and Continuous Integration Automation Miniconf 2015

This miniconf is all about improving the way we produce, collaborate on, test, and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

  • code review tools and techniques (e.g. gerrit)
  • continuous integration tools (e.g. jenkins)
  • CI techniques (e.g. gated trunk, zuul)
  • testing tools and techniques (e.g. subunit, fuzz testing tools)
  • release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.
  • applying CI in your workplace/project


Schedule

  • 10:40 Nick Coghlan - Beaker's Hardware Inventory System
  • 11:10 Steve Kowalik - Testing the cloud in the cloud
  • 11:35 Fraser Tweedale - The Best Test Data is Random Test Data
  • 12:00 Sarah Kowalik & Jesse Reynolds - Developers, sysadmins, and everyone else: Why you should be using Serverspec
  • 13:20 Matthew Treinish - Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System
  • 13:45 Raghavendra Prabhu - Corpus collapsum: Partition tolerance of Galera put to test
  • 14:15 Sven Dowideit - Kickstart new developers using Docker


Beaker's Hardware Inventory System

By Nick Coghlan

Ever wondered what it might take to track down and resolve a kernel bug that only affects one particular variant of one particular processor architecture from one particular vendor? Step one is going to be actually getting hold of a suitable machine, and for that, you'll need an inventory system that provides that level of detail.

Red Hat's Beaker integration and testing system provides such a service. This talk will describe the inventory-gathering component of Beaker, including the transition from using "smolt" to "lshw" for this component, and how the Beaker team is able to use its access to more esoteric hardware to enhance the capabilities of lshw.
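
As a rough illustration of the kind of inventory gathering lshw enables (this is not Beaker's code; the processor query is just an example, and it assumes a reasonably recent lshw), here is a short Python sketch that reads lshw's XML output:

    import subprocess
    import xml.etree.ElementTree as ET

    # lshw can emit its full hardware tree as XML; it usually needs
    # root privileges to see everything.
    xml_out = subprocess.run(
        ["lshw", "-xml"], capture_output=True, text=True, check=True
    ).stdout

    # Each hardware component is a <node> element with a "class"
    # attribute (processor, memory, network, ...). Pull out the
    # processor details an inventory system would need to match "one
    # particular variant of one particular processor architecture
    # from one particular vendor".
    for node in ET.fromstring(xml_out).iter("node"):
        if node.get("class") == "processor":
            print(node.findtext("vendor"), node.findtext("product"))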


Testing the cloud in the cloud

by Steve Kowalik

OpenStack makes heavy use of CI, spinning up instances to run tests and then destroying them. This gets much harder when what you're trying to test on the cloud is whether your cloud can deploy a cloud. In this talk I'll cover what we're currently doing with CI in TripleO (OpenStack on OpenStack, that is, using OpenStack to deploy OpenStack) and what our future plans for CI are.
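
As a minimal sketch of that spin-up/test/destroy cycle (using the openstacksdk client library; the cloud, image, and flavor names and the run_tests helper are all hypothetical, and TripleO's real CI is considerably more involved):

    import openstack

    # Connect using credentials from clouds.yaml; "mycloud" is a placeholder.
    conn = openstack.connect(cloud="mycloud")

    # Boot a throwaway instance, run the tests, and destroy it whatever
    # the outcome, so the next run starts from a clean slate.
    server = conn.create_server(
        "ci-worker", image="test-image", flavor="m1.small", wait=True
    )
    try:
        run_tests(server)  # hypothetical: ssh in and run the test suite
    finally:
        conn.delete_server(server.id, wait=True)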


The Best Test Data is Random Test Data

by Fraser Tweedale

Testing accounts for a large portion of the cost of software development. Tools to automate testing allow for more thorough testing in less time. *Property-based testing* provides ways to define expected properties of functions under test, and mechanisms to automatically check whether those properties hold in a large number of cases - or whether a property can be falsified.

This talk will establish the motivations and explore the mechanisms of property-based testing. Concepts will be demonstrated primarily using Haskell's *QuickCheck* library. We will also review property-based testing solutions for other popular languages.
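
For Python, for instance, the Hypothesis library plays a role similar to QuickCheck. A small sketch of two properties it can check (and shrink counterexamples for):

    from hypothesis import given
    from hypothesis import strategies as st

    # Property: reversing a list twice gives back the original list.
    @given(st.lists(st.integers()))
    def test_reverse_twice_is_identity(xs):
        assert list(reversed(list(reversed(xs)))) == xs

    # Property: sorting is idempotent.
    @given(st.lists(st.integers()))
    def test_sort_is_idempotent(xs):
        assert sorted(sorted(xs)) == sorted(xs)

Run under pytest, Hypothesis generates many random lists per property (100 by default) and, on failure, shrinks the input to a minimal counterexample.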

The talk will conclude with a discussion of the limitations of property-based testing, and alternative approaches.

Developers, sysadmins, and everyone else: Why you should be using Serverspec

by Sarah Kowalik & Jesse Reynolds

"Congratulations! You’ve shipped a package to your users! Your responsibilities as a maintainer are now complete! " – nobody, ever.

Anyone who has shipped software knows that a release is just the beginning of another wave of pain on the beach of software maintainership. Does the package actually install? Does the package contain what’s intended? Does your software run as expected? These are all questions that most projects can’t answer easily at release time without significant manual testing.

Wouldn’t it be great if you could simulate the steps users will go through to get your software installed, and verify you meet their expectations?

Enter Serverspec, a simple test harness for behaviour testing of running systems. Serverspec helps you do outside-in testing of your software across multiple platforms by providing simple shortcuts for common tests like “is this service running with these arguments?”, “does the application boot with this line in the configuration file?”, or “have I opened a back door to attackers by creating a world-writeable log file?”.
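
Serverspec itself is a Ruby DSL built on RSpec; purely to illustrate the flavour of those outside-in checks in a language-neutral way, here is a rough Python equivalent (the service and log path are hypothetical):

    import os
    import stat
    import subprocess

    def service_running(name):
        # "systemctl is-active" exits 0 only when the unit is active
        return subprocess.run(
            ["systemctl", "is-active", "--quiet", name]
        ).returncode == 0

    def world_writable(path):
        # True if the "other" write bit is set on the file's mode
        return bool(os.stat(path).st_mode & stat.S_IWOTH)

    assert service_running("nginx")
    assert not world_writable("/var/log/nginx/error.log")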

In this talk we’ll explore how to write and run automated Serverspec tests for your packages at release time, how to maintain and improve quality across multiple target install platforms, and ways you can save time when testing for regressions, behaviour, and the user install and configuration experience.


Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System

by Matthew Treinish

The OpenStack project's CI system operates at a scale too large to feasibly track all the test results, or to monitor the system's health at any given time, by hand. To automate this, there are several programmatic resources available for tracking the results from the system, including tools like logstash and graphite. Using and building on these tools has been invaluable, especially as the OpenStack project continues to grow. However, the granularity of the available data was still mostly limited to the individual test job, which limited the types of analysis that could be done.

For some time there was a desire to track the performance of individual tests over a longer period of time, both to provide data for changes to testing policy and to watch for performance regressions. However, there was no mechanism available to aggregate all the necessary data, nor a programmatic interface to automate extracting useful trends from it. To fill this gap, subunit2sql was created. Subunit2sql takes the subunit output from test runs, parses it, and stores the results in a SQL database. Having the result data easily available in a database at the granularity of individual tests has enabled the OpenStack project to better track its development and quality over time.
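
The general idea is easy to sketch. The following is a toy per-test result store in Python and SQLite, not subunit2sql's actual data model:

    import sqlite3

    db = sqlite3.connect("results.db")
    db.executescript("""
        CREATE TABLE IF NOT EXISTS runs (
            run_id     INTEGER PRIMARY KEY,
            started_at TEXT
        );
        CREATE TABLE IF NOT EXISTS results (
            run_id   INTEGER REFERENCES runs(run_id),
            test_id  TEXT,    -- fully qualified test name
            status   TEXT,    -- 'success', 'fail', or 'skip'
            duration REAL     -- seconds
        );
    """)

    # With per-test rows in place, trends fall out of plain SQL,
    # e.g. the ten slowest tests averaged across all recorded runs:
    slowest = db.execute("""
        SELECT test_id, AVG(duration) AS avg_seconds
        FROM results
        WHERE status = 'success'
        GROUP BY test_id
        ORDER BY avg_seconds DESC
        LIMIT 10
    """).fetchall()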

This talk will give a basic outline of the subunit2sql architecture and data model, describe how it's deployed and used in the OpenStack CI environment, and provide an overview of the results and advantages that the OpenStack project has seen from using a subunit2sql DB to track test results.