== Developer, Testing, Release and Continuous Integration Automation Miniconf 2015 ==
This miniconf is all about improving the way we produce, collaborate on, test and release software.
  
We want to cover tools and techniques to improve the way we work together to produce higher quality software:
  
* code review tools and techniques (e.g. gerrit)
* continuous integration tools (e.g. jenkins)
* CI techniques (e.g. gated trunk, zuul)
* testing tools and techniques (e.g. subunit, fuzz testing tools)
* release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.
* applying CI in your workplace/project
== Schedule ==
* 10:40 Nick Coghlan - [[#Beaker's Hardware Inventory System|Beaker's Hardware Inventory System]]
* 11:10 Steve Kowalik - [[#Testing the cloud in the cloud|Testing the cloud in the cloud]]
* 11:35 Fraser Tweedale - [[#The Best Test Data is Random Test Data|The Best Test Data is Random Test Data]]
* 12:00 Sarah Kowalik & Jesse Reynolds - [[#Developers, sysadmins, and everyone else: Why you should be using Serverspec|Developers, sysadmins, and everyone else: Why you should be using Serverspec]]
* 13:20 Matthew Treinish - [[#Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System|Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System]]
* 13:45 Raghavendra Prabhu - [[#Corpus collapsum: Partition tolerance of Galera put to test|Corpus collapsum: Partition tolerance of Galera put to test]]
* 14:15 Sven Dowideit - [[#Kickstart new developers using Docker|Kickstart new developers using Docker]]
* 14:35 Joe Gordon - [[#Large Scale Identification of Race Conditions (In OpenStack CI)|Large Scale Identification of Race Conditions (In OpenStack CI)]]
* 15:40 Anita Kuno - [[#Gerrit & Gertty: A Daily Habit|Gerrit & Gertty: A Daily Habit]]
* 16:10 Dr. Jason Cohen - [[#Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques.|Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques.]]
* 16:35 Discussion / Q&A
=== Beaker's Hardware Inventory System ===
by Nick Coghlan

Ever wondered what it might take to track down and resolve a kernel bug that only affects one particular variant of one particular processor architecture from one particular vendor? Step one is going to be actually getting hold of a suitable machine, and for that you'll need an inventory system that provides that level of detail.

Red Hat's Beaker integration and testing system provides such a service. This talk will describe the inventory-gathering component of Beaker, including the transition from ''smolt'' to ''lshw'' for this component, and how the Beaker team is able to use its access to more esoteric hardware to enhance the capabilities of lshw.
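
As a rough illustration only (not Beaker's actual code), here is a minimal Python sketch of the kind of inventory gathering that <code>lshw</code> enables, assuming <code>lshw -json</code> is available on the machine:

<pre>
import json
import subprocess

def gather_inventory():
    # lshw needs root to report full hardware detail; -json emits a tree.
    raw = subprocess.run(["lshw", "-json"], capture_output=True,
                         text=True, check=True).stdout
    data = json.loads(raw)
    # Some lshw versions emit a list of top-level nodes, others a single dict.
    return data if isinstance(data, list) else [data]

def walk(node, depth=0):
    # Print each hardware node's class and product string, preserving nesting.
    print("  " * depth + f"{node.get('class', '?')}: {node.get('product', '')}")
    for child in node.get("children", []):
        walk(child, depth + 1)

if __name__ == "__main__":
    for top in gather_inventory():
        walk(top)
</pre>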
  
=== Testing the cloud in the cloud ===
by Steve Kowalik

OpenStack makes heavy use of CI, spinning up instances to run tests, and then destroying them. This gets much harder when what you're trying to test is whether your cloud can deploy a cloud. In this talk I'll cover what we're currently doing with CI in TripleO (OpenStack on OpenStack, that is, using OpenStack to deploy OpenStack) and what our future plans for CI are.
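
As a hedged sketch of the underlying pattern only (not TripleO's actual tooling), the spin-up/test/destroy cycle looks roughly like this with the openstacksdk library; the cloud, image, flavor and network names are placeholders:

<pre>
import openstack

# Credentials come from a clouds.yaml entry; "ci-cloud" is a placeholder.
conn = openstack.connect(cloud="ci-cloud")

server = conn.compute.create_server(
    name="ci-worker",
    image_id=conn.compute.find_image("test-image").id,
    flavor_id=conn.compute.find_flavor("m1.small").id,
    networks=[{"uuid": conn.network.find_network("private").id}],
)
server = conn.compute.wait_for_server(server)  # block until ACTIVE
try:
    pass  # ...ssh in and run the test suite here...
finally:
    # Always reclaim the instance, pass or fail.
    conn.compute.delete_server(server)
</pre>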
=== The Best Test Data is Random Test Data ===

by Fraser Tweedale

Testing accounts for a large portion of the cost of software development. Tools to automate testing allow for more thorough testing in less time. ''Property-based testing'' provides ways to define expected properties of functions under test, and mechanisms to automatically check whether those properties hold in a large number of cases - or whether a property can be falsified.

This talk will establish the motivations and explore the mechanisms of property-based testing. Concepts will be demonstrated primarily using Haskell's ''QuickCheck'' library. We will also review property-based testing solutions for other popular languages.

The talk will conclude with a discussion of the limitations of property-based testing, and alternative approaches.
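
The talk's examples use Haskell's QuickCheck; purely as a taste of the style in another popular language, here is a minimal sketch using Python's <code>hypothesis</code> library (runnable with pytest; the properties are illustrative):

<pre>
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_reverse_twice_is_identity(xs):
    # Property: reversing a list twice yields the original list,
    # checked automatically across a large number of generated cases.
    assert list(reversed(list(reversed(xs)))) == xs

@given(st.text())
def test_utf8_round_trip(s):
    # Property: decode(encode(x)) == x for any text input.
    assert s.encode("utf-8").decode("utf-8") == s
</pre>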
  
=== Developers, sysadmins, and everyone else: Why you should be using Serverspec ===
by Sarah Kowalik & Jesse Reynolds

"Congratulations! You’ve shipped a package to your users! Your responsibilities as a maintainer are now complete!" – nobody, ever.

Anyone who has shipped software knows that a release is just the beginning of another wave of pain on the beach of software maintainership. Does the package actually install? Does the package contain what’s intended? Does your software run as expected? These are all questions that most projects can’t answer easily at release time without significant manual testing.

Wouldn’t it be great if you could simulate the steps users will go through to get your software installed, and verify you meet their expectations?

Enter Serverspec, a simple test harness for behaviour testing of running systems. Serverspec helps you do outside-in testing of your software across multiple platforms, by providing simple shortcuts for common tests like “is this service running with these arguments?”, “does the application boot with this line in the configuration file?”, or “have I opened a back door to attackers by creating a world-writable log file?”.

In this talk we’ll explore how to write and run automated Serverspec tests for your packages at release time, how to maintain and improve quality across multiple target install platforms, and ways you can save time when testing for regressions, behaviour, and the user install and configuration experience.
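
Serverspec itself is built on Ruby's RSpec; purely to convey the outside-in flavour, here is a rough Python analogue of the checks quoted above (paths and service names are placeholders, and this is not Serverspec's API):

<pre>
import os
import stat
import subprocess

def test_service_is_running():
    # "is this service running?": ask the init system directly.
    out = subprocess.run(["systemctl", "is-active", "nginx"],
                         capture_output=True, text=True)
    assert out.stdout.strip() == "active"

def test_config_has_expected_line():
    # "does the application boot with this line in the configuration file?"
    with open("/etc/nginx/nginx.conf") as f:
        assert "worker_processes" in f.read()

def test_log_not_world_writable():
    # "have I opened a back door by creating a world-writable log file?"
    mode = os.stat("/var/log/nginx/error.log").st_mode
    assert not mode & stat.S_IWOTH
</pre>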
=== Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System ===

by Matthew Treinish

The OpenStack project's CI system operates at a scale that is too large to feasibly track all the test results, or to monitor the system's health at any given time, by hand. To automate this there are several programmatic resources available to track the results from the system, including tools like logstash and graphite. Using and building off these tools has been invaluable, especially as the OpenStack project continues to grow. However, the granularity of data available was still mostly limited to the individual test job, which limited the type of analysis that could be done.

For some time there was a desire to track the performance of individual tests over a longer period of time, both to provide data for changes to testing policy and to watch for performance regressions. However, there was no mechanism available to aggregate all the necessary data, nor a programmatic interface to automate extracting useful trends from it. To fill this gap, subunit2sql was created. Subunit2sql takes the subunit output from the test runs, parses it, and stores the results in a SQL database. Having the result data easily available in a database at the granularity of individual tests has enabled the OpenStack project to better track the project's development and quality over time.

This talk will cover a basic outline of the subunit2sql architecture and data model, how it's deployed and used in the OpenStack CI environment, and the results and advantages that the OpenStack project has seen from using a subunit2sql DB to track test results.
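
As a minimal sketch of the idea (not subunit2sql's actual schema; the column names here are invented), persisting one row per test result makes per-test trends a simple query. This uses Python's built-in sqlite3:

<pre>
import sqlite3

conn = sqlite3.connect("results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_results (
        run_id   TEXT,   -- one CI job run
        test_id  TEXT,   -- one individual test
        status   TEXT,   -- 'success', 'fail' or 'skip'
        duration REAL    -- seconds
    )""")

def record(run_id, test_id, status, duration):
    # Called once per test result parsed out of the subunit stream.
    conn.execute("INSERT INTO test_results VALUES (?, ?, ?, ?)",
                 (run_id, test_id, status, duration))
    conn.commit()

# Per-test granularity turns "which tests are slowest?" into plain SQL:
slowest = conn.execute("""
    SELECT test_id, AVG(duration) AS avg_seconds
    FROM test_results WHERE status = 'success'
    GROUP BY test_id ORDER BY avg_seconds DESC LIMIT 10""").fetchall()
</pre>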
=== Corpus collapsum: Partition tolerance of Galera put to test ===

by Raghavendra Prabhu

All real-world networks tend to fail every now and then. Failures are common; however, resilience of distributed systems to partitions is not nearly ubiquitous enough. Even though partition tolerance is an integral part of Brewer's CAP theorem, distributed systems, even the ones slated to fulfill the 'P' in CAP, fail to meet it to the desired level or deterministically enough. This, unfortunately, is not an exception but has become more commonplace, as recent research shows [1].

Galera [2] adds synchronous replication to MySQL/PXC (Percona XtraDB Cluster) [3] through the wsrep replication API. Synchronous replication requires not just a quorum but a consensus. In a noisy environment, a delayed consensus can delay a commit, adding significant latency, or even partition the network into multiple non-primary components and a single primary component (if plausible). Hence, building resilience is not an expectation but a fundamental requirement.

This talk is about testing the partition immunity of Galera. Docker and Netem/tc (traffic control) are used prominently here. Netem is important for simulating real-world failure events such as packet loss, delay, corruption, duplication and reordering, and for modelling real-world failure distributions like Pareto, paretonormal and uniform. Docker, or containers in general, are essential for simulating multiple nodes that can be built at runtime, brought up and torn down easily, and have their networks and flows altered elegantly as required, with quick horizontal scaling; performance is also kept in mind when choosing containers over full virtualization and other approaches. Sysbench OLTP is used for load generation, though RQG (random query generator) can also be used here for advanced fuzz testing.

Salient observations discussed will be:

* Application of WAN segment-aware loss coefficients to virtual network interfaces.
* Varying reconciliation periods after network noise is withdrawn.
* Multi-node loss and short-lived noise bursts vis-à-vis single-node loss and a longer noise envelope.
* Full-duplex linking of containers with dnsmasq.
* Effects of non-network actors like slow/fast disks on fsync.
* Round-robin request distribution to nodes with/without the nodes with network failures in the chain.
* Pre- and post-testing sanity checks.
* Log collection and analysis.
* Horizontal scaling of test nodes and issues with Docker/namespaces.

To conclude, all the ins and outs of partition tolerance testing with Docker and Netem for Galera will be discussed. Other similar tools/frameworks like jepsen [4] will also be discussed, and Galera/EVS (extended virtual synchrony) will be compared to other consensus protocols like Paxos and Raft. Results of testing - the addition of auto_evict to Galera - will also be highlighted at the end.

[1] https://queue.acm.org/detail.cfm?id=2655736

[2] http://galeracluster.com/products/

[3] http://www.percona.com/software/percona-xtradb-cluster

[4] https://github.com/aphyr/jepsen
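
To make the fault-injection step concrete, here is a hedged sketch of driving netem inside a container from Python; the container name, loss rate and delays are illustrative, and the container needs the NET_ADMIN capability:

<pre>
import subprocess

def netem(container, *args):
    # Replace the root qdisc on the container's eth0 with a netem discipline.
    subprocess.run(["docker", "exec", container, "tc", "qdisc",
                    "replace", "dev", "eth0", "root", "netem", *args],
                   check=True)

# Inject 5% packet loss plus 100ms +/- 20ms delay on one Galera node:
netem("galera-node-1", "loss", "5%", "delay", "100ms", "20ms")

# ...run sysbench OLTP against the cluster and watch quorum behaviour...

# Withdraw the noise and observe the reconciliation period:
subprocess.run(["docker", "exec", "galera-node-1", "tc", "qdisc",
                "del", "dev", "eth0", "root"], check=True)
</pre>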
=== Kickstart new developers using Docker ===

by Sven Dowideit

Docker containers allow new developers to make quick contributions to your project without needing to first learn how to set up an environment.

These new developers can be sure that the environment they're using to test their changes is the same as the one used by everyone else, so asking for help is going to be easy.

Next up, you can use the same Docker-built environment to run tests... continuously - and suddenly, you're building your releases in the same, totally repeatable way.
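
As a hedged sketch of that workflow (the image name and test command are placeholders), a new contributor's entire setup can collapse to two steps, scripted here in Python:

<pre>
import os
import subprocess

# Build the project's dev image: every contributor gets the same environment.
subprocess.run(["docker", "build", "-t", "myproject-dev", "."], check=True)

# Mount the working tree and run the test suite inside the container.
# The identical invocation works on a laptop and in CI.
subprocess.run(["docker", "run", "--rm",
                "-v", f"{os.getcwd()}:/src", "-w", "/src",
                "myproject-dev", "make", "test"],
               check=True)
</pre>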
=== Large Scale Identification of Race Conditions (In OpenStack CI) ===

by Joe Gordon

Does your project have a CI system that suffers from an ever-growing set of race conditions? We have the tool for you: it has enabled increased velocity despite project growth.

When talking about the GNU HURD kernel, Richard Stallman once said, “it turned out that debugging these asynchronous multithreaded programs was really hard.” With 30+ asynchronous services developed by over 1000 people, the OpenStack project is an object lesson in this problem. One of the consequences is that race conditions often leak into code with no obvious defect. Just before OpenStack’s most recent stable release we were pushing the boundaries of what was possible with manual tracking of race conditions. To address this problem we have developed an ElasticSearch-based toolchain called “elastic-recheck.” This helps us track race conditions so developers can fix them, and identify whether a CI failure is related to the failed patch or due to a known pre-existing race condition. Automated tracking of over 70 specific race conditions has allowed us to quickly determine which bugs are hurting us the most, allowing us to prioritize debugging efforts. Immediate and automated classification of test failures into genuine and false failures has saved countless hours that would have been wasted digging through the over 350MB of logs produced by a single test run.
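
The core idea can be sketched as follows (a hedged illustration, not elastic-recheck's actual code; the index, field names and signature query are invented): match a failed run's indexed logs against a library of known race-condition signatures.

<pre>
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# One Lucene query-string "signature" per tracked race condition.
KNOWN_RACES = {
    "bug/123456": 'message:"Timed out waiting for instance to become ACTIVE"',
}

def classify(build_uuid):
    """Return the known bug matching this failed run, or None if genuine."""
    for bug, signature in KNOWN_RACES.items():
        result = es.search(index="logstash-*",
                           q=f'build_uuid:"{build_uuid}" AND {signature}')
        if result["hits"]["hits"]:
            return bug   # known pre-existing race condition
    return None          # likely a real failure introduced by the patch
</pre>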
=== Gerrit & Gertty: A Daily Habit ===

by Anita Kuno

Taking the functionality of Gerrit and turning it into a workflow friendly enough to become a regular part of one's workday is the focus of this talk. Many folks are familiar with Gerrit but don't have the pieces necessary to achieve the high review output of some Gerrit reviewers. This presentation will share some of the ways high-volume reviewers use Gerrit and Gertty, with the intention of helping those who wish to increase their Gerrit throughput.
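
Gertty works against Gerrit's REST API; as a small illustration (the server URL is a placeholder), fetching open changes looks like this:

<pre>
import json
import requests

resp = requests.get("https://review.example.org/changes/",
                    params={"q": "is:open", "n": 25})

# Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; drop that line.
changes = json.loads(resp.text.split("\n", 1)[1])

for change in changes:
    print(change["_number"], change["status"], change["subject"])
</pre>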
=== Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques. ===

by Dr. Jason Cohen

It would seem that, despite the exponential growth in security products, security services, security companies, security certifications, and general interest in the security topic, we are still bombarded with a constant parade of security vulnerability disclosures on a seemingly daily basis. It turns out that we in the Open Source community can no longer shake a disapproving finger at the closed-source giants without also pointing to ourselves and asking what we can do better. Issues like the OpenSSL Heartbleed vulnerability and several Drupal issues over the past year are a case in point: major security flaws in common Open Source software. They also demonstrate that, in this era of increasingly modular code development and reuse of common libraries, we need to begin considering the impact of potential flaws in code we assume to be secure simply because of its widespread use and Open Source nature.

So, what do we do? Although it is not a magical solution or a panacea, implementing Security Development Life-cycle best practices and principles for each and every software development endeavor we undertake (whether for work or for an Open Source project) can go a long way towards reducing the potential for common security flaws. In addition, there is no reason that Static Code Analysis should not be part of every development effort. We are still seeing obvious, easy-to-fix flaws in modern source code: input sanitization issues, Cross-Site Scripting, buffer overflows, and many other known weaknesses still represent the bulk of security issues present. Static Code Analysis can help catch many of these issues before code makes it out of the developer's hands. We can also perform our own analysis on libraries that we wish to leverage, to help determine the risk ourselves.

In this talk, we will explore some common best-practice Security Development Life-cycle theory and how we can integrate it into modern code development schemes such as Continuous Integration and Agile. We will also look at how to integrate Static Code Analysis tools into the development process, including a demo of HP Fortify with Eclipse and an example analysis of a common Open Source code base.
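
As a hedged illustration of the kind of input-sanitization flaw static analysis flags (the table and column names are invented), compare an injectable query with its parameterized fix:

<pre>
import sqlite3

conn = sqlite3.connect("app.db")

def find_user_unsafe(name):
    # FLAW (SQL injection): with name = "x' OR '1'='1", the WHERE clause
    # matches every row. Static analysis flags this tainted-input path.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # FIX: a parameterized query; the driver quotes the value safely.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
</pre>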

== Campus accommodation ==

{{Trail|Delegate Information|Accommodation}}

Registration Information: http://linux.conf.au/register/accommodation

== Late night arrivals ==

If you are arriving at your accommodation outside of their regular reception hours, please leave your details here so we can organise out-of-hours reception.

=== Uni Halls ===

'''Reception hours: 8am - 8pm.'''

'''After hours phone: 027 676 4862'''

(your name, date & time of arrival)

* James (Ender) Brown, Fri 09/01, ~10:30PM
* Jonathan Woithe, Sat 10th, 20:00-21:00 (plane arrives 18:55)
* Himangi Saraogi, Sun 11th, ~16:00 (plane arrives 14:50)
* Douglas Bagnall, Sun 11th, ~21:30 (plane arrives 20:30)
* Alastair D'Silva ([https://twitter.com/evildeece @evildeece], [mailto:alastair@d-silva.org Alastair D'Silva]), Mon 12th, 01:00 (plane arrives 23:35, JQ205 SYD to AKL)

=== Carlaw Park ===

'''Reception hours: 8:30am - 5pm and 6pm - 7pm Monday to Friday, 11am - 1pm Saturday and Sunday.'''

'''After hours phone: 027 707 9813'''

(your name, date & time of arrival)

* Clinton Roy, Fri 9th, ~midnight (plane arrives 22:45)
* Eyal Lebedinsky, Sat 10th, ~20:30 (plane arrives 18:55)
* [[user:Daniel Bryan|Daniel Bryan]], Sun 11th, ~15:30 (plane arrives 14:00)
* Hamish Coleman, Sun 11th, ~14:55 (plane arrives 12:55)
* Ewen McNeill, Sun 11th, ~15:30 (plane arrives 14:15)
* Mark Ellem, Sun 11th, ~16:30 (plane arrives 15:10)
* Mark Jessop, Sun 11th, ~19:00 (plane arrives 17:15, then customs + travel)
* Chris Edsall, Sun 11th, ~20:00 (plane arrives 19:00)
* Peter Vesely, Sun 11th, ~20:30 (plane arrives 18:55), early checkout on Sat 17th ~8:00am
* Paul Warren, Sun 11th, ~22:30 (plane arrives 20:50)
* [mailto:mike.carden@gmail.com Mike Carden], Sun 11th, ~16:30 (plane arrives 14:55)

== Clothes Washing ==

University Hall: Yes

"A large coin operated laundry is located on the basement level, equipped with plenty of washers and dryers. (Washing powder is not supplied, but can be purchased via the laundry vending machine or at nearby convenience stores). University Hall Apartments also have laundry and lounge facilities." Source: Facilities section of http://www.accommodation.auckland.ac.nz/en/ac-visitors/ac-summer-2/ac-visiting-students.html#59385b0623e91baf6d145a6244e4ac5b


Carlaw Park: Communal coin-operated laundries are available in room 833.

Non-campus laundromat: http://www.bubbleslaundromat.co.nz/

137 Hobson Street, Auckland Central. Self-service laundromat, open 6am to midnight every day of the year. Parking out front.

* Large washing machine load - $6
* Commercial extra large washing machine load - $10
* Commercial dryers - $2 per 10 mins
* Normal household load, wash & dry - $10 to $12 total

Note changer onsite to change notes into $2 coins for use in all machines. Soap powder dispenser available.