== Developer, Testing, Release and Continuous Integration Automation Miniconf 2015 ==

This miniconf is all about improving the way we produce, collaborate on, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher-quality software:

* code review tools and techniques (e.g. Gerrit)
* continuous integration tools (e.g. Jenkins)
* CI techniques (e.g. gated trunk, Zuul)
* testing tools and techniques (e.g. subunit, fuzz testing tools)
* release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship
* applying CI in your workplace/project
  
== Schedule ==
 
 
* 10:40 Nick Coghlan - [[#Beaker's Hardware Inventory System|Beaker's Hardware Inventory System]]
 
* 11:10 Steve Kowalik - [[#Testing the cloud in the cloud|Testing the cloud in the cloud]]
 
* 11:35 Fraser Tweedale - [[#The Best Test Data is Random Test Data|The Best Test Data is Random Test Data]]
 
* 12:00 Sarah Kowalik & Jesse Reynolds - [[#Developers, sysadmins, and everyone else: Why you should be using Serverspec|Developers, sysadmins, and everyone else: Why you should be using Serverspec]]
 
* 13:20 Matthew Treinish - [[#Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System|Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System]]
 
* 13:45 Raghavendra Prabhu - [[#Corpus collapsum:  Partition tolerance of Galera put to test|Corpus collapsum:  Partition tolerance of Galera put to test]]
 
* 14:15 Sven Dowideit - [[#Kickstart new developers using Docker|Kickstart new developers using Docker]]
 
* 14:35 Joe Gordon - [[#Large Scale Identification of Race Conditions (In OpenStack CI)|Large Scale Identification of Race Conditions (In OpenStack CI)]]
 
* 15:40 Anita Kuno - [[#Gerrit & Gertty: A Daily Habit|Gerrit & Gertty: A Daily Habit]]
 
* 16:10 Dr. Jason Cohen - [[#Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques.|Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques.]]
 
* 16:35 Discussion / Q&A
 
  
=== Beaker's Hardware Inventory System ===

by Nick Coghlan

Ever wondered what it might take to track down and resolve a kernel bug that only affects one particular variant of one particular processor architecture from one particular vendor? Step one is going to be actually getting hold of a suitable machine, and for that you'll need an inventory system that provides that level of detail.

Red Hat's Beaker integration and testing system provides such a service. This talk will describe the inventory-gathering component of Beaker, including the transition from using "smolt" to "lshw" for this component, and how the Beaker team is able to use access to more esoteric hardware to enhance the capabilities of lshw.
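Purely as a hedged illustration of the kind of data involved (this is not Beaker's code): lshw can emit machine-readable JSON, which an inventory service could ingest to answer questions like the CPU-variant one above. A minimal Python sketch, assuming lshw is installed and run with sufficient privileges:

<pre>
# Sketch: collect CPU details from lshw's JSON output. Assumes `lshw` is installed;
# field names can vary between lshw versions, so treat this as illustrative only.
import json
import subprocess

def cpu_inventory():
    out = subprocess.run(["lshw", "-class", "cpu", "-json"],
                         capture_output=True, text=True, check=True).stdout
    data = json.loads(out)
    nodes = data if isinstance(data, list) else [data]
    return [{"product": n.get("product"),
             "vendor": n.get("vendor"),
             "capabilities": sorted(n.get("capabilities", {}))}
            for n in nodes]

if __name__ == "__main__":
    for cpu in cpu_inventory():
        print(cpu)
</pre>
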
=== Testing the cloud in the cloud ===

by Steve Kowalik

OpenStack makes heavy use of CI, spinning up instances to run tests and then destroying them. This gets much harder when what you're trying to test, on a cloud, is whether your cloud can deploy a cloud. In this talk I'll cover what we're currently doing with CI in TripleO (OpenStack on OpenStack, that is, using OpenStack to deploy OpenStack) and what our future plans for CI are.

=== The Best Test Data is Random Test Data ===

by Fraser Tweedale

Testing accounts for a large portion of the cost of software development. Tools to automate testing allow for more thorough testing in less time. ''Property-based testing'' provides ways to define expected properties of functions under test, and mechanisms to automatically check whether those properties hold in a large number of cases - or whether a property can be falsified.

This talk will establish the motivations and explore the mechanisms of property-based testing. Concepts will be demonstrated primarily using Haskell's ''QuickCheck'' library. We will also review property-based testing solutions for other popular languages.

The talk will conclude with a discussion of the limitations of property-based testing, and alternative approaches.
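The talk itself uses Haskell's QuickCheck; purely as an analogous sketch in Python (using the hypothesis library, our assumption, not something the talk covers), a property check might look like this:

<pre>
# Sketch: a property-based test with the `hypothesis` library (analogous to QuickCheck).
# Property under test: encoding then decoding returns the original value.
from hypothesis import given
from hypothesis import strategies as st

def encode(items):
    # Hypothetical run-length encoder, used only for illustration.
    runs = []
    for item in items:
        if runs and runs[-1][0] == item:
            runs[-1] = (item, runs[-1][1] + 1)
        else:
            runs.append((item, 1))
    return runs

def decode(runs):
    return [item for item, count in runs for _ in range(count)]

@given(st.lists(st.integers()))
def test_roundtrip(items):
    # hypothesis generates many random lists and shrinks any failing case.
    assert decode(encode(items)) == items
</pre>

Run under pytest, hypothesis generates hundreds of random inputs and, if the property is falsified, reports a minimal counterexample.
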
=== Developers, sysadmins, and everyone else: Why you should be using Serverspec ===

by Sarah Kowalik & Jesse Reynolds

"Congratulations! You’ve shipped a package to your users! Your responsibilities as a maintainer are now complete!" – nobody, ever.

Anyone who has shipped software knows that a release is just the beginning of another wave of pain on the beach of software maintainership. Does the package actually install? Does the package contain what’s intended? Does your software run as expected? These are all questions that most projects can’t answer easily at release time without significant manual testing.

Wouldn’t it be great if you could simulate the steps users will go through to get your software installed, and verify that you meet their expectations?

Enter Serverspec, a simple test harness for behaviour testing of running systems. Serverspec helps you do outside-in testing of your software across multiple platforms, by providing simple shortcuts for common tests like “is this service running with these arguments?”, “does the application boot with this line in the configuration file?”, or “have I opened a back door to attackers by creating a world-writable log file?”.

In this talk we’ll explore how to write and run automated Serverspec tests for your packages at release time, how to maintain and improve quality across multiple target install platforms, and ways you can save time when testing for regressions, behaviour, and the user install and configuration experience.

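Serverspec itself is written in Ruby on top of RSpec. Purely to illustrate the same outside-in style of checks in Python (using the testinfra library, which is our substitution here, not part of the talk), tests of an installed package might look like this sketch; service name, config path and file mode are hypothetical:

<pre>
# Sketch: outside-in checks of a running system with testinfra (a Python analogue
# of Serverspec). Run with pytest; the `host` fixture is provided by testinfra.
def test_service_is_running(host):
    svc = host.service("myapp")
    assert svc.is_running
    assert svc.is_enabled

def test_config_contains_listen_line(host):
    assert host.file("/etc/myapp/myapp.conf").contains("listen 127.0.0.1:8080")

def test_log_file_is_not_world_writable(host):
    assert host.file("/var/log/myapp.log").mode == 0o640
</pre>
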
=== Subunit2SQL: Tracking Individual Test Results in OpenStack's CI System ===

by Matthew Treinish

The OpenStack project's CI system operates at a scale that is too large to feasibly track all the test results, or to monitor the system's health at any given time, by hand. To automate this there are several programmatic resources available to track the results from the system, including tools like logstash and graphite. Using and building off these tools has been invaluable, especially as the OpenStack project continues to grow. However, the granularity of data available was still mostly limited to the individual test job, which limited the type of analysis that could be done.

For some time there was a desire to track the performance of individual tests over a longer period of time, to provide data for changes to testing policy and also to watch for performance regressions. However, there was no mechanism available to aggregate all the necessary data, nor a programmatic interface to automate extracting useful trends from it. To fill this gap subunit2sql was created. Subunit2sql takes the subunit output from test runs, parses it, and stores the results in a SQL database. Having the result data easily available in a database, at the granularity of individual tests, has enabled the OpenStack project to better track the project's development and quality over time.

This talk will cover a basic outline of the subunit2sql architecture and data model, how it is deployed and used in the OpenStack CI environment, and an overview of the results and advantages that the OpenStack project has seen from using a subunit2sql DB to track test results.
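As a hedged sketch of the general idea (this is not subunit2sql's actual schema or code): parse per-test results out of a run and store one row per result so that individual tests can be queried later. The iter of results is assumed to come from a real subunit parser; the helper and schema below are hypothetical.

<pre>
# Sketch: store per-test results in SQLite so individual tests can be tracked
# over time. The `results` iterable stands in for output of a subunit parser.
import sqlite3

def store_run(db_path, run_id, results):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS test_results (
                        run_id TEXT, test_id TEXT, status TEXT, duration REAL)""")
    conn.executemany(
        "INSERT INTO test_results VALUES (?, ?, ?, ?)",
        [(run_id, r["test_id"], r["status"], r["duration"]) for r in results])
    conn.commit()
    conn.close()

# Example query across runs, to spot per-test performance regressions:
# SELECT test_id, AVG(duration) FROM test_results
#   WHERE status = 'success' GROUP BY test_id;
</pre>
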
=== Corpus collapsum: Partition tolerance of Galera put to test ===

by Raghavendra Prabhu

All real-world networks tend to fail every now and then. Failures are common; however, resilience of distributed systems to partitions is not nearly ubiquitous enough. Even though partition tolerance is an integral part of Brewer's CAP theorem, distributed systems - even the ones slated to fulfill the 'P' in CAP - often fail to meet it to the desired level, or deterministically enough. Unfortunately this is not an exception, but has become more commonplace, as recent research shows [1].

Galera [2] adds synchronous replication to MySQL/PXC (Percona XtraDB Cluster) [3] through the wsrep replication API. Synchronous replication requires not just a quorum but a consensus. In a noisy environment, a delayed consensus can delay a commit, adding significant latency, or even partition the network into multiple non-primary components and a single primary component (if plausible). Hence, building resilience is not an expectation but a fundamental requirement.

This talk is about testing the partition immunity of Galera. Docker and netem/tc (traffic control) are used prominently here. Netem is important for simulating real-world failure events - packet loss, delay, corruption, duplication, reordering et al. - and for modelling real-world failure distributions such as pareto, paretonormal and uniform. Docker, or containers in general, are essential for simulating multiple nodes that can be built at runtime, brought up easily, torn down, and have their networks and flows altered elegantly as and when required, as well as for quick horizontal scaling; performance is also kept in mind when choosing containers over full virtualization and other approaches. Sysbench OLTP is used for load generation, though RQG (random query generator) can also be used here for advanced fuzz testing.

Salient observations discussed will be:

* Application of WAN segment-aware loss coefficients to virtual network interfaces.
* Varying reconciliation periods after network noise is withdrawn.
* Multi-node loss and short-lived noise bursts vis-à-vis single-node loss and a longer noise envelope.
* Full-duplex linking of containers with dnsmasq.
* Effects of non-network actors, like slow/fast disks, on fsync.
* Round-robin request distribution to nodes with/without the nodes with network failures in the chain.
* Pre- and post-testing sanity tests.
* Log collection and analysis.
* Horizontal scaling of test nodes and issues with Docker/namespaces.

To conclude, all the ins and outs of partition-tolerance testing with Docker and netem for Galera will be discussed. Other similar tools/frameworks, such as jepsen [4], will also be discussed, and Galera/EVS (extended virtual synchrony) will be compared to other consensus protocols like Paxos and Raft. Results of the testing - the addition of auto_evict to Galera - will also be highlighted at the end.

[1] https://queue.acm.org/detail.cfm?id=2655736
[2] http://galeracluster.com/products/
[3] http://www.percona.com/software/percona-xtradb-cluster
[4] https://github.com/aphyr/jepsen
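As a hedged illustration of the netem side of such a setup (not the talk's actual harness; container names and interface are hypothetical, and tc needs the NET_ADMIN capability inside the container), noise can be injected into one node and later withdrawn:

<pre>
# Sketch: inject and withdraw netem network noise inside Docker containers.
# Container name and interface are hypothetical; tc requires NET_ADMIN in the container.
import subprocess

def tc(node, *args):
    subprocess.run(["docker", "exec", node, "tc"] + list(args), check=True)

def add_noise(node, iface="eth0"):
    # 100ms +/- 20ms delay drawn from a paretonormal distribution, plus 5% packet loss.
    tc(node, "qdisc", "add", "dev", iface, "root", "netem",
       "delay", "100ms", "20ms", "distribution", "paretonormal", "loss", "5%")

def remove_noise(node, iface="eth0"):
    tc(node, "qdisc", "del", "dev", iface, "root", "netem")

# Example: degrade one Galera node during a sysbench run, then reconcile.
# add_noise("galera-node2"); ...run load...; remove_noise("galera-node2")
</pre>
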
=== Kickstart new developers using Docker ===

by Sven Dowideit

Docker containers allow new developers to make quick contributions to your project without needing to first learn how to set up an environment.

These new developers can be sure that the environment they're using to test their changes is the same as the one used by everyone else, so asking for help is going to be easy.

Next up, you can use the same Docker-built environment to run tests continuously - and suddenly, you're building your releases in the same totally repeatable way.

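As a hedged sketch of that idea (project and image names are hypothetical, not from the talk), the same image a new contributor uses for development can drive the test run via nothing more than the docker CLI:

<pre>
# Sketch: build the project's dev image and run the test suite inside it,
# so contributors and CI share one environment. Names are hypothetical.
import subprocess

IMAGE = "myproject-dev"

def build_image():
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

def run_tests():
    # --rm removes the container afterwards; the image is assumed to contain the code.
    subprocess.run(["docker", "run", "--rm", IMAGE, "python", "-m", "pytest"], check=True)

if __name__ == "__main__":
    build_image()
    run_tests()
</pre>
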
=== Large Scale Identification of Race Conditions (In OpenStack CI) ===

by Joe Gordon

Does your project have a CI system that suffers from an ever-growing set of race conditions? We have the tool for you: it has enabled increased velocity despite project growth.

When talking about the GNU HURD kernel, Richard Stallman once said, “it turned out that debugging these asynchronous multithreaded programs was really hard.” With 30+ asynchronous services developed by over 1000 people, the OpenStack project is an object lesson in this problem. One of the consequences is that race conditions often leak into code with no obvious defect. Just before OpenStack’s most recent stable release we were pushing the boundaries of what was possible with manual tracking of race conditions. To address this problem we have developed an ElasticSearch-based toolchain called “elastic-recheck.” This helps us track race conditions so that developers can fix them, and identify whether a CI failure is caused by the patch being tested or by a known pre-existing race condition. Automated tracking of over 70 specific race conditions has allowed us to quickly determine which bugs are hurting us the most, allowing us to prioritize debugging efforts. Immediate and automated classification of test failures into genuine and false failures has saved countless hours that would otherwise have been wasted digging through the over 350 MB of logs produced by a single test run.

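A hedged sketch of the classification idea (the bug signatures and log path here are invented, and elastic-recheck itself matches signatures with ElasticSearch queries rather than plain regexes):

<pre>
# Sketch: classify a failed CI job by matching its logs against known race-condition
# signatures. Signatures and log path are invented for illustration.
import re

KNOWN_RACES = {
    "bug/1234567": r"Timed out waiting for instance .* to become ACTIVE",
    "bug/7654321": r"Connection reset by peer during volume attach",
}

def classify_failure(log_path):
    text = open(log_path, encoding="utf-8", errors="replace").read()
    hits = [bug for bug, pattern in KNOWN_RACES.items() if re.search(pattern, text)]
    return hits or ["unclassified: possibly a genuine failure of this patch"]

# A failure matching a known signature can be rechecked automatically;
# an unclassified one deserves a developer's attention.
</pre>
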
=== Gerrit & Gertty: A Daily Habit ===

by Anita Kuno

The focus of this talk is taking the functionality of Gerrit and turning it into a workflow friendly enough to become a regular part of one's workday. Many folks are familiar with Gerrit but don't have the pieces necessary to achieve the high review output of some Gerrit reviewers. This presentation will share some of the ways high-volume reviewers use Gerrit and Gertty, with the intention of helping those who wish to increase their Gerrit throughput.

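As a hedged aside (not part of the talk): the Gerrit REST API that tools like Gertty build on can also be scripted directly, for example to list open changes. The host below is hypothetical; Gerrit prefixes its JSON responses with a <code>)]}'</code> guard line that has to be stripped.

<pre>
# Sketch: list open changes via Gerrit's REST API. Host is hypothetical;
# Gerrit prepends ")]}'" to JSON responses to defeat XSSI, so strip it first.
import json
import urllib.request

def open_changes(host="https://review.example.org"):
    with urllib.request.urlopen(host + "/changes/?q=status:open") as resp:
        body = resp.read().decode("utf-8")
    payload = body.split("\n", 1)[1] if body.startswith(")]}'") else body
    return [(c["_number"], c["subject"]) for c in json.loads(payload)]
</pre>
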
=== Incorporating the Security Development Life Cycle and Static Code Analysis into our Everyday Development Lives: An Overview of Theory and Techniques. ===

by Dr. Jason Cohen

It would seem that, despite the exponential growth in security products, security services, security companies, security certifications, and general interest in the security topic, we are still bombarded with a constant parade of security vulnerability disclosures on a seemingly daily basis. It turns out that we in the Open Source community can no longer shake a disapproving finger at the closed-source giants without also pointing to ourselves and asking what we can do better. Issues like the OpenSSL Heartbleed vulnerability and several Drupal issues over the past year are cases in point of major security flaws in common Open Source software. They also demonstrate that, in this era of increasingly modular code development and reuse of common libraries, we need to begin considering the impact of potential flaws in code we assume to be secure simply because of its widespread use and Open Source nature.

So, what do we do? Although it is not a magical solution or a panacea, implementing Security Development Life-cycle best practices and principles for each and every software development endeavor we undertake (whether it is for your job or for an Open Source project) can go a long way towards reducing the potential for common security flaws. In addition, there is no reason that Static Code Analysis should not be part of every development effort. We are still seeing obvious, easy-to-fix flaws in modern source code: input sanitization issues, cross-site scripting, buffer overflows, and many other known issues still represent the bulk of security problems present. Static Code Analysis can help catch many of these unnoticed issues before code makes it out of the developer's hands, and we can perform our own analysis on libraries that we wish to leverage, to help determine the risk ourselves.

In this talk, we will explore some common best-practice Security Development Life-cycle theory and how we can integrate it into modern development schemes such as Continuous Integration and Agile. We will also look at how to integrate Static Code Analysis tools into the development process, including a demo of HP Fortify with Eclipse and an example analysis of a common Open Source code base.
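As a hedged illustration of the kind of obvious, easy-to-fix flaw static analysis catches (this example is ours, not from the talk): unsanitized input interpolated into a SQL statement, and the parameterized fix.

<pre>
# Sketch: the sort of injection flaw a static analyser flags, and its fix.
import sqlite3

def find_user_unsafe(conn, username):
    # FLAW: user input is interpolated straight into the SQL text (injection risk);
    # static analysers flag this pattern.
    return conn.execute("SELECT id FROM users WHERE name = '%s'" % username).fetchall()

def find_user_safe(conn, username):
    # FIX: a parameterized query keeps the input out of the SQL text.
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()
</pre>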

== Delegate Information ==

=== Accommodation ===

: <small>''Main article: [[Accommodation]]''</small>

<section begin=accom />* [[Campus accommodation]] - Accommodation options for conference attendees<section end=accom />

=== Childcare ===

: <small>''Main article: [[Childcare]]''</small>

<section begin=child /><section end=child />

=== Eating and Drinking ===

: <small>''Main article: [[Eating and Drinking]]''</small>

<section begin=food />* [[Food options for restricted diets]]<section end=food />

=== Shops and Supplies ===

: <small>''Main article: [[Shops and Supplies]]''</small>

<section begin=shops /><section end=shops />

=== Recreation, Fitness and Fun Activities ===

* [[Recreation and Fitness page]] - Information for people who run, jog, walk, swim. Swimming pool + gym fees for casual users.
* [[Fun activities]]

=== Communication ===

: <small>''Main article: [[Communication]]''</small>

<section begin=comms />* [[Emergency contacts]] - Emergency Services, Hospitals, etc
* [[Internet & Network Access]]
* [[Phone and mobile data]]
* [[:Category:Help_Files|Wiki Help]] - formatting and editing with wikitext<section end=comms />

=== Medical Care ===

: <small>''Main article: [[Medical Care]]''</small>

<section begin=medical /><section end=medical />