
Virtual testing meeting on November 19, 2014


# Attendees

  • Hauke Petersen (FU Berlin)
  • Oleg Hahm (INRIA)
  • Ludwig Ortmann (FU Berlin)
  • Thomas Eichinger (FU Berlin)
  • Kaspar Schleiser
  • Teemu Hakala (ELL-I coop)
  • Asif Sardar (ELL-I coop)
  • Gaetan Harter (INRIA?)

# Minutes

  • The agenda was as follows:
    • (Distributed) hardware-based testing, including ELL-I's testing approach
    • CI systems in general
    • Code analyzers
    • Simulations and virtualization as part of the testing system
  • An additional topic discussed was:
    • Documentation and use cases

## (Distributed) hardware-based testing, including ELL-I's testing approach

  • Philipp presented a concept for distributed hardware-based testing for RIOT.
    • Architecturally, this system consists of a master CI server and multiple slave CI systems. Boards can be attached to the slave systems. The master instructs the slaves to run tests on their attached boards, and the slaves then report their results back to the master (a first sketch of this flow follows this list).
    • The architecture is distributed because we cannot possibly provide a test setup for every board RIOT supports. This way, organizations that want to support us (or want to help us support their product / board) can do so by simply connecting one or more boards to an internet-connected Linux system running our testing software.
  • In order to reduce the load on the CI slaves, Kaspar proposed that certain hardware tests only be started when developers request them via special commands in GitHub PR comments (a second sketch after this list shows how such a command could be parsed). Something like:
CI-SYSTEM: test on msba2, telosb, avsextrem.
  • Asif and Teemu presented the testing system currently in use at ELL-I. The sketchboard Asif created provides a good overview of the system:

  • The architecture of our proposed testing system could look as follows (graphic created by Philipp after the meeting; you might have to zoom in a little bit):
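A minimal sketch of the slave-side flow described above, in Python. The master URL, the HTTP result API, the `requests` dependency, and the `make`-based invocation are illustrative assumptions, not decisions taken in the meeting:

```python
#!/usr/bin/env python3
"""Sketch of a CI slave: build a RIOT application for one attached board
and report the outcome to the master. All names are illustrative."""

import subprocess

import requests  # assumed HTTP client; any other would do

MASTER_URL = "http://ci-master.example.org/api/results"  # hypothetical endpoint


def run_test(application, board):
    """Build (and later: flash and run) a RIOT application for a board."""
    result = subprocess.run(
        ["make", "-C", application, "BOARD=" + board, "clean", "all"],
        capture_output=True, text=True,
    )
    return {
        "board": board,
        "application": application,
        "passed": result.returncode == 0,
        "log": result.stdout + result.stderr,
    }


def report(result):
    """Send a single test result back to the master."""
    requests.post(MASTER_URL, json=result, timeout=30)


if __name__ == "__main__":
    report(run_test("examples/hello-world", "msba2"))
```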
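A second sketch, showing how the master could recognize the proposed command in a GitHub PR comment before dispatching jobs to the slaves. The exact syntax was only given as an example in the meeting, so the regular expression here is an assumption:

```python
import re

# Matches comments such as "CI-SYSTEM: test on msba2, telosb, avsextrem."
COMMAND_RE = re.compile(
    r"^CI-SYSTEM:\s*test on\s+(?P<boards>[\w\-, ]+)\.?\s*$", re.MULTILINE)


def boards_requested(comment_body):
    """Return the boards a PR comment asks us to test on (empty if none)."""
    match = COMMAND_RE.search(comment_body)
    if not match:
        return []
    return [b.strip() for b in match.group("boards").split(",") if b.strip()]


assert boards_requested("CI-SYSTEM: test on msba2, telosb, avsextrem.") == [
    "msba2", "telosb", "avsextrem"]
```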

## CI systems in general

  • We agreed that our test system should follow a layered approach: the CI system should just call a script that can also be executed by a developer. This way the actual test scripts operate independently of the CI system, which has the additional benefit that we can swap out the CI system without rewriting large chunks of our testing scripts (a sketch of such a script follows this list).
  • Philipp proposed that we use Buildbot, as it is more customizable than Jenkins (and, being written in Python, it can easily be interfaced from Python). Kaspar mentioned that he has already created a Buildbot configuration for RIOT.
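A sketch of what such a CI-agnostic layer could look like: a plain Python script that a developer can run locally and that Buildbot (or any other CI system) simply invokes. The board and application lists and the compile-only check are placeholders:

```python
#!/usr/bin/env python3
"""Build a set of RIOT applications for a set of boards; exit non-zero on
failure so that developers and the CI system can call it the same way."""

import subprocess
import sys

APPLICATIONS = ["examples/hello-world", "examples/default"]  # placeholders
BOARDS = ["native", "msba2", "telosb"]                       # placeholders


def build(application, board):
    """Return True if the application compiles for the given board."""
    cmd = ["make", "-C", application, "BOARD=" + board, "clean", "all"]
    return subprocess.call(cmd) == 0


def main():
    failures = [(a, b) for a in APPLICATIONS for b in BOARDS if not build(a, b)]
    for application, board in failures:
        print("FAILED: %s on %s" % (application, board))
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```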

## Code analyzers

  • Kaspar proposed that we integrate Codespeed into our test system.
  • Philipp talked about the static code analyzer scan-build, which comes with Clang (a sketch of wrapping it around a build follows this list).
  • Hauke and Ludwig discussed whether we should use non-free code analyzers for RIOT. The consensus was that we should try to use only open-source software.
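A sketch of wrapping scan-build around a build in such a script. `--status-bugs` makes scan-build exit non-zero when it reports potential bugs; whether the compiler interception works as-is depends on how the build system selects its compiler, so treat this as an illustration only:

```python
import subprocess
import sys

# scan-build intercepts the compiler invocations issued by make and runs the
# Clang static analyzer on each translation unit; HTML reports go to -o.
exit_code = subprocess.call([
    "scan-build", "--status-bugs", "-o", "scan-results",
    "make", "-C", "examples/hello-world", "BOARD=native", "clean", "all",
])
sys.exit(exit_code)
```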

## Simulations and virtualization as part of the testing system

  • We agreed that simulation software (e.g., Cooja) should be part of our testing setup.
  • We also agreed to create a Dockerfile containing a compiler toolchain and supporting utilities that are known to work well with all (or most?) boards RIOT supports. The resulting image can / should be used by the testing system; additionally, it gives developers access to a toolchain that works with RIOT out of the box (a sketch of how the image could be used follows this list).
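A sketch of how a developer or the test system could use such an image once it exists; the image name riot/toolchain and the board are placeholders:

```python
import os
import subprocess

RIOT_DIR = os.path.abspath(".")  # assumes we are inside a RIOT checkout
IMAGE = "riot/toolchain"         # placeholder image name

# Mount the checkout into the container and run the usual make-based build
# with the toolchain shipped inside the image.
subprocess.check_call([
    "docker", "run", "--rm",
    "-v", RIOT_DIR + ":/riot",
    "-w", "/riot/examples/hello-world",
    IMAGE,
    "make", "BOARD=samr21-xpro", "clean", "all",
])
```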

## Documentation and use cases

  • We agreed that we want easy-to-read use cases / documentation in a style similar to TinyOS's TEPs (example: TEP-102).
  • Those documents describe a module's API, its use cases, and its design considerations/limitations; usage examples are also provided. Crucially, they also describe the rationale behind the module.
  • Hauke proposed that we integrate this into the Doxygen documentation.
  • We want at least all core modules to be documented in this way.