
Welcome to the Tupai wiki!

This wiki contains tutorials, examples, design specifications and idea drafts for the Tupai kernel, architecture and associated components.

Tutorials

todo

Examples

todo

Design Specifications

todo

Ideas

HAL

  • Minimal Hardware Abstraction Layer allows for easy porting to different architectures
  • Provides core features such as interrupts, paging, low-level instruction support, etc.
  • Makes assumptions about minimal hardware features (MUST be interrupt-driven, MUST have paging support, MUST have a Von Neumann architecture, etc.)
  • Coherent API, part of Tupai's modularisation effort (a rough sketch follows below)
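A non-authoritative sketch of what such an abstraction layer might look like in Rust; all names here are hypothetical and illustrative, not Tupai's actual API:

```rust
// Hypothetical sketch of a minimal HAL interface; names are illustrative only.

/// Possible failure when mapping a page.
#[derive(Debug)]
pub enum MapError {
    AlreadyMapped,
    OutOfFrames,
}

/// Core operations every supported architecture is assumed to provide.
pub trait Hal {
    /// Enable or disable hardware interrupts (the kernel is interrupt-driven).
    fn set_interrupts_enabled(&mut self, enabled: bool);
    /// Map a physical frame to a virtual page (paging support is assumed).
    fn map_page(&mut self, virt: usize, phys: usize) -> Result<(), MapError>;
    /// Halt the CPU until the next interrupt arrives.
    fn wait_for_interrupt(&self);
}
```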

File I/O Architecture

  • Unified API to handle all I/O operations
  • Each process, including kernel drivers, acts as a server following the producer/consumer architecture
  • Three distinct data access methods are visible to the API: stream-driven, packet-driven and block-driven
  • Internally, each data access method sits on top of an implementation-defined block transfer method, as sketched below
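A minimal sketch of how the three access methods might layer over a single block transfer primitive; every name here is hypothetical:

```rust
// Illustrative sketch only: each access method is a thin layer over an
// implementation-defined block-transfer primitive.

/// Implementation-defined: moves a raw block of bytes to the other endpoint.
pub trait BlockTransfer {
    fn send_block(&mut self, block: &[u8]) -> Result<(), ()>;
}

/// The three access methods visible through the unified I/O API.
pub enum Access<T: BlockTransfer> {
    Stream(T), // continuous items of arbitrary length
    Packet(T), // discrete, ordered, variable-length packets
    Block(T),  // shared contiguous memory region
}

impl<T: BlockTransfer> Access<T> {
    /// Every access method ultimately funnels writes through the block layer.
    pub fn write(&mut self, data: &[u8]) -> Result<(), ()> {
        match self {
            Access::Stream(t) | Access::Packet(t) | Access::Block(t) => {
                t.send_block(data)
            }
        }
    }
}
```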

Stream-driven I/O

  • Data is stored using a shared memory queue space, drastically reducing syscall/copying overhead
  • No such thing as 'buffering' in the POSIX sense: a context switch to the consumer automatically buffers data, since it's all shared. Buffering is left to the consumer/producer to handle rather than being an inherent artifact of the API
  • No such thing as 'character streams' - streams can use items of any length under some implementation-defined limit
  • Producer locks the buffer when writing, consumer locks the buffer when reading (see the sketch below)
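A userspace-style sketch of the shared queue idea, using std locking primitives as stand-ins for the kernel's shared memory region; names and the size limit are illustrative only:

```rust
// Sketch only: a locked, shared item queue. In the real kernel this region
// would be shared memory rather than a heap-allocated VecDeque.
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// Items may be of any length up to an implementation-defined limit.
const MAX_ITEM_LEN: usize = 4096; // hypothetical limit

#[derive(Clone)]
struct StreamQueue {
    buf: Arc<Mutex<VecDeque<Vec<u8>>>>,
}

impl StreamQueue {
    fn new() -> Self {
        StreamQueue { buf: Arc::new(Mutex::new(VecDeque::new())) }
    }

    /// Producer locks the buffer while writing.
    fn write(&self, item: &[u8]) -> Result<(), ()> {
        if item.len() > MAX_ITEM_LEN {
            return Err(());
        }
        self.buf.lock().unwrap().push_back(item.to_vec());
        Ok(())
    }

    /// Consumer locks the buffer while reading.
    fn read(&self) -> Option<Vec<u8>> {
        self.buf.lock().unwrap().pop_front()
    }
}

fn main() {
    let q = StreamQueue::new();
    q.write(b"hello").unwrap();
    assert_eq!(q.read().as_deref(), Some(&b"hello"[..]));
}
```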

Packet-driven I/O

  • Data is transferred in a similar manner to stream-driven I/O
  • The implementation makes no synchronisation guarantees, but ordering is preserved
  • Variable-length packets are permitted up to some implementation-defined limit (see the sketch below)
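A small illustrative sketch of the variable-length packet constraint, with a purely hypothetical size limit:

```rust
// Sketch only: a variable-length packet capped at an implementation-defined
// maximum size. Ordering is preserved by the underlying queue, not here.
const MAX_PACKET_LEN: usize = 65536; // hypothetical limit

struct Packet {
    payload: Vec<u8>,
}

impl Packet {
    fn new(payload: Vec<u8>) -> Result<Self, ()> {
        if payload.len() > MAX_PACKET_LEN {
            Err(()) // reject oversized packets
        } else {
            Ok(Packet { payload })
        }
    }
}
```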

Block-driven I/O

  • Usually implemented using a contiguous memory region shared between one or more processes
  • A writing process blocks reads of the region; access safety follows a model similar to Rust's borrowing rules (one writer or multiple readers), as sketched below
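A sketch of the one-writer-or-many-readers rule, using std::sync::RwLock as a stand-in for the kernel's access control over the shared region:

```rust
// Sketch only: exclusive writes, shared reads over a contiguous region.
use std::sync::{Arc, RwLock};

fn main() {
    // Hypothetical shared contiguous region, here just a Vec of bytes.
    let region = Arc::new(RwLock::new(vec![0u8; 4096]));

    {
        // A writing process takes exclusive access, blocking all readers.
        let mut w = region.write().unwrap();
        w[0] = 42;
    }

    {
        // Multiple readers may hold the region at once.
        let r1 = region.read().unwrap();
        let r2 = region.read().unwrap();
        assert_eq!(r1[0], r2[0]);
    }
}
```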

Sync handles

Pseudo file handles should exist that permit stream-driven and packet-driven synchronisation. Existing file handles may be locked to these sync handles. When this is done, the locked handles are read in the order in which they were written to, thereby preserving data order.
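A hypothetical sketch of how a sync handle might record write order so that reads can later be served in the same order; none of these names come from Tupai itself:

```rust
// Sketch only: a sync handle tracking the write order of its locked handles.
use std::collections::VecDeque;

type HandleId = u32;

struct SyncHandle {
    write_order: VecDeque<HandleId>,
}

impl SyncHandle {
    fn new() -> Self {
        SyncHandle { write_order: VecDeque::new() }
    }

    /// Called whenever a locked handle is written to.
    fn note_write(&mut self, handle: HandleId) {
        self.write_order.push_back(handle);
    }

    /// The next handle that must be read to preserve data order.
    fn next_to_read(&mut self) -> Option<HandleId> {
        self.write_order.pop_front()
    }
}
```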

Scheduling

Due to the safe nature of the aforementioned file I/O architecture, the scheduler must be able to reason about access.

To keep things generic, the everything-is-a-handle architecture is used, which vastly simplifies event handling.

Tickless mode should permit multiple thread queues: BUSY, WAIT, INIT and DEAD. 'Busy' threads are executing at full capacity, 'Wait' threads are waiting upon an event to execute, 'Init' threads are queued for creation, 'Dead' threads are queued to have their resources returned to their respective pools.
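A sketch of the four queues, with illustrative names and two example transitions; this is an assumption about shape, not Tupai's actual scheduler:

```rust
// Sketch only: the four thread queues a tickless scheduler might keep.
use std::collections::VecDeque;

type ThreadId = u32;

struct Scheduler {
    busy: VecDeque<ThreadId>, // BUSY: executing at full capacity
    wait: VecDeque<ThreadId>, // WAIT: waiting upon an event
    init: VecDeque<ThreadId>, // INIT: queued for creation
    dead: VecDeque<ThreadId>, // DEAD: awaiting resource reclamation
}

impl Scheduler {
    /// A newly created thread becomes runnable.
    fn spawn_queued(&mut self) {
        if let Some(tid) = self.init.pop_front() {
            self.busy.push_back(tid);
        }
    }

    /// A running thread blocks on an event and moves to the wait queue.
    fn block(&mut self, tid: ThreadId) {
        self.busy.retain(|&t| t != tid);
        self.wait.push_back(tid);
    }
}
```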

When an event occurs, a unique, deterministic identifier should be generated (possibly a combination of the handler and the event type) and hashed. The corresponding entry should be looked up in a hashtable of processes waiting upon that event; if no entry exists, no event handlers are bound to this event. If an entry exists, the scheduler should search for the corresponding entry (or entries) in the waiting thread queue. When an event is handled (i.e. a thread is re-awoken), the corresponding hash entry should be decremented.

When a thread wishes to wait upon an event, the kernel generates the corresponding event identifier (as explained above) and increments the corresponding entry in the event hash table, indicating another thread waiting upon an event with that hash.
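A sketch of the waiter-count table described above, assuming the event identifier is a hash derived from the handler/handle and the event type; all names are hypothetical:

```rust
// Sketch only: counting waiters per hashed event identifier.
use std::collections::HashMap;

type EventId = u64;

struct EventTable {
    waiters: HashMap<EventId, usize>,
}

impl EventTable {
    fn new() -> Self {
        EventTable { waiters: HashMap::new() }
    }

    /// A thread begins waiting upon an event: increment its waiter count.
    fn wait(&mut self, id: EventId) {
        *self.waiters.entry(id).or_insert(0) += 1;
    }

    /// An event fires: returns true if a thread was waiting, decrementing
    /// the count for the thread that will be re-awoken.
    fn notify_one(&mut self, id: EventId) -> bool {
        match self.waiters.get_mut(&id) {
            Some(count) => {
                *count -= 1;
                if *count == 0 {
                    self.waiters.remove(&id);
                }
                true
            }
            // No entry: no event handlers are bound to this event.
            None => false,
        }
    }
}
```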

This event hashing scheme provides a considerable speed-up when reasoning about events in a tickless kernel architecture.

Shared libraries

Once all regions of shared memory are abstracted away by the everything-is-a-file interface, a shared library simply becomes another file that gets inserted into the address space of a thread.
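A conceptual sketch, with hypothetical names only, of how loading a shared library reduces to mapping one more file into an address space:

```rust
// Sketch only: under the everything-is-a-file model, a shared library is
// loaded through the same path as any other file mapping.
type Handle = u32;
type VirtAddr = usize;

struct AddressSpace {
    mappings: Vec<(VirtAddr, Handle)>,
}

impl AddressSpace {
    /// Mapping a shared library is just another file mapping.
    fn map_file(&mut self, at: VirtAddr, file: Handle) {
        self.mappings.push((at, file));
    }
}
```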