
Commit

Finalize release 1.0.0
jwbuurlage committed Jan 18, 2017
2 parents 9be368e + 5b40a47 commit 0d4a853
Showing 28 changed files with 1,680 additions and 705 deletions.
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Changelog

## 1.0.0 - 2017-01-18

### Added
- The BSP variable list is now distributed over all cores instead of stored in external memory
- Implement `bsp_pop_reg`
- New streaming API

### Fixed
- `bsp_begin` no longer uses the divide and modulus operators, which take up a large amount of memory
- `bsp_begin` no longer initializes coredata to zero since this is already done in the loader
- `bsp_end` no longer executes TRAP so that `main` can finish properly

### Removed


## 1.0.0-beta.2 - 2016-04-21

### Added
5 changes: 4 additions & 1 deletion Makefile
@@ -21,6 +21,7 @@ E_SRCS = \
e_bsp_mp.c \
e_bsp_memory.c\
e_bsp_buffer.c \
e_bsp_buffer_deprecated.c \
e_bsp_dma.c

E_ASM_SRCS = \
@@ -40,8 +41,10 @@ HOST_SRCS = \
host_bsp.c \
host_bsp_memory.c \
host_bsp_buffer.c \
host_bsp_buffer_deprecated.c \
host_bsp_mp.c \
host_bsp_utility.c
host_bsp_utility.c \
host_bsp_debug.c

#First include directory is only for cross-compiling
INCLUDES = -I/usr/include/esdk \
7 changes: 2 additions & 5 deletions README.md
@@ -21,7 +21,7 @@ In particular this library has been implemented and tested on the [Parallella](

int main(int argc, char **argv)
{
bsp_init("ecore_program.srec", argc, argv);
bsp_init("ecore_program.elf", argc, argv);
bsp_begin(16);
ebsp_spmd();
bsp_end();
@@ -92,7 +92,7 @@ HOST_LIB_NAMES = -lhost-bsp -le-hal -le-loader
E_LIB_NAMES = -le-bsp -le-lib
all: bin bin/host_program bin/ecore_program.srec
all: bin bin/host_program bin/ecore_program.elf
bin:
@mkdir -p bin
@@ -105,9 +105,6 @@ bin/ecore_program.elf: src/ecore_code.c
@echo "CC $<"
@e-gcc $(CFLAGS) -T ${ELDF} $(INCLUDES) -o $@ $< $(E_LIBS) $(E_LIB_NAMES)
bin/%.srec: bin/%.elf
@e-objcopy --srec-forceS3 --output-target srec $< $@
clean:
rm -r bin
```
74 changes: 16 additions & 58 deletions docs/api_reference.rst
@@ -80,16 +80,10 @@ ebsp_hpmove
.. doxygenfunction:: ebsp_hpmove
:project: ebsp_host

ebsp_create_down_stream
^^^^^^^^^^^^^^^^^^^^^^^
bsp_stream_create
^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_create_down_stream
:project: ebsp_host

ebsp_create_up_stream
^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_create_up_stream
.. doxygenfunction:: bsp_stream_create
:project: ebsp_host

ebsp_write
@@ -251,68 +245,32 @@ bsp_hpmove
.. doxygenfunction:: bsp_hpmove
:project: ebsp_e

ebsp_send_up
^^^^^^^^^^^^
bsp_stream_open
^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_send_up
.. doxygenfunction:: bsp_stream_open
:project: ebsp_e

ebsp_move_chunk_down
^^^^^^^^^^^^^^^^^^^^
bsp_stream_close
^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_move_chunk_down
.. doxygenfunction:: bsp_stream_close
:project: ebsp_e

ebsp_move_chunk_up
bsp_stream_move_up
^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_move_chunk_up
.. doxygenfunction:: bsp_stream_move_up
:project: ebsp_e

ebsp_move_down_cursor
^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_move_down_cursor
:project: ebsp_e

ebsp_reset_down_cursor
^^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_reset_down_cursor
:project: ebsp_e

ebsp_open_up_stream
^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_open_up_stream
:project: ebsp_e

ebsp_open_down_stream
^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_open_down_stream
:project: ebsp_e

ebsp_close_up_stream
bsp_stream_move_down
^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_close_up_stream
:project: ebsp_e

ebsp_close_down_stream
^^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_close_down_stream
:project: ebsp_e

ebsp_set_up_chunk_size
^^^^^^^^^^^^^^^^^^^^^^

.. doxygenfunction:: ebsp_set_up_chunk_size
.. doxygenfunction:: bsp_stream_move_down
:project: ebsp_e

bsp_abort
^^^^^^^^^
bsp_stream_seek
^^^^^^^^^^^^^^^

.. doxygenfunction:: bsp_abort
.. doxygenfunction:: bsp_stream_seek
:project: ebsp_e
4 changes: 2 additions & 2 deletions docs/conf.py
@@ -70,7 +70,7 @@

# General information about the project.
project = 'Epiphany BSP'
copyright = '2015, Coduin'
copyright = '2015-2017, Coduin'
author = 'Coduin'

# The version info for the project you're documenting, acts as replacement for
@@ -80,7 +80,7 @@
# The short X.Y version.
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '1.0-beta'
release = '1.0'

# google analytics ID
googleanalytics_id = 'UA-59249373-1'
120 changes: 50 additions & 70 deletions docs/streaming.rst
@@ -9,127 +9,107 @@ Streaming

When dealing with problems that involve a lot of data, such as images or large matrices, it is often the case that the data for the problem does not fit in the combined local memory of the Epiphany processor. In order to work with the data we must then use the larger (but much slower) external memory, which slows programs down tremendously.

For these situations we provide a *streaming* mechanism. When writing your program to use streams, it will work on smaller chunks of the problem at any given time -- such that the data currently being treated is always local to the core. The EBSP library prepares the next chunk to work on while the previous chunk is being processed, so that there is minimal downtime from the Epiphany cores waiting for the slow external memory.
For these situations we provide a *streaming* mechanism. When writing your program to use streams, it will work on smaller tokens of the problem at any given time -- such that the data currently being treated is always local to the core. The EBSP library prepares the next token to work on while the previous token is being processed, so that there is minimal downtime from the Epiphany cores waiting for the slow external memory.

Making and using down streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There are two types of streams, *up* and *down* streams. A *down* stream contains data to be processed by an Epiphany core, while an *up* stream contains results from computations performed by the Epiphany core. Every stream (both up and down) has a *target processor*, *total size* and a *chunk size*. The target processor is simply the processor id of the core that should receive the content of the stream. The total size is the total number of bytes of the entire set of data. This set of data then gets partitioned into chunks consisting of the number of bytes set by the chunk size. This size need not be constant (i.e. it may vary over a single stream), but for our discussion here we will assume that it is constant.
A stream contains data to be processed by an Epiphany core, and can also be used to obtain results from computations performed by that core. Every stream has a *total size* and a *token size*. The total size is the total number of bytes of the entire set of data. This set of data is then partitioned into tokens, each consisting of the number of bytes given by the token size. This size need not be constant (i.e. it may vary over a single stream), but for our discussion here we will assume that it is constant.

A stream is created before the call to ``ebsp_spmd`` on the host processor. The host prepares the data to be processed by the Epiphany cores, and the EBSP library then performs the work needed for each core to receive its chunk. Note that this data is copied efficiently to the external memory upon creation of the stream, so the user data should be stored in ordinary RAM, e.g. allocated by a call to ``malloc``. A stream is created as follows::
A stream is created before the call to ``ebsp_spmd`` on the host processor. The host prepares the data to be processed by the Epiphany cores, and the EBSP library then performs the work needed for each core to receive its token. Note that this data is copied efficiently to the external memory upon creation of the stream, so the user data should be stored in ordinary RAM, e.g. allocated by a call to ``malloc``. A stream is created as follows::

// on the host
// (on the host)
int count = 256;
int count_in_chunk = 32;
int count_in_token = 32;
float* data = malloc(count * sizeof(float));
// ... fill data
for (int s = 0; s < bsp_nprocs(); s++) {
ebsp_create_down_stream(&data, s, count * sizeof(float),
count_in_chunk * sizeof(float));
}

This will create ``bsp_nprocs()`` identical streams containing user data, one for each core. These streams are chopped up into ``256/32 = 8`` chunks. If you want to use these streams in the kernel you need to *open* them and *move chunks* from a stream to the local memory. Every stream you create on the host is identified by the order in which it was created. For example, the stream we created above will obtain the id ``0`` on every core. A second stream (regardless of whether it is up or down) will be identified with ``1``, etc. *These identifiers are shared between up and down streams, but not between cores*. Opening a stream is done by using this identifier::
bsp_stream_create(count * sizeof(float), count_in_token * sizeof(float), data);

// in the kernel
float* address = NULL;
ebsp_open_down_stream(&(void*)address, // a pointer to the address store
0); // the stream identifier
This will create a stream containing the user data. This stream is chopped up into ``256/32 = 8`` tokens. If you want to use this stream in the kernel of a core you need to *open* it and *move tokens* from the stream to the local memory. Every stream you create on the host is identified by the order in which it was created, starting from index ``0``. For example, the stream we created above will obtain the id ``0``. A second stream (regardless of whether it is used for moving data up or down) will be identified with ``1``, etc. *These identifiers are shared between cores*. Opening a stream is done by using this identifier; for example, to open the stream with identifier ``3``::

After this call, address will contain the location in the local memory of the first chunk, but the data is not necessarily there yet (it might still be copying). To ensure that the data has been received we *move* a chunk::
bsp_stream mystream;
if (bsp_stream_open(&mystream, 3)) {
// ...
}

int double_buffer = 1;
ebsp_move_chunk_down(&(void*)address, 0, double_buffer);
After this call, the stream will start copying data to the core, but the data is not necessarily there yet (it might still be copying). A stream can only be opened by *a single core at a time*. To access this data we *move* a token::

The first two arguments are identical to those of ``ebsp_open_down_stream``. The ``double_buffer`` argument gives you the option to start writing the next chunk to local memory (using the DMA engine) while you process the current chunk that just moved down. This can be done simultaneously with your computations, but will take up twice as much memory. It depends on the specific situation whether double-buffered mode should be turned on or off. Subsequent chunks are obtained using repeated calls to ``ebsp_move_chunk_down``.
// Get some data
void* buffer = NULL;
bsp_stream_move_down(&mystream, &buffer, 0);
// The data is now in buffer

If you want to use a chunk multiple times at different stages of your algorithm, you need to be able to instruct EBSP to change which chunk you want to obtain. Internally the EBSP system has a *cursor* for each stream which points to the next chunk that should be obtained. You can modify this cursor using the following two functions::
The first argument is the stream object that was filled by ``bsp_stream_open``. The second argument is a pointer to a pointer that will be set to the location of the data. The final ``double_buffer`` argument gives you the option to start writing the next token to local memory (using the DMA engine) while you process the current token that you just moved down. This can be done simultaneously with your computations, but will take up twice as much memory. It depends on the specific situation whether double-buffered mode should be turned on or off. Subsequent tokens are obtained using repeated calls to ``bsp_stream_move_down``.
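
Putting this together, a minimal kernel loop that processes all ``256/32 = 8`` tokens of a stream might look like this (a sketch; ``process`` is a hypothetical user-supplied function, and we assume the stream has been opened into ``mystream`` as above)::

void* buffer = NULL;
for (int t = 0; t < 8; t++) {
    // The final argument enables double buffering: the DMA engine
    // prefetches the next token while we work on the current one.
    bsp_stream_move_down(&mystream, &buffer, 1);
    process(buffer); // hypothetical user-supplied computation
}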

// reset the cursor of the first stream to its first chunk
ebsp_reset_down_cursor(0);
If you want to use a token multiple times at different stages of your algorithm, you need to be able to instruct EBSP to change which token you want to obtain. Internally the EBSP system has a *cursor* for each stream which points to the next token that should be obtained. You can modify this cursor using ``bsp_stream_seek``::

// move the cursor of the first stream forward by 5 chunks
ebsp_move_down_cursor(0, 5);
// move the cursor of the stream forward by 5 tokens
bsp_stream_seek(&mystream, 5);

// move the cursor of the first stream back by 3 chunks
ebsp_move_down_cursor(0, -3);
// move the cursor of the stream back by 3 tokens
bsp_stream_seek(&mystream, -3);

Note that this gives you random access inside your streams. Therefore our streaming approach should actually be called *pseudo-streaming*, because formally streaming algorithms only process chunks in a stream a constant number of times. However on the Epiphany we can provide random-access in our streams, leading to different semantics such as moving the cursor.
When you exceed the bounds of the stream, the cursor is set to the last or the first token, respectively. Note that this gives you random access inside your streams. Therefore our streaming approach should actually be called *pseudo-streaming*, because formally streaming algorithms only process the tokens in a stream a constant number of times. However, on the Epiphany we can provide random access in our streams, opening the door to different semantics such as moving the cursor.
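
For example, to process the token that was just obtained a second time, the cursor can be rewound by one token before the next move (a sketch, reusing ``mystream`` and ``buffer`` from above, and assuming the cursor advances by one token on every call to ``bsp_stream_move_down``)::

// Rewind by one token and fetch the same token again
bsp_stream_seek(&mystream, -1);
bsp_stream_move_down(&mystream, &buffer, 0);
// buffer again points to the token we just processed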

Moving results back up
^^^^^^^^^^^^^^^^^^^^^^

Up streams work very similarly to down streams; however, no data has to be supplied by the host, since it is generated by the Epiphany. We construct an up stream in the following way::

// on the host
// .. create up stream (see above)
void* upstream_data = malloc(sizeof(void*) * bsp_nprocs());
for (int s = 0; s < bsp_nprocs(); s++) {
upstream_data[s] = ebsp_create_up_stream(
s, chunks * chunksize, chunks);
}
A stream can also be used to move results back up, for example::

The array ``upstream_data`` holds pointers to the data generated by each processor. In the kernel you can *open* these streams similarly to down streams::
int* buffer1 = ebsp_malloc(100 * sizeof(int));
int* buffer2 = ebsp_malloc(100 * sizeof(int));
int* curbuffer = buffer1;
int* otherbuffer = buffer2;

// in the kernel
float* up_address = NULL;
ebsp_open_up_stream(&(void*)up_address, // a pointer to the address store
1); // the stream identifier
ebsp_stream s;
bsp_stream_open(&s, 0); // open stream 0
while (...) {
// Fill curbuffer
for (int i = 0; i < 100; i++)
curbuffer[i] = 5;

Note that this stream has the identifier ``1`` on each core. The up_address now points to a portion of *local memory* that you can fill with data from the kernel. To move a chunk of results up we use::

int double_buffer = 1;
ebsp_move_chunk_up(&(void*)up_address, 1, double_buffer);
// Send up
bsp_stream_move_up(&s, curbuffer, 100 * sizeof(int), 0);
// Use the other buffer in the next iteration
int* tmp = curbuffer; curbuffer = otherbuffer; otherbuffer = tmp;
}
ebsp_free(buffer1);
ebsp_free(buffer2);

If we use a double buffer, then after this call ``up_address`` will point to a new portion of memory, such that you can continue your operations while the previous chunk is being copied up. Again, this uses more local memory, but does allow you to continue processing the next chunk.
Here, we have two buffers containing data. While filling one of the buffers with data, we move the other buffer up. We do this using the ``bsp_stream_move_up`` function, whose arguments are, respectively: the stream handle, the data to send up, the size of that data, and a flag that indicates whether we want to *wait for completion*. In this case we do not wait, but instead use two buffers so that we can perform computations and send data up to the host simultaneously.
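
If only a single buffer is available, the call can instead be made blocking (here we assume a nonzero final argument requests waiting for completion), so that the buffer can safely be reused right after the call returns; a minimal sketch using the same ``s`` and ``curbuffer``::

// Wait for the copy to complete before refilling curbuffer
bsp_stream_move_up(&s, curbuffer, 100 * sizeof(int), 1);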

Closing streams
^^^^^^^^^^^^^^^

The EBSP stream system allocates buffers for you on the cores. When you are done with a stream you should tell the EBSP system by calling::

ebsp_close_down_stream(0);
ebsp_close_up_stream(0);
bsp_stream_close(&my_stream);

which will free the buffers for other use.
which will free the buffers for other use and allow other cores to use the stream.
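
Because a stream can only be opened by a single core at a time, closing it also lets another core continue with the same stream. For example, two cores could take turns using stream ``0`` (a minimal sketch; ``mystream`` is declared as in the earlier kernel example)::

// Core 0 uses the stream first, then core 1 takes over
if (bsp_pid() == 0) {
    bsp_stream_open(&mystream, 0);
    // ... move tokens down and process them ...
    bsp_stream_close(&mystream);
}
bsp_sync(); // ensure core 0 has closed the stream before core 1 opens it
if (bsp_pid() == 1) {
    bsp_stream_open(&mystream, 0);
    // ... process further tokens ...
    bsp_stream_close(&mystream);
}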

Interface
------------------

Host
^^^^

.. doxygenfunction:: ebsp_create_down_stream
:project: ebsp_host

.. doxygenfunction:: ebsp_create_up_stream
.. doxygenfunction:: bsp_stream_create
:project: ebsp_host

Epiphany
^^^^^^^^

.. doxygenfunction:: ebsp_open_down_stream
:project: ebsp_e

.. doxygenfunction:: ebsp_open_up_stream
:project: ebsp_e

.. doxygenfunction:: ebsp_close_down_stream
:project: ebsp_e

.. doxygenfunction:: ebsp_close_up_stream
:project: ebsp_e

.. doxygenfunction:: ebsp_move_chunk_up
.. doxygenfunction:: bsp_stream_open
:project: ebsp_e

.. doxygenfunction:: ebsp_move_chunk_down
.. doxygenfunction:: bsp_stream_close
:project: ebsp_e

.. doxygenfunction:: ebsp_move_down_cursor
.. doxygenfunction:: bsp_stream_move_up
:project: ebsp_e

.. doxygenfunction:: ebsp_reset_down_cursor
.. doxygenfunction:: bsp_stream_move_down
:project: ebsp_e

.. doxygenfunction:: ebsp_set_up_chunk_size
.. doxygenfunction:: bsp_stream_seek
:project: ebsp_e
