
This is a CUDA parallel implementation of an optimized Run Length Encoding compression algorithm that uses an elegant pairing function.


adolfos94/Enhanced-Run-Length-Encoding


Enhanced Run Length Encoding

A parallel data compression algorithm, intended for storage or transmission, based on run-length encoding and an elegant pairing function. To achieve higher compression ratios, the method encodes the run-length encoding matrix through a pairing function. Because a pairing function is bijective, the original data can be recovered without any loss of information. The implementation remains fast even on large vectors.
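The pairing step can be illustrated with Szudzik's elegant pairing function, which maps a pair of non-negative integers to a single integer and back. The sketch below is a plain C++ illustration of the idea, not the repository's exact code:

```cpp
#include <cmath>
#include <cstdint>
#include <utility>

// Szudzik's "elegant" pairing function: a bijection from pairs of
// non-negative integers to single non-negative integers.
uint64_t elegant_pair(uint64_t x, uint64_t y) {
    return (x >= y) ? x * x + x + y : y * y + x;
}

// Inverse of elegant_pair: recovers the original (x, y) from z.
std::pair<uint64_t, uint64_t> elegant_unpair(uint64_t z) {
    uint64_t s = static_cast<uint64_t>(std::sqrt(static_cast<double>(z)));
    // Guard against floating-point rounding in sqrt.
    while ((s + 1) * (s + 1) <= z) ++s;
    while (s * s > z) --s;
    uint64_t r = z - s * s;
    return (r < s) ? std::make_pair(r, s) : std::make_pair(s, r - s);
}
```

Because the mapping is bijective, an RLE pair (value, run length) can be stored as a single integer and recovered exactly, which is what makes the scheme lossless.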

CUDA

This implementation uses Thrust, CUDA's C++ template library, together with C++ lambdas. A small struct represents each run-length pair:

    #include <thrust/device_vector.h>
    #include <thrust/host_vector.h>
    #include <thrust/transform.h>

    struct RLE {
        int x; // Value
        int y; // Number of repetitions
    };
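As a rough illustration of the encoding step, the CPU sketch below uses `std::transform` where the repository uses `thrust::transform` on a `device_vector`; the pairing formula (Szudzik's elegant pairing) is assumed from the references, and this is not the repository's exact code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RLE {
    int x; // Value
    int y; // Number of repetitions
};

// CPU analogue of the GPU encoder: each RLE pair is packed into a
// single integer with the elegant pairing function. On the device,
// the same per-element lambda would run through thrust::transform.
std::vector<uint64_t> encode(const std::vector<RLE>& rle) {
    std::vector<uint64_t> out(rle.size());
    std::transform(rle.begin(), rle.end(), out.begin(), [](const RLE& p) {
        uint64_t x = p.x, y = p.y;
        return (x >= y) ? x * x + x + y : y * y + x;
    });
    return out;
}
```

The same element-wise lambda works unchanged on the GPU because the pairing is a pure per-element operation, which is what makes the algorithm embarrassingly parallel.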

Example of use

  1. Compile:

    nvcc main.cu -std=c++11 --expt-extended-lambda

  2. Generate a CPU array of size SIZE:

    #define SIZE 10000

    thrust::host_vector<RLE> rle(SIZE);

  3. Initialize the host_vector with your RLE pairs. For example purposes, the rle vector is filled with random values:

    for (int i = 0; i < SIZE; i++) {
        rle[i].x = rand() % 100;
        rle[i].y = rand() % 100;
    }

  4. Define a device_vector containing the run-length encoded data:

    thrust::device_vector<RLE> d_rle = rle;

  5. Compress on the GPU:

    thrust::device_vector<int> arrayCompressedDevice = gpuEncoding(d_rle);

  6. Copy from GPU to CPU for storage or transmission:

    thrust::host_vector<int> arrayCompressedHost = arrayCompressedDevice;

  7. Decompress on the GPU:

    thrust::device_vector<RLE> res_rle_gpu = gpuDecoding(arrayCompressedDevice);

  8. Copy the GPU vector back to the CPU. Since this is a lossless compression algorithm, the decoded vector must be identical to the original vector:

    thrust::host_vector<RLE> arrayDecompressedHost = res_rle_gpu;
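Putting the steps together, the round trip (encode, decode, compare) can be sketched on the CPU as follows. The `pack`/`unpack` names are illustrative stand-ins for what `gpuEncoding`/`gpuDecoding` do per element; this is a plain C++ sketch, not the repository's CUDA code:

```cpp
#include <cmath>
#include <cstdint>

struct RLE {
    int x; // Value
    int y; // Number of repetitions
};

// Pack one RLE pair with Szudzik's elegant pairing function.
uint64_t pack(const RLE& p) {
    uint64_t x = p.x, y = p.y;
    return (x >= y) ? x * x + x + y : y * y + x;
}

// Recover the original pair from a packed integer.
RLE unpack(uint64_t z) {
    uint64_t s = static_cast<uint64_t>(std::sqrt(static_cast<double>(z)));
    while ((s + 1) * (s + 1) <= z) ++s; // guard against sqrt rounding
    while (s * s > z) --s;
    uint64_t r = z - s * s;
    return (r < s) ? RLE{static_cast<int>(r), static_cast<int>(s)}
                   : RLE{static_cast<int>(s), static_cast<int>(r - s)};
}
```

Decoding every packed integer with `unpack` reproduces the original RLE vector exactly, confirming the lossless property claimed above.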

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Authors and References

  • Article: "An Enhanced Run Length Encoding for Image Compression based on Discrete Wavelet Transform"

  • Proposed Enhanced Run Length Encoding

  • Elegant Pairing Function
