kris-rowe/coss-2022-sycl-tutorial
2022 Compute Ontario Summer School SYCL Tutorial


This repository contains notes and source code for the SYCL Tutorial presented virtually on July 8th, 2022 as part of the Programming GPUs workshop during the 2022 Compute Ontario Summer School.

Getting Started

Requirements

  • GNU Make
  • C++17 compiler
  • SYCL 2020 implementation

All required software is already installed on Digital Research Alliance of Canada systems and on Intel DevCloud, as described below.

If a SYCL 2020 implementation is not installed on your current system, one can be built from Intel's LLVM fork on GitHub, which includes instructions for building and setting up the Intel LLVM compiler.
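A minimal SYCL 2020 program can be used to verify that an implementation is working. The following sketch (not one of the tutorial's examples) fills an array on the default device using unified shared memory:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  sycl::queue q; // selects the default device

  std::cout << "Running on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  constexpr int N = 1024;
  int* data = sycl::malloc_shared<int>(N, q);

  // Double each index on the device
  q.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
    data[i] = static_cast<int>(i[0]) * 2;
  }).wait();

  std::cout << "data[10] = " << data[10] << "\n";

  sycl::free(data, q);
  return 0;
}
```

Compiling this file with one of the compilers and flag combinations described below should print the selected device and `data[10] = 20`.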

Digital Research Alliance of Canada Systems

On DRA Canada systems, a SYCL 2020 implementation is available through two globally installed modules.

Using SYCL on GPUs

The dpc++/2022-06 module provides a build of the open-source Intel LLVM compilers with the CUDA plug-in enabled. Invoking the clang++ compiler with the flags -fsycl -fsycl-targets=nvptx64-nvidia-cuda builds SYCL applications that run on NVIDIA GPUs, such as the P100 and V100 GPUs in Graham.

To load this module, call

$ module load cuda/11.4 
$ module load dpc++/2022-06
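With these modules loaded, a single-file SYCL program can be compiled for NVIDIA GPUs directly; the source and executable names below are placeholders:

```shell
$ clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda example.cpp -o example
$ ./example
```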
Using SYCL on CPUs

The Intel oneAPI Toolkit compilers are included in the intel/2022.1.0 module. Invoking the icpx compiler with the flag -fsycl builds SYCL applications that run on Intel CPUs via the Intel OpenCL runtime.

To load this module, call

$ module load intel/2022.1.0
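Compiling for CPUs then looks like the following, where the source and executable names are placeholders:

```shell
$ icpx -fsycl example.cpp -o example
$ ./example
```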

Intel DevCloud

Intel DevCloud provides free access to various Intel CPUs, GPUs, FPGAs, and other accelerators. New users can sign up for access on the Intel DevCloud website. Once signed up, follow the instructions for connecting via ssh.

To clone the tutorial from GitHub and build example codes, it is first necessary to launch a job on one of the compute nodes. For example, an interactive session on a GPU compute node can be started with the command

$ qsub -I -l nodes=1:gpu:ppn=2

Various software packages are provided through environment modules. The latest Intel oneAPI toolkit can be loaded by calling

$ module load /glob/module-files/intel-oneapi/latest

Build

The examples and exercises directories contain makefiles to build their corresponding codes. By default, it is assumed that the LLVM clang++ compiler will be used to build code for NVIDIA GPUs.

Each example is contained in a single .cpp file, for which the makefile will generate an executable with the same name. Examples can be built individually, or all at once by calling

$ make -j all
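If the makefiles expose the compiler through the conventional CXX variable (an assumption; check the makefiles for the actual variable names), the default can be overridden on the command line, for example to build for CPUs with icpx:

```shell
$ make CXX=icpx -j all   # hypothetical override; variable names depend on the makefiles
```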

Run

Example programs do not take command-line arguments and can be run by calling

$ ./example-name
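When several backends are available, the Intel LLVM runtime of this era honors the SYCL_DEVICE_FILTER environment variable for selecting a device at run time (behavior may differ in other SYCL implementations):

```shell
$ SYCL_DEVICE_FILTER=cuda:gpu ./example-name     # run on an NVIDIA GPU
$ SYCL_DEVICE_FILTER=opencl:cpu ./example-name   # run on the host CPU
```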

For any additional instructions on running the exercise codes, see the corresponding README.

Community

Support

Need help? Ask a question in the Q&A discussions category.

Feedback

To provide feedback, participate in the polls or start a conversation in the Ideas discussions category.

Contributing

Bugs & Corrections

Found a bug, spelling mistake, or other error? Open an issue and be sure to tag it with the corresponding category.

Sharing Your Work

Have an interesting solution to one of the exercises or other code related to the tutorial that you would like to share? Create a post in the Show and tell discussions category.

Development

If you are interested in helping to further develop this tutorial, please reach out to Kris Rowe.

Code of Conduct

All discussion and other forms of participation related to this project should be consistent with Argonne's Core Values of respect, integrity, and teamwork.

Acknowledgements

This work was supported by the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.

A shout-out to Thomas Applencourt (@TApplencourt) for providing feedback on content and for catching numerous spelling/coding errors.

License

This project is available under an MIT License.