
avimanyu786/Hands-On-GPU-Computing-with-Python

This repository contains the companion code for the book, developed at GizmoQuest Computing Lab to explore the capabilities of GPUs for solving high-performance computational problems:

Hands-On GPU Computing with Python

Explore GPU-enabled programmable environments for machine learning, scientific computing, and other diverse applications with Anaconda, using PyCUDA, PyOpenCL, CuPy, Numba, TensorFlow, Keras, and PyTorch.

Key Features

Understand effective synchronization strategies for faster processing using GPUs

Understand parallel processing with PyCUDA, PyOpenCL, CuPy, Numba, TensorFlow, Keras, and PyTorch (a CuPy sketch follows this list)

Learn to use CUDA libraries such as cuDNN for deep learning on GPUs
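
To give a flavor of this style of parallel processing, here is a minimal, hypothetical CuPy sketch (not taken from the book's code) that runs NumPy-style array math on the GPU; it assumes an NVIDIA GPU with a working CUDA installation and the cupy package installed:

```python
# Minimal CuPy sketch (illustrative only, not from the book's code).
# Assumes an NVIDIA GPU with a working CUDA installation and cupy installed.
import numpy as np
import cupy as cp

x_cpu = np.random.rand(1_000_000).astype(np.float32)

x_gpu = cp.asarray(x_cpu)          # copy the host array into GPU memory
y_gpu = cp.sqrt(x_gpu) * 2.0       # elementwise kernels run on the GPU
cp.cuda.Stream.null.synchronize()  # explicitly wait for the GPU work to finish

y_cpu = cp.asnumpy(y_gpu)          # copy the result back to the host
print(y_cpu[:5])
```

The explicit synchronize call hints at the synchronization strategies mentioned above: GPU kernels launch asynchronously, so the host only waits for their results when told to.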

Book Description

GPUs are proving to be excellent general-purpose parallel computing solutions for high-performance tasks such as deep learning and scientific computing.

This book will be your guide to getting started with GPU computing. It starts by introducing GPU computing and explaining the architecture and programming models for GPUs. You will learn, by example, how to perform GPU programming with Python, and you will look at using integrations such as PyCUDA, PyOpenCL, CuPy, Numba, TensorFlow, and PyTorch with Anaconda for various tasks such as machine learning and data mining. Going further, you will get to grips with GPU workflows, management, and deployment using modern containerization solutions. Toward the end of the book, you will become familiar with the principles of distributed computing for training machine learning models and enhancing efficiency and performance.

By the end of this book, you will be able to set up a GPU ecosystem from scratch to run complex applications and data models that demand great processing capabilities, and to manage memory efficiently so your applications run effectively and quickly.

What you will learn

Utilize Python libraries and frameworks for GPU acceleration (a Numba kernel sketch follows this list)

Set up a GPU-enabled programmable machine learning environment on your system with Anaconda

Deploy your machine learning system on cloud containers with illustrated examples

Explore PyCUDA and PyOpenCL, and compare them with platforms such as CUDA, OpenCL, and ROCm

Port CUDA code to ROCm with HIPify

Perform data mining tasks with machine learning models on GPUs

Extend your knowledge of GPU computing in scientific applications
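
For a taste of writing your own GPU kernels from Python, the following is a minimal, illustrative Numba CUDA sketch (not from the book's code); it assumes an NVIDIA GPU, a working CUDA toolkit, and the numba package installed:

```python
# Minimal Numba CUDA kernel sketch (illustrative only, not from the book's code).
import numpy as np
from numba import cuda

@cuda.jit
def add_arrays(a, b, out):
    i = cuda.grid(1)          # global thread index across the whole grid
    if i < out.size:          # guard against threads beyond the array length
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.ones(n, dtype=np.float32)
b = np.full(n, 2.0, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_arrays[blocks, threads_per_block](a, b, out)
print(out[:5])  # expected: [3. 3. 3. 3. 3.]
```

Numba transfers the NumPy arrays to the device for the launch and copies the results back afterwards, so a kernel like this can be tried out without writing any CUDA C++.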

Who this book is for

Data scientists, machine learning enthusiasts and professionals, computer science students, and application scientists who want to get started with GPU computation and perform complex tasks with low latency. Intermediate knowledge of Python programming is assumed.

Software List

Software required: PyCharm Community Edition, PyCharm Educational Edition, PyCharm for Anaconda Community Edition, PyCharm Professional Edition, PyCharm for Anaconda Professional Edition, PyDev, Jupyter Notebook, JupyterLab, Eric, CUDA, ROCm, Anaconda, CuPy, Numba, Google Colaboratory, TensorFlow, PyTorch, DeepChem

OS required: Linux (preferably Ubuntu)

Table of Contents

Introduction to GPU computing
Designing A GPU Computing Strategy
Setting up a GPU Computing Platform with NVIDIA and AMD
Fundamentals of GPU programming
Setting up your environment for GPU programming
Working with PyCUDA
Working with PyOpenCL
Working with Anaconda, CuPy and Numba
Containerization on GPU enabled platforms
Machine Learning on GPUs: Use cases
GPU Acceleration for Scientific Applications using DeepChem
