Releases: dmlc/dgl

v2.2.1

11 May 02:59
2a1ac58

We're thrilled to announce the release of DGL 2.2.1. 🎉🎉🎉

Major Changes

  • The supported PyTorch versions are 2.1.0/1/2, 2.2.0/1/2, and 2.3.0. See the install commands here.
  • MiniBatch in GraphBolt is refactored: seed_nodes and node_pairs are replaced with a unified seeds attribute throughout the pipeline. Refer to the latest examples for more details, and see the sketch after this list. by @yxy235
  • GraphBolt sampling is enabled in DistDGL for node classification. See examples here.
  • [GraphBolt] Optimize hetero sampling on CPU by @RamonZhou in #7360
  • [GraphBolt] torch.compile() support for gb.expand_indptr. by @mfbalin in #7188
  • [GraphBolt] Make unique_and_compact deterministic by @RamonZhou in #7217, #7239
  • [GraphBolt] Hyperlink support in subgraph_sampler. by @yxy235 in #7354
  • [GraphBolt] More features of dgl.dataloading.LaborSampler in gb.LayerNeighborSampler: added layer_dependency and batch_dependency parameters. #7205, #7208, #7212, #7220 by @mfbalin
  • [GraphBolt][CUDA] Faster GPU neighbor sampling and compaction kernels. #7239, #7215 by @mfbalin
  • [GraphBolt][CUDA] Better hetero CPU&GPU performance via fused kernels. #7223, #7312 by @mfbalin
  • [GraphBolt][CUDA] GPU synchronizations eliminated throughout the sampling pipeline. #7240, #7264 by @mfbalin
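
A minimal sketch of the unified seeds attribute (illustrative, not from the release notes; the node IDs and batch size are made up):

import torch
import dgl.graphbolt as gb

# Node IDs to sample from; previously passed as `seed_nodes`.
item_set = gb.ItemSet(torch.arange(100), names="seeds")
item_sampler = gb.ItemSampler(item_set, batch_size=16)
for minibatch in item_sampler:
    # `minibatch.seeds` now carries what `seed_nodes` / `node_pairs` used to.
    print(minibatch.seeds)
    break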

Bug Fixes

  • [DistGB] revert toindex() but refine tests by @Rhett-Ying in #7197
  • [GraphBolt] PyG advanced example torch.compile() bug workaround. by @mfbalin in #7259
  • [CUDA][Bug] CSR transpose bug in CUDA 12 by @mfbalin in #7295
  • [Determinism] Enable environment var to use cusparse spmm deterministic algorithm by @TristonC in #7310

Full Changelog: v2.1.0...v2.2.1

v2.1.0

06 Mar 03:29

We're thrilled to announce the release of DGL 2.1.0. 🎉🎉🎉

Major Changes:

  1. The CUDA backend of GraphBolt is now available. Thanks @mfbalin for the extraordinary effort. See the updated examples.
  2. PyTorch 1.13 is no longer supported. The supported PyTorch versions are 2.0.0/1, 2.1.0/1/2, 2.2.0/1.
  3. CUDA 11.6 is no longer supported. The supported CUDA versions are 11.7, 11.8, 12.1.
  4. Data loading performance improvements via pipeline parallelism in #7039 and #6954; see the new gb.DataLoader parameters and the loader sketch after this list.
  5. Miscellaneous operation/kernel optimizations.
  6. Add support for converting the sampling output of GraphBolt to the PyG data format and training with PyG models seamlessly: examples.
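
A minimal GraphBolt loader sketch for item 4 (illustrative, not from the release notes); graph, features, and train_set are assumed to come from a gb.Dataset, and the fanouts and worker count are arbitrary:

import torch
import dgl.graphbolt as gb

# Assumed inputs: `graph`, `features`, `train_set` from a gb.Dataset.
datapipe = gb.ItemSampler(train_set, batch_size=1024, shuffle=True)
datapipe = datapipe.sample_neighbor(
    graph, fanouts=[torch.LongTensor([10]), torch.LongTensor([10])])
datapipe = datapipe.fetch_feature(features, node_feature_keys=["feat"])
dataloader = gb.DataLoader(datapipe, num_workers=4)  # workers overlap pipeline stages
for minibatch in dataloader:
    pass  # forward/backward pass goes here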

Bug Fixes

  • [GraphBolt] Negative node pairs should be 2D by @peizhou001 in #6951
  • [GraphBolt] Fix fanouts setting in rgcn example by @RamonZhou in #6959
  • [GraphBolt] fix random generator for shuffle among all workers by @Rhett-Ying in #6982
  • [GraphBolt] fix preprocess issue for single ntype/etype graph by @Rhett-Ying in #7011
  • [GraphBolt] Fix gpu NegativeSampler for seeds. by @yxy235 in #7068
  • [GraphBolt][CUDA] Fix link prediction early-stop. by @mfbalin in #7083

New Examples

  • [Feature] ARGO: an easy-to-use runtime to improve GNN training performance on multi-core processors by @jasonlin316 in #7003

Acknowledgement

Thanks for all your contributions.
@drivanov @frozenbugs @LourensT @Skeleton003 @mfbalin @RamonZhou @Rhett-Ying @wkmyws @jasonlin316 @caojy1998 @czkkkkkk @hutiechuan @peizhou001 @rudongyu @xiangyuzhi @yxy235

v2.0.0

12 Jan 03:51
92c8f08

We're thrilled to announce the release of DGL 2.0.0, a major milestone in our mission to empower developers with cutting-edge tools for Graph Neural Networks (GNNs). 🎉🎉🎉

New Package: dgl.graphbolt

In this release, we introduce a brand new package: dgl.graphbolt, a revolutionary data loading framework that supercharges your GNN training and inference by streamlining the data pipeline. Please refer to the documentation page for an overview of GraphBolt and end-to-end notebooks. More end-to-end examples are available in the GitHub codebase.

New Additions

  • A hetero-relational GCN example (#6157)
  • Add Node explanation for Heterogeneous PGExplainer Impl. (#6050)
  • Add peptides structural dataset in LRGB (#6337)
  • Add peptides functional dataset in LRGB (#6363)
  • Add VOCSuperpixels dataset in LRGB (#6389)
  • Add compact operator (#6352)
  • Add COCOsuperpixel dataset (#6407)
  • Add a graphSAGE example (#6481)
  • Add CIFAR10 MNIST dataset in benchmark-gnn (#6543)
  • Add ogc method (#6437)
  • Add a LADIES example (#6560)
  • Adjusted homophily and label informativeness (#6516)

System/Examples/Documentation Enhancements

  • Update README about DGL container access from NGC (#6133)
  • Cpu docker tcmalloc (#5969)
  • Use scipy's eigs instead of numpy in lap_pe (#5855)
  • Add CMake changes from conda-forge build (#6189)
  • Upgrade googletest to v1.14.0 (#6273)
  • Fix typo in link prediction with sampling example (#6268)
  • Add sparse matrix slicing operator implementation (#6208)
  • Use torchrun instead of torch.distributed.launch (#6304)
  • Sparse sample implementation (#6303)
  • Add relabel python API (#6323)
  • Compact C++ API (#6334)
  • Fix compile warning (#6342)
  • Update Labor sampler docs, add NeurIPS acceptance (#6369)
  • Update docstring of LRGB (#6430)
  • Do not fuse neighbor sampler for 1 thread (#6421)
  • Fix graph_transformer example (#6471)
  • Adding --num_workers input parameter to the EEG_GCNN example. (#6467)
  • Update doc network_emb.py (#6559)
  • Protect temporary changes from persisting if an error occurs during the yield block (#6506)
  • Provide options for bidirectional edge (#6566)
  • Improving the MLP example. (#6593)
  • Improving the JKNET example. (#6596)
  • Avoid calling IsPinned in the coo/csr constructor from every sampling process (#6568)
  • Add tutorial documentation for graph transformer. (#6889, #6949)
  • Refactor SpatialEncoder3d. (#5894)

Bug Fixes

  • Fix cusparseCreateCsr format for cuda12 (#6121)
  • Fix a bug in standalone mode (#6179)
  • Fix extract_archive default parameter (#6333)
  • Fix device check (#6409)
  • Return batch related ids in g.idtype (#6578)
  • Fix typo in ShaDowKHopSampler (#6587)
  • Fix issue about integer overflow (#6586)
  • Fix the lazy device copy issue of DGL node/edge features (#6564)
  • Fix num_labels to num_classes in dataset files (#6666)
  • Fix Graphormer as key in state_dict has changed (#6806)
  • Fix distributed partition issue (#6847)

Note

Windows packages are not available yet; they will be ready soon.

Acknowledgement

DGL 2.0.0 has been achieved through the dedicated efforts of the DGL team and the invaluable contributions of our external collaborators.

@9rum @AndreaPrati98 @BarclayII @HernandoR @OlegPlatonov @RamonZhou @Rhett-Ying @SinuoXu @Skeleton003 @TristonC @anko-intel @ayushnoori @caojy1998 @chang-l @czkkkkkk @daniil-sizov @drivanov @frozenbugs @hmacdope @isratnisa @jermainewang @keli-wen @mfbalin @ndbaker1 @paoxiaode @peizhou001 @rudongyu @songqing @willarliss @xiangyuzhi @yaox12 @yxy235 @zheng-da

Your collective efforts have been key to the success of this release. We deeply appreciate every contribution, large and small, as they collectively shape and improve DGL. Thank you all for your dedication and hard work!

v1.1.3

11 Dec 09:43
6e2c0f4

Major changes

  • Added support for PyTorch 2.1.0 and 2.1.1 (except on Windows); the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1.
  • Added support for CUDA 12.1; the supported versions are 11.6, 11.7, 11.8, 12.1.
  • Windows support for PyTorch 2.1.0 and 2.1.1 is blocked by a compilation issue and will be added as soon as the issue is resolved.

v1.1.2

15 Aug 07:31
d40a3c3

Major changes

  • PyTorch 1.12.0 and 1.12.1 are deprecated; the supported versions are 1.13.0, 1.13.1, 2.0.0, 2.0.1.
  • CUDA 10.2 and 11.3 are deprecated; the supported versions are 11.6, 11.7, 11.8.
  • The C++ standard used in the build is upgraded to C++17.
  • Several performance improvements, such as #5885 and #5924.
  • Multiple examples updated for better readability, such as #6035 and #6036.
  • A few bug fixes, such as #6044 and #6001.

v1.1.1

27 Jun 03:26

What's new

  • Add support for PyTorch 2.0.1.
  • Fix several bugs, such as #5872 for DistDGL and #5754 for dgl.khop_adj().
  • Remove several unused third-party libraries, such as xbyak and tvm.
  • A few performance improvements, such as #5508 and #5685.

v1.1.0

05 May 08:50

What's new

  • Sparse API improvement
  • Datasets for evaluating graph transformers and graph learning under heterophily
  • Modules and utilities, including Cugraph convolution modules and SubgraphX
  • Graph transformer deprecation
  • Performance improvement
  • Extended BF16 data type to support 4th Generation Intel® Xeon® Scalable Processors (#5497)

Detailed breakdown

Sparse API improvement (@czkkkkkk)

SparseMatrix class

  • Merge DiagMatrix class into SparseMatrix class, where the diagonal matrix is stored as a sparse matrix and inherits all the operators from sparse matrix. (#5367)
  • Support converting DGLGraph to SparseMatrix: g.adj(etype=None, eweight_name=None) returns the sparse matrix representation of the DGL graph g for the edge type etype and edge weight eweight_name. (#5372)
  • Enable zero-overhead conversion between PyTorch sparse tensors and SparseMatrix via dgl.sparse.to_torch_sparse_coo/csr/csc and dgl.sparse.from_torch_sparse; a conversion sketch follows this list. (#5373)
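
A small sketch of these conversions (the toy graph is illustrative):

import torch
import dgl
import dgl.sparse as dglsp

g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
A = g.adj()  # SparseMatrix adjacency of g

# Zero-overhead round trip with PyTorch sparse tensors.
t = dglsp.to_torch_sparse_coo(A)
A2 = dglsp.from_torch_sparse(t)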

SparseMatrix operators

  • Support element-wise multiplication on two sparse matrices with different sparsities, e.g., A * B; an operator sketch follows this list. (#5368)
  • Support element-wise division on two sparse matrices with the same sparsity, e.g., A / B. (#5369)
  • Support broadcast operators on a sparse matrix and a 1-D tensor via dgl.sparse.broadcast_add/sub/mul/div. (#5370)
  • Support column-wise softmax. (#5371)
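
An illustrative sketch of the elementwise and broadcast operators (toy matrices with made-up values):

import torch
import dgl.sparse as dglsp

indices = torch.tensor([[0, 1], [1, 0]])
A = dglsp.spmatrix(indices, torch.tensor([1.0, 2.0]), shape=(2, 2))
B = dglsp.spmatrix(indices, torch.tensor([3.0, 4.0]), shape=(2, 2))

C = A * B                                  # elementwise multiplication
D = A / B                                  # elementwise division (same sparsity)
E = dglsp.broadcast_add(A, torch.ones(2))  # broadcast with a 1-D tensor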

SparseMatrix examples

  • Example for Heterogeneous Graph Attention Networks (#5568, @mufeili)

Datasets

Modules and utilities

Deprecation (#5100, @rudongyu)

  • laplacian_pe is deprecated and replaced by lap_pe
  • LaplacianPE is deprecated and replaced by LapPE
  • LaplacianPosEnc is deprecated and replaced by LapPosEncoder
  • BiasedMultiheadAttention is deprecated and replaced by BiasedMHA (a migration sketch follows this list)
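
A migration sketch for the renames above (assumes the PyTorch backend):

import dgl
from dgl import LapPE             # formerly LaplacianPE
from dgl.nn import LapPosEncoder  # formerly LaplacianPosEnc
from dgl.nn import BiasedMHA      # formerly BiasedMultiheadAttention

g = dgl.rand_graph(10, 20)
pe = dgl.lap_pe(g, k=3)           # formerly dgl.laplacian_pe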

Performance improvement

Speed up the CPU to_block function in graph sampling. (#5305, @peizhou001)

  • Add a concurrent hash map to speed up the ID mapping process by leveraging multi-thread capability (#5241, #5304).
  • Accelerate the expensive to_block by using the new hash map, improving performance by ~2.5x on average, and more when the batch size is large. A minimal to_block call is sketched below.
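
For context, to_block converts a sampled frontier into a bipartite message-flow block; a minimal illustrative call (the toy frontier and destination set are made up):

import torch
import dgl

frontier = dgl.graph((torch.tensor([1, 2]), torch.tensor([0, 0])))
block = dgl.to_block(frontier, dst_nodes=torch.tensor([0]))
# block.srcdata[dgl.NID] / block.dstdata[dgl.NID] map back to the original IDs.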

Breaking changes

  • Since the new .adj() function of DGLGraph produces a SparseMatrix, the original .adj(self, transpose=False, ctx=F.cpu(), scipy_fmt=None, etype=None) is renamed to .adj_external, which returns the sparse format from external packages such as SciPy and PyTorch. (#5372) A migration sketch follows.
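
A minimal migration sketch (the toy graph is illustrative):

import torch
import dgl

g = dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2])))
A = g.adj()                           # now returns a dgl.sparse.SparseMatrix
sp = g.adj_external(scipy_fmt="csr")  # old behavior: SciPy/PyTorch formats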

v1.0.2

31 Mar 09:24

What's new

  • Added support for CUDA 11.8. Please install with
    pip install dgl -f https://data.dgl.ai/wheels/cu118/repo.html
    conda install -c dglteam/label/cu118 dgl
    
  • Added support for Python 3.11
  • Added support for PyTorch 2.0

v1.0.1

21 Feb 07:15

What's new

  • Enable dgl.sparse on Mac and Windows.
  • Fixed several bugs.

v1.0.0

30 Jan 07:07

The v1.0.0 release is a new milestone for DGL. 🎉🎉🎉

New Package: dgl.sparse

In this release, we introduced a brand new package: dgl.sparse, which allows DGL users to build GNNs in the sparse matrix paradigm. We provided Google Colab tutorials on the dgl.sparse package, from getting started with sparse APIs to building different types of GNN models, including Graph Diffusion, Hypergraph, and Graph Transformer, along with 10+ examples of commonly used models in the GitHub codebase.

NOTE: this feature is currently only available on Linux.
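
A tiny sketch of the paradigm, one GCN-style propagation step A @ X (toy graph and shapes):

import torch
import dgl.sparse as dglsp

row = torch.tensor([0, 1, 2])
col = torch.tensor([1, 2, 0])
A = dglsp.from_coo(row, col, shape=(3, 3))  # values default to 1.0
X = torch.randn(3, 4)
H = A @ X  # sparse-dense matrix multiplication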

New Additions

  • A new example of SEAL+NGNN for OGBL datasets (#4550, #4772)
  • Add DeepWalk module (#4562)
  • A new example of BiPointNet for the ModelNet40 dataset (#4434)
  • Add Transformer-related modules: Metapath2vec (#4660), LaplacianPosEnc (#4750), DegreeEncoder (#4742), ToLevi (#4884), BiasedMultiheadAttention (#4916), PathEncoder (#4956), GraphormerLayer (#4959), SpatialEncoder & SpatialEncoder3d (#4991)
  • Add Graph Positional Encoding Ops: double_radius_node_labeling (#4513), shortest_dist (#4799); a usage sketch follows this list
  • Add a new sampling algorithm: (La)yer-Neigh(bor) sampling (#4668)
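
A small usage sketch of the new positional-encoding ops (toy graph and endpoints):

import torch
import dgl

g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
dist = dgl.shortest_dist(g)                   # all-pairs shortest-path distances
z = dgl.double_radius_node_labeling(g, 0, 1)  # DRNL labels w.r.t. nodes 0 and 1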

System Enhancement

  • Support PyTorch CUDA Stream (#4503)
  • Support canonical edge types in HeteroGraphConv (#4440)
  • Reduce Memory Consumption in Distributed Training Example (#4558)
  • Improve the performance of is_unibipartite (#4556)
  • Add options for padding and eigenvalues in Laplacian positional encoding transform (#4628)
  • Reduce startup overhead for dist training (#4735)
  • Add Heterogeneous Graph support for GNNExplainer (#4401)
  • Enable sampling with edge masks on homogeneous graph (#4748)
  • Enable save and load for Distributed Optimizer (#4752)
  • Add edge-wise message passing operators u_op_v (#4801)
  • Support bfloat16 (bf16) (#4648)
  • Accelerate CSRSliceMatrix<kDGLCUDA, IdType> by leveraging hashmap (#4924)
  • Decouple the size of node/edge data files from the nodes/edges_per_chunk entries in metadata.json for the Distributed Graph Partition Pipeline (#4930)
  • Canonical etypes are always used during partition and loading in distributed DGL (#4777, #4814)
  • Add parquet support for node/edge data in the Distributed Partition Pipeline (#4933)

Deprecation & Cleanup

Dependency Update

Starting from this release, we will drop support for CUDA 10.1 and 11.0. On Windows, we will further drop support for CUDA 10.2.

Linux: CentOS 7+ / Ubuntu 18.04+

PyTorch ver. \ CUDA ver. | 10.2 | 11.3 | 11.6 | 11.7
1.12                     |  ✅  |  ✅  |  ✅  |
1.13                     |      |      |  ✅  |  ✅

Windows: Windows 10+/Windows server 2016+

PyTorch ver. \ CUDA ver. | 11.3 | 11.6 | 11.7
1.12                     |  ✅  |  ✅  |
1.13                     |      |  ✅  |  ✅

Bugfixes

  • Fix a bug related to EdgeDataLoader (#4497)
  • Fix graph structure corruption with transform (#4753)
  • Fix a bug causing UVA to fail on old GPUs (#4781)
  • Fix NN modules crashing with non-FP32 inputs (#4829)

Installation

The installation URL and conda repository have changed for CUDA packages. Please use the following:

# If you installed dgl-cuXX pip wheel or dgl-cudaXX.X conda package, please uninstall them first.
pip install dgl -f https://data.dgl.ai/wheels/repo.html   # for CPU
pip install dgl -f https://data.dgl.ai/wheels/cuXX/repo.html   # for CUDA, XX = 102, 113, 116 or 117
conda install dgl -c dglteam   # for CPU
conda install dgl -c dglteam/label/cuXX   # for CUDA, XX = 102, 113, 116 or 117