
P2LDGAN

  • Official code for the paper "Joint Geometric-Semantic Driven Character Line Drawing Generation" (ICMR 2023). Paper Link

News!

  • This paper has been accepted to ACM ICMR 2023; the camera-ready version will be released soon.
  • The source code will be available before July 2023.

Pre-trained Models and Dataset

  • You can download our pre-trained P2LDGAN models, trained on our constructed line-drawing dataset, via Google Drive.
  • If you would like to use our dataset, please send me an email stating your name, organisation, and intended purpose, and I will reply with a download link.

Sample Results

Qualitative comparison: (a) Input photo/image; (b) Ground truth; (c) Gatys; (d) CycleGAN; (e) DiscoGAN; (f) UNIT; (g) pix2pix; (h) MUNIT; (i) Our baseline; (j) Our P2LDGAN.

Cite this paper ❤️

If you use this work in your research, please cite:

@inproceedings{fang2023p2ldgan,
author = {Fang, Cheng-Yu and Han, Xian-Feng},
title = {Joint Geometric-Semantic Driven Character Line Drawing Generation},
year = {2023},
isbn = {9798400701788},
publisher = {Association for Computing Machinery},
url = {https://doi.org/10.1145/3591106.3592216},
doi = {10.1145/3591106.3592216},
booktitle = {Proceedings of the 2023 ACM International Conference on Multimedia Retrieval},
pages = {226--233},
numpages = {8},
keywords = {Line Drawing, Joint Geometric-Semantic Driven, Generative Adversarial Network, Image Translation},
location = {Thessaloniki, Greece},
series = {ICMR '23}
}
