This repository is an upgraded version of the Anime colorization project I previously built using Keras. For the previous work, visit: https://github.com/dabsdamoon/Anime-Colorization
(1) https://en.wikipedia.org/wiki/CIELAB_color_space
The dataset can be obtained from: https://www.kaggle.com/mylesoneill/tagged-anime-illustrations#danbooru-metadata.zip. Since the full danbooru image dataset is too large, only the moeimouto-faces.zip dataset has been used. Note that this time I selected only images without a background (white background) so that the model can detect facial parts more precisely. As in the previous repo, I converted each RGB image to LAB and used the L channel as input and the AB channels as output.
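The RGB-to-LAB split described above can be sketched in plain numpy. In practice a single library call such as `skimage.color.rgb2lab` does this; the sketch below spells out the standard sRGB (D65) conversion so the L/AB split is explicit. The image here is random data, just a stand-in for a dataset sample.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert sRGB values in [0, 1], shape (..., 3), to CIELAB (D65)."""
    rgb = np.asarray(rgb, dtype=float)
    # 1. undo the sRGB gamma curve
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # 2. linear RGB -> XYZ (standard D65 matrix)
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    # 3. normalize by the D65 white point and apply the LAB nonlinearity
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    d = 6.0 / 29.0
    f = np.where(xyz > d ** 3, np.cbrt(xyz), xyz / (3 * d ** 2) + 4.0 / 29.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    L, a, b = 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
    return np.stack([L, a, b], axis=-1)

img = np.random.default_rng(0).uniform(size=(64, 64, 3))  # stand-in RGB image
lab = rgb_to_lab(img)
L_input   = lab[..., :1]  # model input: L channel
ab_target = lab[..., 1:]  # model target: AB channels
```

A quick sanity check: a white pixel maps to L ≈ 100 with a ≈ b ≈ 0, and a black pixel to (0, 0, 0).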
After reviewing the previous repo, I decided to define my objective more clearly. My objective is
Note that I deliberately chose GAN because I want to color one gray image into many different colorized images. An example similar to my objective can be found in League of Legends, where the game sells chroma packs: an original skin with several different color schemes (https://na.leagueoflegends.com/en/news/champions-skins/skin-release/change-it-chroma-packs). In a regular supervised learning setup, however, each grayscale image needs a deterministic colorization label for training, and the trained model can only reproduce that given colorization. A GAN, on the other hand, is a semi-supervised learning method: the trained generator yields colorizations that fit the distribution of colorized images rather than one specific colorization. Thus, I used GAN for this project and fed different noise vectors at test time to observe how the trained generator colorizes a gray image differently.
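The diversity idea can be illustrated with a toy numpy sketch. This is not the actual model; `toy_generator` is just a hypothetical stand-in linear map over a flattened gray input plus a noise vector. Feeding the same gray input with 25 different noise vectors yields 25 different outputs, which is the property the project is after.

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim = 8
gray = np.full(16, 0.5)                    # stand-in for a flattened L channel
W = rng.normal(size=(16 + latent_dim, 2))  # toy "generator" weights -> one AB pair

def toy_generator(gray, z):
    """A stand-in generator: maps (gray input, noise vector) to an AB color pair."""
    return np.concatenate([gray, z]) @ W

# same gray image, 25 different noise vectors -> 25 different colorizations
outputs = np.stack([toy_generator(gray, rng.normal(size=latent_dim))
                    for _ in range(25)])
```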
(1) https://github.com/eriklindernoren/Keras-GAN/blob/master/dcgan/dcgan.py
(2) https://github.com/kongyanye/cwgan-gp/blob/master/cwgan_gp.py
Since I explained GAN in the previous repo, I'll skip the explanation here. The previous repo did not seem to reveal the true power of GAN, so I tried applying GAN again to this colorization project, hoping for better results this time. The codes I referenced are (1) and (2). Also, many colorization projects with GAN models use either a ResNet or a U-Net architecture for the generator. After some experiments, the U-Net architecture seemed to work better for me, so I decided to use it (since this decision is based on my own heuristics, I would be grateful for any advice either supporting or objecting to it).
DCGAN architecture: https://gluon.mxnet.io/chapter14_generative-adversarial-networks/dcgan.html
U-Net architecture: https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net
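What distinguishes U-Net is its skip connections: encoder feature maps are concatenated into the decoder at matching resolutions. Below is a framework-free numpy sketch of just the shape flow (convolutions are omitted; `down` and `up` are plain pooling and nearest-neighbor upsampling), showing how the channel count grows at each concatenation.

```python
import numpy as np

def down(x):
    """2x2 average pooling: halves the spatial resolution."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbor upsampling: doubles the spatial resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x  = np.random.default_rng(0).uniform(size=(64, 64, 1))  # L-channel input
e1 = down(x)    # encoder: 32x32
e2 = down(e1)   # encoder: 16x16 (bottleneck; convs omitted in this sketch)
d1 = np.concatenate([up(e2), e1], axis=-1)  # decoder 32x32 + skip from e1
d0 = np.concatenate([up(d1), x],  axis=-1)  # decoder 64x64 + skip from input
# a final 1x1 convolution would map d0 to the 2 AB output channels
```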
Below are inputs (grayscale) and outputs (colored) of the trained generator using GAN (epoch = 8,192):
Also, as mentioned above, I tested one grayscale image with 25 different noise vectors. Here, I prepared two grayscale images: one that exists in the training dataset and one that does not (for the latter, I used "Taylor" from BrownDust, a mobile game I'm currently playing):
Well, not as well-colored as "chroma" skins, but at least the generator gave me several different colorization results that look reasonable. For example, the generator detects facial parts (eyes, hair, mouth, etc.) and colorizes them differently. It is also interesting that the colorization of Taylor, an out-of-distribution example, seems better than the in-distribution example. Next, I apply a different GAN model called WGAN-GP.
(1) https://arxiv.org/abs/1701.07875
(2) https://vincentherrmann.github.io/blog/wasserstein/
(3) https://arxiv.org/pdf/1704.00028.pdf
As I mentioned in the previous repo, WGAN (Wasserstein GAN) is one of the newer GAN variants, introduced by Arjovsky and Bottou (2017) (1); it applies the Wasserstein loss instead of the KL and JS divergences used as the distance in the original GAN loss function. To compute the WGAN loss, the discriminator must satisfy a Lipschitz constraint, and (1) enforces this by clipping the discriminator's weights (for more information, visit (2); I personally think it is so far the best explanation I have read of the relationship between WGAN and the Lipschitz constraint).
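Weight clipping itself is a one-liner per weight tensor. A minimal numpy sketch of the step described in (1), applied after each discriminator update (0.01 is the clip value used in the paper; in Keras this is typically done with a per-layer weight constraint instead):

```python
import numpy as np

CLIP = 0.01  # clipping value from the WGAN paper

def clip_weights(weights, clip=CLIP):
    """Force every discriminator weight into [-clip, clip] after each update."""
    return [np.clip(w, -clip, clip) for w in weights]

# toy discriminator weights after a gradient step
weights = [np.array([[0.5, -0.002], [0.03, -0.8]]), np.array([0.009, -0.4])]
clipped = clip_weights(weights)  # large weights saturate at +/-0.01
```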
However, WGAN also has some problems, such as capacity underuse and exploding/vanishing gradients: the discriminator will obviously not be optimal if its weights are clipped to a fixed range. Also, WGAN is quite sensitive to the clipping value, so exploding/vanishing gradients can easily occur. Thus, a new technique, the gradient penalty, was introduced in (3); it directly constrains the norm of the gradient of the discriminator's output with respect to its input (this is a very brief explanation of WGAN-GP, so I recommend reading (3) to fully understand it). An interesting point is that the discriminator in WGAN-GP does not use a BatchNormalization layer, since batch normalization introduces correlation among the inputs of a layer, and such correlation would distort the gradient norm of the discriminator's output with respect to its input.
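The penalty term can be written down concretely. Below is a numpy sketch using a deliberately simple linear critic D(x) = w·x, whose input-gradient is w everywhere, so the penalty λ(‖∇D(x̂)‖₂ − 1)² can be verified by hand; the interpolation between real and fake samples follows (3), and λ = 10 is the paper's default. In a real model the gradient would come from automatic differentiation, not a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 10.0                 # penalty coefficient lambda from the WGAN-GP paper
w = np.array([3.0, 4.0])   # linear critic: D(x) = w @ x, so grad_x D(x) = w

def critic_grad(x_hat):
    """Input-gradient of the linear critic (constant w for every sample)."""
    return np.broadcast_to(w, x_hat.shape)

real = rng.normal(size=(8, 2))
fake = rng.normal(size=(8, 2))
eps = rng.uniform(size=(8, 1))           # one epsilon per sample
x_hat = eps * real + (1 - eps) * fake    # random points between real and fake

grads = critic_grad(x_hat)
grad_norms = np.linalg.norm(grads, axis=1)        # ||w|| = 5 for every sample
penalty = lam * np.mean((grad_norms - 1.0) ** 2)  # 10 * (5 - 1)^2 = 160
```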
Difference between WGAN weight-clipping and gradient penalty (https://arxiv.org/pdf/1704.00028.pdf)
Below are inputs (grayscale) and outputs (colored) of the trained generator using WGAN-GP (epoch = 2,048):
Also, as with GAN, I tested one grayscale image with 25 different noise vectors:
It seems to me that colorization quality improves with WGAN-GP, but I cannot "quantify" how much better the result is than the GAN result. Still, it was worthwhile to run the WGAN-GP code and get a comparably decent colorization result.
So far, I have done colorization of grayscale images into color images. After training the algorithms many times with different parameters, WGAN-GP generally seems to produce better results than GAN. However, WGAN-GP is quite slow, and sometimes GAN also produces seemingly better results! It is also ambiguous to define "better colorization", so I learned that professional knowledge of colorization is needed for this project. Also, some code needs to be improved: for example, to apply RandomWeightedAverage, I used global parameters (batch_size, img_shape_d), which need to be changed whenever the batch size changes :(
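On the RandomWeightedAverage issue above: the global batch_size and img_shape_d can be avoided by reading the shapes off the inputs at call time. A numpy sketch of that idea (in a Keras merge layer the equivalent trick is to build the alpha shape from `K.shape(inputs[0])[0]` inside the merge function rather than from a hard-coded global):

```python
import numpy as np

def random_weighted_average(real, fake, rng=np.random.default_rng()):
    """Per-sample convex combination of a real and a fake batch.

    The batch size and sample shape are inferred from `real` at call time,
    so no global batch_size or img_shape_d parameters are needed.
    """
    # one alpha per sample, broadcast over the remaining (spatial) dimensions
    alpha = rng.uniform(size=(real.shape[0],) + (1,) * (real.ndim - 1))
    return alpha * real + (1 - alpha) * fake

# works unchanged for any batch size or image shape
mixed = random_weighted_average(np.ones((4, 8, 8, 2)), np.zeros((4, 8, 8, 2)))
```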
Special thanks to the AI Research Lab at Neowiz Play Studio (http://neowizplaystudio.com/ko/) for allowing me to use its resources for this project. If you need the white-background dataset, please send an e-mail to the address in the contact information below.
facebook: https://www.facebook.com/dabin.moon.7
email: dabsdamoon@neowiz.com