Commit 472866f: update version under TensorLayerX (#242)

1 parent: bc41916

8 files changed: +819 / -573 lines

README.md (91 additions, 34 deletions)

## Super Resolution Examples

🔥🔥🔥🔥🔥🔥 **Now, we update this script under [TensorLayerX](https://github.com/tensorlayer/TensorLayerX)! For earlier versions, please check the [srgan release](https://github.com/tensorlayer/srgan/releases) and [TensorLayer](https://github.com/tensorlayer/TensorLayer).**

(This commit removes the earlier note that the script ran under [TensorFlow](https://www.tensorflow.org) 2.0 with [TensorLayer 2.0+](https://github.com/tensorlayer/tensorlayer) and pointed to the [release page](https://github.com/tensorlayer/srgan/releases) for TensorLayer 1.4, along with the repeated "THIS PROJECT WILL BE CLOSED AND MOVED TO [THIS FOLDER](https://github.com/tensorlayer/tensorlayer/tree/master/examples) IN A MONTH" banners and two commented-out notes.)

### SRGAN Architecture

TensorLayerX Implementation of ["Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network"](https://arxiv.org/abs/1609.04802) (previously implemented in TensorFlow).

<a href="https://tensorlayerx.readthedocs.io/en/latest/">
<div align="center">
	<img src="img/model.jpeg" width="80%" height="10%"/>
</div>
</a>

<a href="https://tensorlayerx.readthedocs.io/en/latest/">
<div align="center">
	<img src="img/SRGAN_Result3.png" width="80%" height="50%"/>
</div>
</a>

### Prepare Data and Pre-trained VGG

- 1. You need to download the pretrained VGG19 model weights from [here](https://github.com/tensorlayer/pretrained-models/tree/master/models).
- 2. You need to have high-resolution images for training (a bicubic x4 downscaling sketch is given after this list).
  - In this experiment, I used images from the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/), so the hyper-parameters in `config.py` (like the number of epochs) are selected based on that dataset; if you switch to a larger dataset, you can reduce the number of epochs.
  - If you don't want to use the DIV2K dataset, you can also use [Yahoo MirFlickr25k](http://press.liacs.nl/mirflickr/mirdownload.html); simply download it using `train_hr_imgs = tl.files.load_flickr25k_dataset(tag=None)` in `main.py`.
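
If you bring your own high-resolution images and want matching bicubic x4 low-resolution inputs (as in the DIV2K track), a minimal sketch with Pillow is shown below. This is an illustration only; the folder names are placeholders, not paths this repo defines.

```python
import os
from glob import glob

from PIL import Image

hr_dir = "your_image_folder/"        # the folder you set as config.TRAIN.img_path
lr_dir = "your_image_folder_LR_X4/"  # hypothetical output folder for LR images
os.makedirs(lr_dir, exist_ok=True)

for path in sorted(glob(os.path.join(hr_dir, "*.png"))):
    hr = Image.open(path).convert("RGB")
    # Bicubic x4 downscaling, matching the DIV2K "bicubic downscaling x4" track.
    lr = hr.resize((hr.width // 4, hr.height // 4), Image.BICUBIC)
    lr.save(os.path.join(lr_dir, os.path.basename(path)))
```
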
### Run

#### Train

- Set your image folder in `config.py`. If you download the [DIV2K - bicubic downscaling x4 competition](http://www.vision.ee.ethz.ch/ntire17/) dataset, you don't need to change it.
- Other links for DIV2K, in case you can't find it: [test\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_test_LR_bicubic_X4.zip), [train_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip), [train\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_LR_bicubic_X4.zip), [valid_HR](https://data.vision.ee.ethz.ch/cvl/DIV2K/validation_release/DIV2K_valid_HR.zip), [valid\_LR\_bicubic_X4](https://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_valid_LR_bicubic_X4.zip).

```python
config.TRAIN.img_path = "your_image_folder/"
```

Your directory structure should look like this:

```
srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
└── DIV2K
    ├── DIV2K_train_HR
    ├── DIV2K_train_LR_bicubic
    ├── DIV2K_valid_HR
    └── DIV2K_valid_LR_bicubic
```
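
Before starting training, a quick check along these lines (an illustrative snippet, not part of the repo) can confirm that the layout above is in place:

```python
import os

# Paths taken from the directory tree above; run this from inside srgan/.
required = [
    "model/vgg19.npy",
    "DIV2K/DIV2K_train_HR",
    "DIV2K/DIV2K_train_LR_bicubic",
    "DIV2K/DIV2K_valid_HR",
    "DIV2K/DIV2K_valid_LR_bicubic",
]

missing = [p for p in required if not os.path.exists(p)]
if missing:
    raise SystemExit(f"Missing expected files/folders: {missing}")
print("Directory layout looks good.")
```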

- Start training.

```bash
python train.py
```

🔥 Just modify a line of code in **train.py** to easily switch to any framework!

```python
import os
os.environ['TL_BACKEND'] = 'tensorflow'
# os.environ['TL_BACKEND'] = 'mindspore'
# os.environ['TL_BACKEND'] = 'paddle'
```
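
Note that the backend needs to be chosen before `tensorlayerx` is imported, since `TL_BACKEND` is typically read at import time; a minimal sketch:

```python
import os

# Choose the backend first ...
os.environ['TL_BACKEND'] = 'paddle'  # or 'tensorflow' / 'mindspore'

# ... then import; importing tensorlayerx before setting TL_BACKEND would
# leave you on the default backend instead.
import tensorlayerx as tlx
```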

🚧 We will support PyTorch as a backend soon.

#### Evaluation

🔥 We have trained SRGAN on the DIV2K dataset.
🔥 Download the model weights as follows.

|              | SRGAN_g | SRGAN_d |
|--------------|---------|---------|
| TensorFlow   | [Baidu](https://pan.baidu.com/s/118uUg3oce_3NZQCIWHVjmA?pwd=p9li), [Googledrive](https://drive.google.com/file/d/1GlU9At-5XEDilgnt326fyClvZB_fsaFZ/view?usp=sharing) | [Baidu](https://pan.baidu.com/s/1DOpGzDJY5PyusKzaKqbLOg?pwd=g2iy), [Googledrive](https://drive.google.com/file/d/1RpOtVcVK-yxnVhNH4KSjnXHDvuU_pq3j/view?usp=sharing) |
| PaddlePaddle | [Baidu](https://pan.baidu.com/s/1ngBpleV5vQZQqNE_8djDIg?pwd=s8wc), [Googledrive](https://drive.google.com/file/d/1GRNt_ZsgorB19qvwN5gE6W9a_bIPLkg1/view?usp=sharing) | [Baidu](https://pan.baidu.com/s/1nSefLNRanFImf1DskSVpCg?pwd=befc), [Googledrive](https://drive.google.com/file/d/1Jf6W1ZPdgtmUSfrQ5mMZDB_hOCVU-zFo/view?usp=sharing) |
| MindSpore    | 🚧 Coming soon! | 🚧 Coming soon! |
| PyTorch      | 🚧 Coming soon! | 🚧 Coming soon! |

Download the weights files and put them under the folder `srgan/models/`.

Your directory structure should look like this:

```
srgan/
├── config.py
├── srgan.py
├── train.py
├── vgg.py
├── model
│   └── vgg19.npy
├── DIV2K
│   ├── DIV2K_train_HR
│   ├── DIV2K_train_LR_bicubic
│   ├── DIV2K_valid_HR
│   └── DIV2K_valid_LR_bicubic
└── models
    ├── g.npz  # You should rename the weights file.
    └── d.npz  # If you set os.environ['TL_BACKEND'] = 'tensorflow', rename srgan-g-tensorflow.npz to g.npz.
```
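
As an illustration (not part of the repo), a small helper to copy the downloaded TensorFlow weights to the names expected at evaluation time; the downloaded file names below are assumptions based on the note above, so adjust them to match what you actually downloaded:

```python
import os
import shutil

models_dir = "models"  # i.e. srgan/models/

# Downloaded name -> name expected by the evaluation code.
# "srgan-g-tensorflow.npz" comes from the note above; the discriminator
# file name is assumed to follow the same pattern.
renames = {
    "srgan-g-tensorflow.npz": "g.npz",
    "srgan-d-tensorflow.npz": "d.npz",
}

for src, dst in renames.items():
    src_path = os.path.join(models_dir, src)
    if os.path.exists(src_path):
        shutil.copyfile(src_path, os.path.join(models_dir, dst))
        print(f"copied {src} -> {dst}")
```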

- Start evaluation (the flag was previously `--mode=evaluate`).

```bash
python train.py --mode=eval
```

Results will be saved under the folder `srgan/samples/`.

### Results

<a href="http://tensorlayer.readthedocs.io">
<div align="center">
	<img src="img/SRGAN_Result2.png" width="80%" height="50%"/>
</div>
</a>

### Reference
* [1] [Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network](https://arxiv.org/abs/1609.04802)
* [2] [Is the deconvolution layer the same as a convolutional layer ?](https://arxiv.org/abs/1609.07009)

(The Author section crediting [zsdonghao](https://github.com/zsdonghao) was removed in this commit.)

### Citation
If you find this project useful, we would be grateful if you cite the TensorLayer paper:
```
...
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}

@inproceedings{tensorlayer2021,
title={TensorLayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
author={Lai, Cheng and Han, Jiarong and Dong, Hao},
booktitle={2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
pages={1--3},
year={2021},
organization={IEEE}
}
```

### Other Projects

config.py (1 addition, 3 deletions)

The default batch size changes from 8 to 16, and the old `## Adam` comment above it is removed:

config = edict()
config.TRAIN = edict()
config.TRAIN.batch_size = 16  # [16] use 8 if your GPU memory is small
config.TRAIN.lr_init = 1e-4
config.TRAIN.beta1 = 0.9
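
For context, `config` is built with `edict` (EasyDict), so downstream code can read these values by attribute access; a hypothetical consumer, shown for illustration only:

```python
# Hypothetical consumer of config.py (illustration only).
from config import config

batch_size = config.TRAIN.batch_size  # 16 by default; drop to 8 on small GPUs
lr_init = config.TRAIN.lr_init        # 1e-4, the initial learning rate
beta1 = config.TRAIN.beta1            # 0.9, typically used as the Adam beta_1

print(f"batch_size={batch_size}, lr_init={lr_init}, beta1={beta1}")
```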

img/SRGAN Result.pptx: 15.4 MB (binary file not shown)
