
Commit 18484a0

Update README.md
1 parent ed72839 commit 18484a0

File tree

1 file changed (+20, -17 lines)

README.md

Lines changed: 20 additions & 17 deletions
````diff
@@ -55,28 +55,13 @@ pip install -r requirements.txt
 ## 📚 Dataset Preparation and Mask Generations
 Refer to the [preprocess/README.md](./preprocess/README.md) file.
 
-## 🚀 MaskGAN Training and Testing
-- A sample training script is provided in train.sh.
-- Modify image augmentations as needed: `--load_size` (resize one dimension to a fixed size), `--pad_size` (pad both dimensions to an equal size), `--crop_size` (crop both dimensions to an equal size).
-- Train a model:
-  - `lambda_mask` and `lambda_shape` set the weights of our proposed mask loss and shape-consistency loss.
-  - `opt_level` sets the Apex mixed-precision optimization level. The default, `O0`, is full FP32 training; if GPU memory is low, use `O1` or `O2` for mixed-precision training.
-- Training command:
-```
-python train.py --dataroot dataroot --name exp_name --gpu_ids 0 --display_id 0 --model mask_gan --netG att
---dataset_mode unaligned --pool_size 50 --no_dropout
---norm instance --lambda_A 10 --lambda_B 10 --lambda_identity 0.5 --lambda_mask 1.0 --lambda_shape 0.5 --load_size 150 --pad_size 225 --crop_size 224 --preprocess resize_pad_crop --no_flip
---batch_size 4 --niter 40 --niter_decay 40 --display_freq 1000 --print_freq 1000 --n_attentions 5
-```
-- For your own experiments, you may want to change `--netG` and `--norm`. Our mask generators for `--netG` are `att` and `unet_att`.
-- To continue model training, append `--continue_train --epoch_count xxx` to the command line.
-- Test the model:
+## 🚀 Model Testing
 ```
 python test.py --dataroot dataroot --name exp_name --gpu_ids 0 --model mask_gan --netG att
 --dataset_mode unaligned --no_dropout --load_size 150 --pad_size 225 --crop_size 224 --preprocess resize_pad_crop --no_flip
 --batch_size 4
 ```
-- The results are saved at `./results/exp_name`; use `--results_dir {directory_path_to_save_result}` to change this. Four folders, `fake_A`, `fake_B`, `real_A`, and `real_B`, are created in the results directory.
+The results are saved at `./results/exp_name`; use `--results_dir {directory_path_to_save_result}` to change this. Four folders, `fake_A`, `fake_B`, `real_A`, and `real_B`, are created in the results directory.
 
 ## 💾 Use of pretrained weights
 
````
````diff
@@ -97,6 +82,24 @@ python evaluation.py --results_folder exp_name
 
 Results are shown for MRI-to-CT and CT-to-MRI synthesis.
 
+## 🚀 MaskGAN Training
+- A sample training script is provided in train.sh.
+- Modify image augmentations as needed: `--load_size` (resize one dimension to a fixed size), `--pad_size` (pad both dimensions to an equal size), `--crop_size` (crop both dimensions to an equal size).
+- Train a model:
+  - `lambda_mask` and `lambda_shape` set the weights of our proposed mask loss and shape-consistency loss.
+  - `opt_level` sets the Apex mixed-precision optimization level. The default, `O0`, is full FP32 training; if GPU memory is low, use `O1` or `O2` for mixed-precision training.
+- Training command:
+```
+python train.py --dataroot dataroot --name exp_name --gpu_ids 0 --display_id 0 --model mask_gan --netG att
+--dataset_mode unaligned --pool_size 50 --no_dropout
+--norm instance --lambda_A 10 --lambda_B 10 --lambda_identity 0.5 --lambda_mask 1.0 --lambda_shape 0.5 --load_size 150 --pad_size 225 --crop_size 224 --preprocess resize_pad_crop --no_flip
+--batch_size 4 --niter 40 --niter_decay 40 --display_freq 1000 --print_freq 1000 --n_attentions 5
+```
+- For your own experiments, you may want to change `--netG` and `--norm`. Our mask generators for `--netG` are `att` and `unet_att`.
+- To continue model training, append `--continue_train --epoch_count xxx` to the command line.
+
+
+
 
 ## 📜 Citation
 If you use this code for your research, please cite our papers.
````
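
Note: both the training and test commands in this diff use `--preprocess resize_pad_crop` with `--load_size 150 --pad_size 225 --crop_size 224`. The sketch below illustrates what such a resize-pad-crop pipeline plausibly does; it assumes a shorter-side resize, zero padding, and a center crop, and is not the repository's own data-loader code.

```python
from PIL import Image
import torchvision.transforms.functional as TF

def resize_pad_crop(img, load_size=150, pad_size=225, crop_size=224):
    # Resize one dimension to a fixed size (assumed here: the shorter side),
    # keeping the aspect ratio.
    w, h = img.size
    if w <= h:
        img = img.resize((load_size, round(h * load_size / w)), Image.BICUBIC)
    else:
        img = img.resize((round(w * load_size / h), load_size), Image.BICUBIC)

    # Pad both dimensions up to an equal size (pad_size x pad_size).
    w, h = img.size
    pad_w, pad_h = max(pad_size - w, 0), max(pad_size - h, 0)
    img = TF.pad(img, [pad_w // 2, pad_h // 2, pad_w - pad_w // 2, pad_h - pad_h // 2])

    # Crop both dimensions to an equal size (crop_size x crop_size). Training
    # would typically crop at a random offset; a center crop is deterministic.
    return TF.center_crop(img, [crop_size, crop_size])

out = resize_pad_crop(Image.new("L", (256, 288)))
print(out.size)  # (224, 224)
```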
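
Note: in the training command, `--lambda_A`, `--lambda_B`, `--lambda_identity`, `--lambda_mask`, and `--lambda_shape` are scalar weights on individual generator loss terms, and `opt_level` is handed to NVIDIA Apex. A minimal sketch of how those pieces fit together, using stand-in L1 terms (the actual mask and shape-consistency losses are defined in the paper and code):

```python
import torch
import torch.nn as nn
from apex import amp  # NVIDIA Apex: "O0" = full FP32, "O1"/"O2" = mixed precision

netG = nn.Conv2d(1, 1, 3, padding=1).cuda()        # stand-in generator, not MaskGAN's netG
optimizer = torch.optim.Adam(netG.parameters(), lr=2e-4)
netG, optimizer = amp.initialize(netG, optimizer, opt_level="O1")

l1 = nn.L1Loss()
real = torch.rand(4, 1, 224, 224, device="cuda")   # --batch_size 4, --crop_size 224
fake = netG(real)

# Weighted sum mirroring the flags above; every term is a stand-in L1 value.
loss_G = (10.0 * l1(fake, real)      # --lambda_A (cycle A->B->A)
          + 10.0 * l1(fake, real)    # --lambda_B (cycle B->A->B)
          + 0.5 * l1(fake, real)     # --lambda_identity
          + 1.0 * l1(fake, real)     # --lambda_mask (proposed mask loss, form assumed)
          + 0.5 * l1(fake, real))    # --lambda_shape (shape-consistency loss, form assumed)

with amp.scale_loss(loss_G, optimizer) as scaled_loss:
    scaled_loss.backward()           # Apex applies loss scaling under O1/O2
optimizer.step()
```

With `O0` the same loop runs in full FP32; `O1` casts selected ops to FP16 and `O2` keeps FP16 weights with an FP32 master copy, trading precision for speed and GPU memory.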
