README.md (+4 −3)
@@ -24,11 +24,11 @@ In order to implement multi-label classification, I modify (add) the following f
 8. ./timm/models/multi_label_model.py (add)

 **In order to train your own dataset, you only need to modify files 1, 2, 4, and 8.** <br>
-Simply modify the code between double dashed lines, or search color/gender/article, that’s the code/label that you need to change.
+Simply modify the code between the double dashed lines, or search color/gender/article; that’s the code/label that you need to change.
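For illustration, the block between the double dashed lines looks roughly like the following. This is a hypothetical sketch: the actual lists depend on your dataset, and every name and value below is a placeholder.

```python
# ----------------------------------------------------------------
# Hypothetical label definitions: replace these lists with the
# attributes/classes of your own dataset.
COLOR_CLASSES = ["black", "white", "blue"]      # placeholder values
GENDER_CLASSES = ["men", "women", "unisex"]     # placeholder values
ARTICLE_CLASSES = ["tshirt", "jeans", "shoes"]  # placeholder values

# One output head per attribute, sized by its number of classes.
NUM_CLASSES = [len(COLOR_CLASSES), len(GENDER_CLASSES), len(ARTICLE_CLASSES)]
# ----------------------------------------------------------------
```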

 In terms of backbones, I only modified ./timm/models/efficientnet.py, where I added an as_sequential_for_ML method. <br>
 For other models, you need to define the as_sequential_for_ML method yourself within each class; it’s simply a part of the as_sequential method. <br>
-We only need the backbone at this moment, so remove the last layers, for example classifier layer, from as_sequential method (see forward_features method, then you would know which layers you need to remove), then you will get as_sequential_for_ML method. (But note that not all models have as_sequential method.)
+We only need the backbone at this moment, so remove the last layers, such as the classifier layer, from the as_sequential method. See the forward_features method in each model class; it shows which layers you need to remove and how to define the as_sequential_for_ML method.
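As a minimal sketch, assuming the attribute names used by timm's EfficientNet of that era (conv_stem, bn1, act1, blocks, conv_head, bn2, act2; other model classes name their layers differently), as_sequential_for_ML keeps exactly the layers that forward_features runs and drops the pooling/classifier tail:

```python
import torch.nn as nn

def as_sequential_for_ML(self):
    # Backbone only: the layers that forward_features runs, in order.
    # Unlike as_sequential, the global_pool/Flatten/Dropout/classifier
    # tail is omitted, so the output is a feature map, not logits.
    layers = [self.conv_stem, self.bn1, self.act1]
    layers.extend(self.blocks)
    layers.extend([self.conv_head, self.bn2, self.act2])
    return nn.Sequential(*layers)
```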

 Besides the multi-label classification functionality, I also added gradient centralization to the AdamP optimizer. <br>
 [Gradient centralization](https://github.com/Yonghongwei/Gradient-Centralization) is a simple technique that may improve optimizer performance. <br>
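The core of gradient centralization is a one-line transform applied to each parameter's gradient inside the optimizer's step. A sketch of the idea (not the exact code added to AdamP here):

```python
import torch

def centralize_gradient(grad: torch.Tensor) -> torch.Tensor:
    # For multi-dimensional weights (conv/linear), subtract the mean of
    # the gradient taken over all dimensions except the output dimension.
    # 1-D parameters (biases, norm weights) are left unchanged.
    if grad.dim() > 1:
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad
```

Inside an optimizer's step, this transform would be applied to p.grad before the update is computed.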
@@ -52,7 +52,8 @@ And a command example to start to validate: <br>