
I trained an EfficientNetB0 model on Colab for my final-year project. After training all the layers, I tested it and the results were excellent. Now I want to integrate the model into a frontend web app backed by Python Flask. I used AI assistance for the integration, but I am facing one error:

Bone model not loaded: Input 0 of layer "stem_conv" is incompatible with the layer: expected axis -1 of input shape to have value 3, but received input with shape (None, 385, 385, 1)

I have been trying AI help for the last 2 days and followed the different approaches it suggested, but couldn't get results.

Solutions/approaches I followed:

  1. Modified the model-loading and image-preprocessing code
  2. Patched the currently trained model: tried to wrap image preprocessing inside the model
  3. Inspected frontend libraries
  4. Several others I can't remember

My main error is:

Input Shape Mismatch (384 vs 385, grayscale vs RGB)

Problem: the model expects (384, 384, 3), but some inputs were (385, 385, 1), i.e. grayscale

Model output on Colab: screenshot of an X-ray side-by-side with its predicted segmentation mask

Model error on frontend: screenshot of an X-ray with an error

Right now I haven't tried retraining the model with modifications, since I don't know whether it would give the same output as this one afterwards. My expected result is to get the same output I received on Colab.
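The fix the error message points at can be sketched without retraining: force every image into the shape the model was trained on before prediction. A minimal sketch using Pillow (the helper name `to_model_input` is hypothetical, not from the asker's backend; the [-1, 1] scaling assumes the same normalization the backend uses):

```python
import numpy as np
from PIL import Image

def to_model_input(img: Image.Image, size: int = 384) -> np.ndarray:
    """Force any PIL image into the (1, size, size, 3) batch the model expects."""
    img = img.convert("RGB")            # grayscale (1 channel) -> RGB (3 channels)
    img = img.resize((size, size))      # e.g. 385x385 -> 384x384
    arr = np.asarray(img, dtype=np.float32)
    arr = (arr / 127.5) - 1.0           # scale pixel values to [-1, 1]
    return np.expand_dims(arr, axis=0)  # add the batch dimension

# A 385x385 grayscale X-ray becomes a (1, 384, 384, 3) batch:
gray = Image.new("L", (385, 385))
batch = to_model_input(gray)
```

With this in place, the `stem_conv` layer sees the (None, 384, 384, 3) input it was built for, regardless of the uploaded image's mode or resolution.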

  • It may be a bit easier to help you if we can see your frontend code at least, and ideally the code feeding inputs on Colab versus in your frontend, to determine how the frontend is handling this differently. From what you are sharing, the model expects a 3-channel RGB input of 384 pixels, but you are feeding it a 1-channel grayscale 385-pixel input. So what you can do is validate the input in your code and use Pillow to convert it to RGB if it is not, while also resizing the image to 384 pixels. Commented Sep 30 at 3:14
  • 2
    If the model expects color images, you should transform the grayscale images to color, as in duplicating channels, simple as that. Commented Sep 30 at 5:55
  • drive.google.com/file/d/1d4gB2ftQkGt5OPfkWjYgWvm78eOdtdq_/… above is the link to the backend code (in txt format) used to integrate the model with the frontend, containing the image preprocessing, model loading, segmentation, and other functions. Commented Sep 30 at 19:06
  • We do not take code as external links; you have already been told what the problem is. Commented Oct 5 at 17:36

1 Answer


I suspect your problem is here:

        # Capture input signature
        try:
            ish = getattr(_model, 'input_shape', None)
            if isinstance(ish, (list, tuple)):
                if isinstance(ish[0], (list, tuple)):
                    h = ish[0][1]
                    w = ish[0][2]
                    c = ish[0][3]
                else:
                    h = ish[1]
                    w = ish[2]
                    c = ish[3]
                if isinstance(h, int) and isinstance(w, int) and h == w:
                    _input_size = h
                if isinstance(c, int):
                    _input_channels = c
        except Exception:
            pass

For whatever reason, when you read the model's 'input_shape' attribute here, it seems to be giving you 385 for the size and 1 for the channel count.

Those values then propagate into your preprocessing:

from typing import Tuple

import cv2
import numpy as np

def preprocess(image_path: str) -> Tuple[np.ndarray, np.ndarray]:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(f"Could not read image at {image_path}")
    size = int(_input_size) if isinstance(_input_size, int) and _input_size > 0 else IMG_SIZE
    orig_resized = cv2.resize(img, (size, size)).astype(np.float32)
    ch = int(_input_channels) if isinstance(_input_channels, int) and _input_channels > 0 else 3
    if ch == 1:
        inp = np.expand_dims(orig_resized, axis=-1)
    else:
        inp = np.stack([orig_resized, orig_resized, orig_resized], axis=-1)
    inp = (inp / 127.5) - 1.0
    inp = np.expand_dims(inp, 0)
    return inp, orig_resized

For a quick fix, you can probably just hardcode the dimensions, since your model is already in a finished state and the dims aren't likely to change:

size = IMG_SIZE
ch = 3
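If you'd rather fail fast than silently feed the wrong shape to the model, you can also add a small guard before calling predict. A sketch under the assumption that the model takes (None, 384, 384, 3) (the function name and `EXPECTED` constant are illustrative, not from the asker's backend):

```python
import numpy as np

EXPECTED = (384, 384, 3)  # (height, width, channels) the model was trained with

def validate_batch(batch: np.ndarray) -> np.ndarray:
    """Raise a clear error if the batch doesn't match the model's input signature."""
    if batch.ndim != 4 or batch.shape[1:] != EXPECTED:
        raise ValueError(
            f"Model expects (None, {EXPECTED[0]}, {EXPECTED[1]}, {EXPECTED[2]}), "
            f"got {batch.shape}"
        )
    return batch
```

That way a (1, 385, 385, 1) batch produces an error message naming your preprocessing, instead of the opaque `stem_conv` incompatibility raised from inside the model.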