
Transfer Learning for small-size images


Userlevel 3
Badge +4
Most CNN architectures like AlexNet, ResNet, and Inception take 224 x 224 images as their input. Are there any popular open-source CNN architectures for doing transfer learning on small images like MNIST? I know about the LeNet architecture. Are there any other architectures similar to it? I need the weights of such an open-source implementation to do transfer learning.

2 replies

Userlevel 3
Badge +3
Why not tell us the problem you are trying to solve rather than focusing on the details?
(https://en.wikipedia.org/wiki/XY_problem)

Is the problem that the images you want to train on are too small for the architectures mentioned? If so, have you thought about using data augmentation techniques to 'blow up' your images to the "right size"?
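
For example, here is a minimal sketch of that idea, assuming a recent TensorFlow/Keras install, with MobileNetV2 standing in for any ImageNet-pretrained backbone: upscale 28x28 MNIST digits to the 224x224 RGB input those architectures expect, then train only a new classification head.

```python
import tensorflow as tf

# MNIST: 28x28 grayscale digits, integer labels 0-9.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32")  # (N, 28, 28, 1), values 0-255

# ImageNet-pretrained backbone, frozen for transfer learning.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Resizing(224, 224)(inputs)   # 'blow up' to the expected size
x = tf.keras.layers.Concatenate()([x, x, x])     # grayscale -> 3 channels
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=1)  # one epoch as a demo
```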
Badge
I spent some time researching and replicating open-source CV projects, including
  • Google Object Detection API
  • Facebook Detectron
  • Dozens of open-source CNN repositories, mostly from the original paper authors and researchers
  • The code base from the Deep Learning Specialization I completed
I found that you can do pretty good transfer learning with as few as 60 images to train a large enough model, getting almost 99% accuracy, provided you are not trying to recognize something as hard as Spider-Man but a simpler object, e.g. a certain type of precious stone.
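
As a rough illustration of that claim (the directory name, classes, and hyper-parameters below are all hypothetical), a dataset that small is typically handled by freezing an ImageNet-pretrained backbone and training only a small head on top:

```python
import tensorflow as tf

# Hypothetical layout: small_dataset/<class_name>/*.jpg, ~60 images in total.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "small_dataset", image_size=(224, 224), batch_size=8)
num_classes = len(train_ds.class_names)

base = tf.keras.applications.ResNet50(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # with so few images, train only the new head

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)  # tiny datasets overfit quickly
outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```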

The Google Object Detection API is the worst on my list, not because of its quality but because of its strategy: every path I tested through its materials pushes you to use GCP, which was a complete waste of my time. It is not truly open source in spirit. CPU-based execution is fine, but I never got GPU speed, even on a platform that was already GPU-functional (CUDA 8.0/9.0 on Ubuntu) and ran all kinds of TensorFlow, PyTorch, and Caffe2 programs perfectly. I am sure I could dig into the configuration or the code itself to turn that switch on, but in my opinion they are deliberately steering people away from local GPUs, because they are competing with Amazon and Microsoft to get data scientists to run this API on their cloud service, which at the moment is behind Amazon's.

Facebook Detectron is a really good and complete open-source project, but it is so extensive that reading all the source code in a short period of time is hard. In particular, the modular architecture is unnecessarily complex and less efficient; for instance, the generalized_rcnn component tries to fold every flavor of region-based CNN into one model. Still, it is a really good one.

