I am using sklearn's MLPRegressor. Here's what I am interested in knowing:
- What are the most important hyperparameters to focus on tuning?
- What are the suitable ranges of values for each hyperparameter?
- What is the expected results for each hyperparameter? (e.g. if we have a neural network architecture with more nodes we might expect increase accuracy - I guess this comment is more about NN architecture than hyperparameter tuning, do you tune hidden layer sizes in the same way that you tune other hyperparameters?)
- Which hyperparameters, when tuned, are most likely to increase our chance of overfitting or underfitting?
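For context, here's a minimal sketch of the kind of search I'm currently running (synthetic data; the grid values are just placeholders, not a recommendation):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Toy regression data so the example is self-contained
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Placeholder search space -- these are the knobs I'm asking about
param_grid = {
    "hidden_layer_sizes": [(50,), (100,), (50, 50)],
    "alpha": [1e-4, 1e-3, 1e-2],        # L2 regularization strength
    "learning_rate_init": [1e-3, 1e-2],
}

search = GridSearchCV(
    MLPRegressor(max_iter=500, random_state=0),
    param_grid,
    cv=3,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```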