Sklearn: TypeError: fit() missing 1 required positional argument: 'X'

TypeError: fit() missing 1 required positional argument: 'y' (using sklearn - ExtraTreesRegressor)

The problem is here

etr = ensemble.ExtraTreesRegressor
etr.fit(x_train, y_train)

You need to instantiate ensemble.ExtraTreesRegressor before calling fit on it. Change this code to

etr = ensemble.ExtraTreesRegressor()
etr.fit(x_train, y_train)

You get the seemingly strange error that y is missing because .fit is an instance method, so the first argument to the function is actually self. When you call .fit on an instance, self is passed automatically. If you call .fit on the class (as opposed to an instance), you have to supply self yourself. So your code is equivalent to ensemble.ExtraTreesRegressor.fit(self=x_train, X=y_train), which leaves y unfilled.

For an illustration of the difference, see the example below. The two forms are functionally equivalent, but the first is clearly clunkier.

from sklearn import ensemble

# Synthetic data.
x = [[0]]
y = [1]

# Calling fit on the class: the instance must be passed explicitly as self.
myinstance = ensemble.ExtraTreesRegressor()
ensemble.ExtraTreesRegressor.fit(myinstance, x, y)

# Calling fit on the instance: self is passed automatically.
etr = ensemble.ExtraTreesRegressor()
etr.fit(x, y)

StandardScaler: TypeError: fit() missing 1 required positional argument: 'X'

You have to write

scaler = StandardScaler() 

You forgot the parentheses; without them, scaler refers to the StandardScaler class itself rather than an instance of it.
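
For context, a minimal sketch of the intended usage; the training data X_train here is just a made-up example:

from sklearn.preprocessing import StandardScaler

# Hypothetical training data; substitute your own arrays.
X_train = [[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]]

scaler = StandardScaler()                 # note the () - this creates an instance
X_scaled = scaler.fit_transform(X_train)  # fit and transform in one step
print(X_scaled)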

sklearn: TypeError: fit() missing 1 required positional argument: 'X'

For SimpleImputer, fit must be called on an instance, so the correct syntax is:

imputer.fit(X[:,1:3])

instead of:

imputer = SimpleImputer.fit(X[:,1:3])
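
Putting it together, a minimal sketch of the intended pattern, assuming a mean-imputation strategy and some made-up data with missing values:

import numpy as np
from sklearn.impute import SimpleImputer

# Hypothetical data with NaNs in columns 1 and 2.
X = np.array([[0.0, 1.0, 2.0],
              [1.0, np.nan, 3.0],
              [2.0, 4.0, np.nan]])

imputer = SimpleImputer(strategy='mean')   # instantiate first
imputer.fit(X[:, 1:3])                     # then fit on the selected columns
X[:, 1:3] = imputer.transform(X[:, 1:3])   # replace missing values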

scikit-learn - TypeError: fit() missing 1 required positional argument: 'y'

You have already fitted the model on the training data with classifier.fit(X_train, Y_train). Since classifier is your trained model, what you want now is to predict the y values (y_pred) for the test data X_test. Hence what you need to do is

y_pred = classifier.predict(X_test)

But what you are doing is

y_pred = classifier.fit(X_test)

Hence you are getting the error fit() missing 1 required positional argument: 'y', because fitting also requires the dependent variable y. Just replace .fit with .predict in the line above.
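
For reference, a minimal sketch of the full fit-then-predict sequence; the choice of classifier and dataset here is only illustrative:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, y, random_state=0)

classifier = LogisticRegression(max_iter=1000)
classifier.fit(X_train, Y_train)        # fit needs both X and y
y_pred = classifier.predict(X_test)     # predict needs only X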

"fit() missing 1 required positional argument: 'y'" error

Try

from sklearn.model_selection import train_test_split

X = df[['Avg. Area Income', 'Avg. Area House Age', 'Avg. Area Number of Rooms',
'Avg. Area Number of Bedrooms', 'Area Population']]

y = df['Price']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=101)

from sklearn.linear_model import LinearRegression

lm = LinearRegression()

lm.fit(X_train,y_train)

You forgot the () after LinearRegression; lm = LinearRegression assigns the class itself, not an instance.
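
Once the model is fitted you can predict on the held-out data, for example (the choice of metric here is only a suggestion):

from sklearn.metrics import r2_score

predictions = lm.predict(X_test)
print(r2_score(y_test, predictions))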

TypeError: fit() missing 1 required positional argument: 'y'

all_estimators does not return instances of estimators but only their classes (see the documentation). When defining the pipeline, you should instantiate an object of that class:

for name, estimator in sklearn.utils.all_estimators(type_filter='regressor'):
    model = make_pipeline(StandardScaler(), estimator())  # <-- change in this line

Note the () after estimator. Now you have actual objects that can be fitted to data.


Concerning the except block: by default, cross_validate will just assign np.nan to the score if an error occurs. To actually raise the error, set error_score='raise' in cross_validate:

scores = cross_validate(model, X, y, scoring='r2', error_score='raise')
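
A fuller sketch of the whole loop, with the imports it assumes and a toy dataset (load_diabetes) standing in for your data; estimators that require extra constructor arguments are simply skipped:

import sklearn.utils
from sklearn.datasets import load_diabetes
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)

for name, estimator in sklearn.utils.all_estimators(type_filter='regressor'):
    try:
        model = make_pipeline(StandardScaler(), estimator())   # instantiate the class
        scores = cross_validate(model, X, y, scoring='r2', error_score='raise')
        print(name, scores['test_score'].mean())
    except Exception as exc:
        # some estimators require additional constructor arguments and are skipped here
        print(name, 'skipped:', exc)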

fit() missing 1 required positional argument: 'self'

The problem is that you create the model with svc = SVC(kernel = "poly"), but you then call fit on the SVC class itself rather than on that instance, so self is never supplied.

Change the code to:

svc_model = SVC(kernel = "poly")
svc_model.fit(X=Xtrain, y=ytrain)
predictions = svc_model.predict(Xtest)

I also suggest specifying the test size; a common practice is 30% for testing and 70% for training, which you can set like this:

Xtrain, Xtest, ytrain, ytest = train_test_split(X,y,test_size=0.30, random_state=42)
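
End to end, the corrected flow might look like this; the iris data is only a stand-in for your own X and y:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.30, random_state=42)

svc_model = SVC(kernel = "poly")       # instantiate the class first
svc_model.fit(X=Xtrain, y=ytrain)      # fit is then called on the instance
predictions = svc_model.predict(Xtest)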

