Class List Keeps Printing Out as Class Name in Console

Class List Keeps Printing Out As Class Name In Console?

You should override ToString() in your class to produce whatever format you want, for example:

public class SharePrices
{
    public DateTime theDate { get; set; }
    public decimal sharePrice { get; set; }

    public override string ToString()
    {
        return String.Format("The Date: {0}; Share Price: {1};", theDate, sharePrice);
    }
}

By default, without an override, ToString() returns the fully qualified name of the object's type. That is why you see the class name instead of your data.

How to fix Console Writing the name of the Class?

I added an override of ToString so that your Point class produces the expected output, such as "x:3 y:4", when converted to a string.

class Point
{
    public int x { get; private set; }
    public int y { get; private set; }

    public Point(int x2, int y2)
    {
        x = x2;
        y = y2;
    }

    public override string ToString()
    {
        return $"x:{x,-3} y:{y,-3}";
    }
}

As written, Point is also a good candidate for being a struct instead of a class.

Why writing items to console writes only namespace and class name instead of data?

You are passing the object itself to Console.WriteLine(item) instead of passing a string. Console.WriteLine invokes that object's ToString() method, which by default returns the namespace plus the class name. You can override this behavior like so:

public class City
{
    public string CityName { get; set; }
    public int Temperature { get; set; }

    public City(string name, int temp) // constructor
    {
        this.CityName = name;
        this.Temperature = temp;
    }

    public override string ToString()
    {
        return string.Format("{0} {1}", CityName, Temperature);
    }
}

Or you can use another overload of WriteLine method:

Console.WriteLine("{0} {1}", item.CityName, item.Temperature);

How to print instances of a class using print()?

>>> class Test:
...     def __repr__(self):
...         return "Test()"
...     def __str__(self):
...         return "member of Test"
...
>>> t = Test()
>>> t
Test()
>>> print(t)
member of Test

The __str__ method is what gets called when you print the object, and the __repr__ method is what gets called when you use the repr() function (or when you inspect the object at the interactive prompt).

If no __str__ method is given, Python will print the result of __repr__ instead. If you define __str__ but not __repr__, Python falls back to the default __repr__ (the `<__main__.Test object at 0x...>` form) but still uses __str__ for printing.
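A minimal sketch of that fallback behavior (the class names here are illustrative, and the hex address in the default repr varies per run):

```python
class OnlyStr:
    """Defines __str__ but not __repr__."""
    def __str__(self):
        return "member of OnlyStr"

class Neither:
    """Defines neither method, so both fall back to the default."""
    pass

o = OnlyStr()
print(str(o))    # uses __str__ -> "member of OnlyStr"
print(repr(o))   # default repr, e.g. <__main__.OnlyStr object at 0x...>

n = Neither()
print(str(n))    # no __str__ either, so str() falls back to the default repr
```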

How to print out all the elements of a List in Java?

Here is an example of printing out the elements of a list:

import java.util.ArrayList;
import java.util.List;

public class ListExample {

    public static void main(String[] args) {
        List<Model> models = new ArrayList<>();

        // TODO: First create your model and add it to the models ArrayList,
        // to prevent a NullPointerException when trying this example

        // Print the name from the list...
        for (Model model : models) {
            System.out.println(model.getName());
        }

        // Or like this...
        for (int i = 0; i < models.size(); i++) {
            System.out.println(models.get(i).getName());
        }
    }
}

class Model {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

Trying to print class names for dog breed but it keeps saying list index out of range

I believe you have several issues, and most of them can be fixed with just a few characters.

First, I suggest what @Alekhya Vemavarapu suggested - run your code with a debugger to isolate each line and inspect the output. This is one of the great benefits of PyTorch's dynamic graphs.

Secondly, the most probable cause of your issue is the argmax statement, which you use incorrectly. You do not specify the dimension to perform the argmax over, so PyTorch automatically flattens the tensor and performs the operation over the full-length vector. Thus, you get a number between 0 and MB_Size x num_classes - 1. See the official documentation for this method.

So, given your fully connected layer, I assume your output has shape (MB_Size, num_classes). If so, you need to change your code to the following line:

pred = torch.argmax(output,dim=1)

and that's it. Otherwise, just choose the dimension along which the logits lie.
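To see why the missing dim matters, here is a plain-Python sketch (no PyTorch required) of the two behaviors, using a hypothetical 2x4 mini-batch of logits:

```python
# A hypothetical mini-batch of logits: MB_Size=2 rows, num_classes=4 columns.
logits = [
    [0.1, 0.3, 0.2, 0.0],   # sample 0: best class is index 1
    [0.0, 0.2, 0.1, 0.9],   # sample 1: best class is index 3
]

# What argmax with no dim does: flatten everything and return one global
# index in [0, MB_Size * num_classes - 1].
flat = [v for row in logits for v in row]
global_idx = max(range(len(flat)), key=flat.__getitem__)
print(global_idx)   # 7 -- not a valid class label

# What argmax with dim=1 does: one best class index per sample.
per_sample = [max(range(len(row)), key=row.__getitem__) for row in logits]
print(per_sample)   # [1, 3]
```

Indexing a class-name list with the flattened result is exactly what produces "list index out of range" once the global index exceeds num_classes - 1.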

The third thing to consider is dropout and the other ways a training configuration can affect inference. For instance, dropout in some frameworks may require multiplying the output by 1/(1-p) at inference time (or not, since it can be done during training), batch normalization may behave differently since the batch size is different, and so on. Additionally, to reduce memory consumption, no gradients should be computed. Luckily, the PyTorch developers were thoughtful and provided us with torch.no_grad() and model.eval() for that.

I strongly suggest making a habit of this; it changes your code by only a few letters:

output = model_transfer.eval()(image)

and you're done!

Edit:

This is a simple case of misusing the PyTorch framework: not reading the docs and not debugging the code. The following code is simply incorrect:

model_transfer.fc.out_features = 133

This line does not actually create a new fully connected layer; it only changes an attribute on the existing module. Try it in your console:

import torch
a = torch.nn.Linear(1,2)
a.out_features = 3
print(a.bias.data.shape, a.weight.data.shape)

Output:

torch.Size([2]) torch.Size([2, 1])

which shows that the actual weight matrix and bias vector remain at their original dimensions.

A correct way to perform transfer learning is to keep the backbone (usually the convolutional layers up to the fully connected ones in these types of models) and overwrite the head (the FC layer in this case) with your own. If the original model has just one fully connected layer, you do not have to change the forward pass of your model and you're good to go.
Since this answer is already long enough, just visit the transfer learning tutorial in the PyTorch docs to see how it can be done.
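As a minimal sketch of "overwrite the head, don't mutate its attributes", here is a toy model standing in for a pretrained network (the layer names and sizes are illustrative, not from the original code):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Toy stand-in for a pretrained model: a backbone plus an fc head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 16)  # pretend this is the conv stack
        self.fc = nn.Linear(16, 2)        # original head with 2 classes

    def forward(self, x):
        return self.fc(torch.relu(self.backbone(x)))

model = TinyNet()

# Wrong: model.fc.out_features = 133 would leave the weights at shape (2, 16).
# Right: replace the head with a freshly constructed layer of the new size.
model.fc = nn.Linear(model.fc.in_features, 133)

out = model(torch.randn(4, 8))
print(out.shape)   # torch.Size([4, 133])
```

With a real torchvision model the pattern is the same: assign a new nn.Linear to the head attribute, sized from the old layer's in_features.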

Good luck to y'all.


