Map.clear() vs new Map: which one is better?
Complicated question. Let's see what happens.
When you instantiate a new instance, it is backed by a new array, so the garbage collector has to clear all the keys and values of the previous map, plus the reference to the map itself. An O(n) algorithm is executed either way, just in the garbage collector thread. For 1000 records you won't see any difference.
BUT. The performance guide tells you that it is always better not to create new objects if you can avoid it. So I would go with the clear() method.
Anyway, try both variants and measure. Always measure!
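In that spirit, here is a minimal measurement sketch using System.nanoTime. It is only a rough illustration of the trade-off described above (for trustworthy numbers a harness like JMH would be needed); the map size and value are arbitrary choices for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class ClearVsNew {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        for (int i = 0; i < 100_000; i++) map.put(i, "v");

        long t0 = System.nanoTime();
        map.clear();                 // O(n): nulls out the backing table in place
        long clearNanos = System.nanoTime() - t0;

        for (int i = 0; i < 100_000; i++) map.put(i, "v");
        long t1 = System.nanoTime();
        map = new HashMap<>();       // O(1) here; the old table is collected later
        long newNanos = System.nanoTime() - t1;

        System.out.println("clear(): " + clearNanos + " ns, new HashMap: " + newNanos + " ns");
    }
}
```

Either way the map ends up empty; the cost of reclaiming the old entries just lands in different places.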
Most efficient way to clear a Java HashMap
I would prefer clear() because you can have the Map as a final member.
class Foo {
    private final Map<String, String> map = new HashMap<String, String>();

    void add(String string) {
        map.put(string, "a value");
    }

    void clear() {
        map.clear();
    }
}
If you assign a new Map every time, you can run into multithreading issues. Below is an almost thread-safe example that uses a Map wrapped in Collections.synchronizedMap, but assigns a new map every time you clear it.
class MapPrinter {
    private static Map<String, String> createNewMap() {
        return Collections.synchronizedMap(new HashMap<String, String>());
    }

    private Map<String, String> map = createNewMap();

    void add(String key, String value) {
        // put is atomic due to synchronizedMap
        map.put(key, value);
    }

    void printKeys() {
        // to iterate, we need to synchronize on the map
        synchronized (map) {
            for (String key : map.keySet()) {
                System.out.println("Key: " + key);
            }
        }
    }

    void clear() {
        // hmmm.. this does not look right
        synchronized (map) {
            map = createNewMap();
        }
    }
}
The clear method is responsible for a big problem: synchronized(map) will no longer work as intended, since the map object can change, and two threads can now be inside those synchronized blocks simultaneously because they don't lock the same object. To make this actually thread-safe, we would either have to synchronize completely externally (which would make Collections.synchronizedMap useless), or we could simply make the field final and use Map.clear():
void clear() {
    // atomic via synchronizedMap
    map.clear();
}
Other advantages of a final Map (or anything final):
- No extra logic to check for null or to create a new one. The overhead in code you may have to write to swap the map can be quite a lot.
- No accidentally forgetting to assign a Map.
- "Effective Java #13: Favor Immutability": while the map is mutable, our reference is not.
Are map = null and map.clear() the same?
The last one is the best, unless you are coding for a system with very limited memory; in that case the first one is best.
- In the first case you have to clear the hash table, which takes some computation.
- The second won't even work: you are left with a null reference, not an empty map.
- In the third case you just throw away the old hash map and let the garbage collector handle it.
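The three cases above can be shown side by side in a small sketch (variable names and values are made up for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class ThreeWaysToEmpty {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);

        // 1. clear(): same object, entries removed in a linear pass
        map.clear();
        System.out.println(map.isEmpty());   // true

        // 2. map = null: there is no map anymore; a further map.put(...)
        //    would throw a NullPointerException
        map = null;

        // 3. map = new HashMap<>(): fresh empty map, old one becomes garbage
        map = new HashMap<>();
        map.put("b", 2);
        System.out.println(map.size());      // 1
    }
}
```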
Why does clearing a HashMap also clear the map added to an ArrayList?
dataList.add(map) puts a reference to map in the list, not a copy. When you then call map.clear() afterwards, it erases the contents of the map in the list too, because it is the very same object. Do dataList.add(map.clone()) instead (note that HashMap.clone() returns Object, so a cast is needed) or, preferably, do map = new HashMap<>(); afterwards:
map.put(Answer.ID, "0");
map.put(Answer.IMAGE, "color_icon_awesome");
map.put(Answer.TITLE, firstOption);
dataList.add(map);
map = new HashMap<>();
Sidenote: your code looks like you could use an object instead of the map:
class AnswerObject {
    private String id;
    private String image;
    private String title;

    public AnswerObject(String id, String image, String title) {
        this.id = id;
        this.image = image;
        this.title = title;
    }

    // some getters, setters and other useful code
}
This should make your code nicer and more readable:
List<AnswerObject> dataList = new ArrayList<>();
dataList.add(new AnswerObject("0", "color_icon_awesome", firstOption));
dataList.add(new AnswerObject("1", "color_icon_awesome", secondOption));
dataList.add(new AnswerObject("2", "color_icon_awesome", thirdOption));
dataList.add(new AnswerObject("3", "color_icon_awesome", fourthOption));
But feel free to ignore that ;-)
Map clear vs null
If a map is not referenced from other objects (where it may be hard to set a new one), simply null-ing out the old map and starting from scratch is probably lighter-weight than calling clear(), because no linear-time cleanup needs to happen. With garbage collection costs being tiny on modern systems, there is a good chance you would save some CPU cycles this way. You can avoid resizing the new map multiple times by specifying its initial capacity.
One situation where clear() is preferred is when the map object is shared among multiple objects in your system. For example, if you create a map, hand it to several objects, and then keep some shared information in it, replacing the map with a new one in all these objects may require keeping references to every object that holds it. In situations like that, it's easier to keep calling clear() on the same shared map object.
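The sharing point is easy to demonstrate: clear() is visible through every reference to the map, while reassignment only changes one local variable. The Consumer class below is a made-up holder for the example:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedMapDemo {
    // Hypothetical holder that keeps a reference to a shared map
    static class Consumer {
        final Map<String, String> shared;
        Consumer(Map<String, String> shared) { this.shared = shared; }
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", "v");
        Consumer a = new Consumer(map);
        Consumer b = new Consumer(map);

        // clear() empties the one object everyone is pointing at
        map.clear();
        System.out.println(a.shared.isEmpty() && b.shared.isEmpty()); // true

        // reassignment only rebinds the local variable; a and b still
        // see the old (now empty) map
        map = new HashMap<>();
        map.put("k2", "v2");
        System.out.println(a.shared.containsKey("k2")); // false
    }
}
```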
Keep the value of the main map after clearing the map
The reason this is happening is that you are using the same key. If the map previously contained a mapping for the key, the old value is replaced.
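The replacement behavior is easy to see from put's return value, which is the previous value for the key (or null if there was none). Keys and values here are arbitrary examples:

```java
import java.util.HashMap;
import java.util.Map;

public class KeyReplacement {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        String previous = map.put("apple", "first");  // no prior mapping
        String replaced = map.put("apple", "second"); // overwrites "first"

        System.out.println(previous);          // null
        System.out.println(replaced);          // first
        System.out.println(map.get("apple"));  // second
        System.out.println(map.size());        // 1
    }
}
```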
Update
<>.add(map) will put a reference to map in the list, so it's not a copy. When you then do map.clear() afterwards, it erases the content of the map in the list too, because it is the very same object. Do <>.add(map.clone()) instead or (preferably) do map = new HashMap<>();
Solution:
LinkedHashMap<String, Object> holder = new LinkedHashMap<String, Object>();
final HashMap<String, LinkedHashMap<String, Object>> mainHolder = new LinkedHashMap<String, LinkedHashMap<String, Object>>();

holder.put("firstName", "Alex");
holder.put("lastName", "Cruz");
mainHolder.put("apple", holder);

// instead of clearing the shared map, point the local variable at a fresh one
holder = new LinkedHashMap<>();

LinkedHashMap<String, Object> temp = mainHolder.get("apple");
temp.put("quantity", 13);
mainHolder.put("apple", temp);

System.out.println(mainHolder);
// prints: {apple={firstName=Alex, lastName=Cruz, quantity=13}}
What is faster: HashSet clear or new HashSet?
Although clear might be more performant (this depends on the size of the set), in practice it is unlikely to make a significant difference to the performance of your application. Even in the lines of code around this call, performance will be dominated by other factors such as JIT compilation.
What is important is design quality, which will make it easy to refactor for performance after you have profiled your code. In most cases avoiding hard-to-track state changes is important, and creating a new HashSet is better design than reusing a HashSet.
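One way to read "new is better design": return a fresh set from each operation instead of clearing and refilling a shared one, so no state leaks between uses. The method and inputs below are made up for the sketch:

```java
import java.util.HashSet;
import java.util.Set;

public class FreshSetPerBatch {
    // Returning a fresh set per call avoids stale state between batches
    static Set<String> uniqueWords(String[] words) {
        Set<String> seen = new HashSet<>();
        for (String w : words) seen.add(w);
        return seen;
    }

    public static void main(String[] args) {
        Set<String> first = uniqueWords(new String[]{"a", "b", "a"});
        Set<String> second = uniqueWords(new String[]{"c"});
        System.out.println(first.size());   // 2
        System.out.println(second.size());  // 1
    }
}
```

Each caller owns its own set, so there is no shared mutable state to reason about when profiling or refactoring later.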