How can garbage collectors be faster than explicit memory deallocation

Questions

http://www.hpl.hp.com/personal/Hans_Boehm/gc/myths.ps

GC Myth 3: Garbage collectors are always slower than explicit memory deallocation.
GC Myth 4: Garbage collectors are always faster than explicit memory deallocation.

This was a big WTF for me. How would GC be faster than explicit memory deallocation? Isn't it essentially calling an explicit memory deallocator when it frees the memory / makes it available for reuse? So... WTF... what does it actually mean?

Very small objects & large sparse heaps ==> GC is usually cheaper, especially with threads

I still don't understand it. It's like saying C++ is faster than machine code (if you don't see what's wrong with that sentence, please stop programming. Let the -1 begin). After a quick Google search, one source suggested GC is faster when you have a lot of memory. What I think it means is that it doesn't bother with the free at all. Sure, that can be fast, and I have written a custom allocator that does exactly that, never freeing at all (void free(void* p) {}), in ONE application that doesn't free any objects (it only frees at the end, when it terminates) and has the definition mostly for the sake of libraries and things like the STL. So... I am pretty sure that will be faster than GC as well. If I still want freeing, I guess I can use an allocator that uses a deque, or its own implementation that's essentially

if (freeptr < someaddr) {
    /* buffer this pointer to be freed later */
    *freeptr = ptr;
    ++freeptr;
} else {
    /* buffer is full: flush every deferred free, then start refilling */
    freestuff();
    freeptr = freeptrroot;
    *freeptr = ptr;    /* keep the pointer that triggered the flush */
    ++freeptr;
}

which I am sure would be really fast. I sort of answered my own question already: the case where the collector is never called is the case where it would be faster, but... I am sure that is not what the document means, since it mentions two collectors in its tests. I am sure the very same application would be slower if the collector were called even once, no matter which GC is used. If it's known that nothing ever needs to be freed, then an empty free body can be used, like in that one app I had.
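
For concreteness, the "never free individual objects" allocator described above could look roughly like the sketch below: allocation bumps an offset into one big pool, free is a no-op, and everything is released in a single call at program exit. The names (arena_alloc, arena_free, arena_release_all) and the fixed 1 MiB pool size are invented for the example.

#include <stddef.h>
#include <stdlib.h>

#define POOL_SIZE (1 << 20)      /* 1 MiB, arbitrary for the sketch */

/* One big pool; individual objects are never freed. */
static char  *pool      = NULL;
static size_t pool_used = 0;

void *arena_alloc(size_t n)
{
    if (pool == NULL)
        pool = malloc(POOL_SIZE);
    if (pool == NULL)
        return NULL;                      /* allocation of the pool failed */
    n = (n + 15) & ~(size_t)15;           /* keep 16-byte alignment */
    if (pool_used + n > POOL_SIZE)
        return NULL;                      /* out of pool space */
    void *p = pool + pool_used;
    pool_used += n;
    return p;
}

/* The "free that does nothing" from the question. */
void arena_free(void *p) { (void)p; }

/* Release everything at once, e.g. when the program terminates. */
void arena_release_all(void)
{
    free(pool);
    pool = NULL;
    pool_used = 0;
}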

Anyway, I am posting this question for further insight.

Answers

How would GC be faster than explicit memory deallocation?
    • GCs can pointer-bump allocate into a thread-local generation and then rely upon copying collection to handle the (relatively) uncommon case of evacuating the survivors. Traditional allocators like malloc often compete for global locks and search trees (see the sketch after this list).
    • GCs can deallocate many dead blocks simultaneously by resetting the thread-local allocation buffer instead of calling free on each block in turn, i.e. O(1) instead of O(n).
    • By compacting old blocks so more of them fit into each cache line. The improved locality increases cache efficiency.
    • By taking advantage of extra static information such as immutable types.
    • By taking advantage of extra dynamic information such as the changing topology of the heap via the data recorded by the write barrier.
    • By making more efficient techniques tractable, e.g. by removing the headache of manual memory management from wait-free algorithms.
    • By deferring deallocation to a more appropriate time or off-loading it to another core. (thanks to Andrew Hill for this idea!)
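
To make the first two points concrete, here is a rough sketch of thread-local bump-pointer allocation with a bulk reset. The names (tlab_alloc, tlab_reset) and the 64 KiB buffer size are invented for the example, and a real collector would trigger a collection and evacuate survivors on the slow path instead of returning NULL.

#include <stddef.h>

#define TLAB_SIZE (64 * 1024)    /* arbitrary buffer size for the sketch */

/* Per-thread allocation buffer: no locks, no free lists, no search. */
static _Thread_local char   tlab[TLAB_SIZE];
static _Thread_local size_t tlab_top = 0;

/* Fast path: allocation is a bounds check plus a pointer bump. */
void *tlab_alloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;          /* keep 16-byte alignment */
    if (tlab_top + n > TLAB_SIZE)
        return NULL;                      /* slow path: a real GC would collect here */
    void *p = &tlab[tlab_top];
    tlab_top += n;
    return p;
}

/* Once survivors have been copied out, every dead object in the buffer
   is reclaimed by one assignment: O(1), not one free() per block. */
void tlab_reset(void)
{
    tlab_top = 0;
}

Compared with a malloc/free pair per object, the fast path touches only thread-local data, and reclaiming the whole buffer costs a single store, which is where the claimed speed-ups for small objects come from.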

Source

License: CC BY-SA 3.0

http://stackoverflow.com/questions/5815936/how-can-garbage-collectors-be-faster-than-explicit-memory-deallocation
