Archive: March 10, 2008



managed memory leaks

Monday,  03/10/08  08:39 PM

(... I had such a good time with my rant about the .NET CLR yesterday, I thought I'd continue the series...)

This is another in my series of foaming rants whereby you the reader become convinced of my status as a coding dinosaur. So be it.

<rant type=foaming optional=absolutely>

As part of my subscription to Microsoft’s Developer Network (MSDN) I receive monthly issues of MSDN Magazine. You might think this would be a great thing, full of useful information, but really it is just a weird wonder to me. For one thing, most of the technology under discussion in this magazine is only of esoteric interest – things like Avalon (the replacement for GDI in Windows Vista, since renamed Windows Presentation Foundation), XAML (the declarative language used to define GUIs in Avalon), and Indigo (a new technology for communication between remote objects, since renamed Windows Communication Foundation, supposedly fast enough to be useful, unlike COM+ and DCOM).

{
Imagine all the time wasted by all those people who read all those articles about the technical details of WinFS, the database-like file system which was going to be in Windows Vista, but which was scrapped because it was too slow. But I digress.
}

The point of view taken by this magazine is that the proper study for a software engineer is to learn how to use Microsoft’s tools to get their work done. Not how to understand user requirements, not how to design systems that work, not how to build code which runs fast and robustly, no, the idea is that if only you understand the tools well enough, they will do all that other stuff for you! Nobody I know believes this to be true – nobody just outside their senior year of college, anyway – but that is the point of view of the magazine. (Which I suspect is edited by people just outside their senior year of college.)

Okay, enough of that, on to today’s subject; which is: memory management! Ta da!

As we all know, memory leaks are one of the crummy things that have plagued programmers since the first core memory at time zero. One of the Really Great Things about managed code is that it completely solves memory leaks! Yes, that’s right, if you write your applications in C# or ASP.NET you will never have to worry about memory management.

(Stops to move a big grain of salt lying next to keyboard…)

So in a recent issue of MSDN Magazine one can find an article entitled “Identify and Prevent Memory Leaks in Managed Code”. How can this be? Memory leaks in managed code?

See, it turns out that there are two kinds of memory leaks. First, there are situations where memory is allocated, never to be freed again (or at least, never until the process is terminated – one of the best arguments for CGI in web applications, but I digress). This is the garden variety memory leak we all know and love. Second, there are situations where memory is allocated, and it will be freed someday, but it hasn’t been freed yet. This is a new exotic type of memory leak we don’t all know and don’t love, and which is the subject of the article in question. The difference is a bit philosophical, because if your program runs out of memory it doesn’t really matter whether there was a bunch of memory just about to be freed or not.

{
And actually under Windows your program doesn’t run out of memory, instead it just starts paging, along with all the other stuff trying to run, and because Windows has the worst paging algorithms of any modern OS this means performance death. So a user ends up trying to figure out why their machine has stopped with the disk light on solid, instead of wondering why a particular program died. (At least if a program dies you can restart it, but if you reach a paging heat death all you can do is go out for coffee, and flirt with the baristas at Starbucks. But I digress.)
}

The situation is this. Under the covers, “managed code” means your program is a bunch of hints to a master program about what to do. The master program is called the CLR (Common Language Runtime; your compiler emits an intermediate language called MSIL (Microsoft Intermediate Language), which the CLR compiles and executes at runtime, but I digress). When objects are allocated by your program, the CLR grabs memory from a heap. Your program never needs to deallocate objects explicitly; instead the CLR has a background activity called the Garbage Collector (GC), which figures out which objects are no longer being used and puts their memory back on the heap.

In the old world of compiled code, you had to remember which objects you’d allocated, and explicitly free them. This was crummy and led to a lot of garden variety leaks. In the new world of managed code, you don’t have to remember which objects you’ve allocated, and you don’t have to explicitly free them. This is great, no more garden variety leaks! Unfortunately the GC is sitting back there trying to figure out which objects are no longer being used, and when to put them back on the heap. It may have a bunch of objects which it will free someday, but which it isn’t sure it should delete now. These can lead to exotic type leaks. The article in question has a great term for this situation, a “midlife crisis”! There you are, with all this memory which could be freed, but the GC won’t free it, and your program doesn’t know this is going on, and poof, you run out of memory. The solution is that you have to provide hints to the GC so it knows when to free memory, and thereby avoid a midlife crisis.

Okay, let’s summarize. In the old world of compiled code, you had to remember which objects you’d allocated and explicitly free them. This was crummy and led to a lot of garden variety leaks. In the new world of managed code, you don’t have to remember which objects you’ve allocated, and you don’t have to explicitly free them... er, rather, you have to remember which objects you’ve allocated, and explicitly provide hints to the GC so it will free them for you. This is crummy and leads to a lot of exotic type leaks. Got that?

If you read this article you will really be impressed by the incredible variety of ways memory can be in limbo under the CLR. The kinds of things you have to do so the GC can figure out what to do are amazing. There is really a lot to learn, the situation is much more complicated than the old days, when you just had to delete everything which was new’ed. Whether this represents progress is a matter for debate, but you know which side I’m on…

{
Recently I did some work on ImageScope, Aperio's digital pathology viewer, which uses a bunch of COM objects packaged as the Viewport ActiveX control.

{
COM includes a mechanism for a sort of GC. Anytime you allocate an object, it has a reference count. Each time you create a new pointer to the object, you explicitly increase the reference count by invoking its AddRef() method. Each time you delete or reassign a pointer to the object, you explicitly decrease the reference count by invoking its Release() method. When the reference count reaches zero, there are no pointers to the object, and COM deletes it. This is a pretty useful mechanism although unless you’re careful it leads to a lot of garden variety leaks.
}

So in the course of working on ImageScope, and on Viewport, I had to plug a number of memory leaks. There were some leaks of the old C++ garden variety; objects were new’ed and not later delete’ed. (And in another variety, thank you Windows, there were objects which were SysAllocString()ed and not later SysFreeString()ed. But I digress.) And there were some leaks of the new COM garden variety; objects were AddRef()ed and not later Release()ed.

Care to guess which ones were easier to find?

I’m just glad I didn’t also have leaks of the new C# variety; objects which were new’ed and not later GC’ed (that is to say, not later in the sense of “soon”, rather than not later in the sense of “someday but not now”), because there were no explicit hints to the GC that they could be freed.

{
Taken directly from the article:

“This innocuous-looking code contains a major problem. All of these ASP.NET Page instances just became long-lived objects. The OnCacheItemRemoved is an instance method and the CacheItemRemovedCallback delegate contains an implicit this pointer, where this is the Page instance. The delegate is added to the Cache object. So there now exists a dependency from the Cache to the delegate to the Page instance. When a garbage collection occurs, the Page instance remains reachable from a rooted reference, the Cache object. The Page instance (and all the temporary objects it created while rendering) will now have to wait for at least five minutes before being collected, during which time they will likely be promoted to a Gen2. Fortunately this example has a simple solution. Make the callback function static. The dependency on the Page instance is broken and it can now be collected cheaply as a Gen0 object.”

Well I think we can all understand that, right? Certainly managed code has simplified our lives and made memory management much easier.
}

But I digress.
}

</rant>

 

Monday,  03/10/08  09:07 PM

Just another day in paradise; here in Southern California, Spring has officially sprung.  I went for a ride this afternoon in short sleeves, must have been 75, bright and sunny.  The flowers are blowing out of the ground everywhere.  I love it.

I'm not really trying to gloat, but I know it is not so everywhere...

Meanwhile, the Ole filter makes a pass...

So have you been following the whole Democratic delegates from Florida story?  (What is it with Florida, anyway?)  This is amazing; apparently the state went ahead and had a primary too early in the year, so they were stripped of all their delegates.  Since Hillary Clinton "won" Florida, and since it is going down to the wire between her and Barack Obama, this really matters.  Michigan is apparently in the same boat.  I'm tempted to comment, why vote for a Democrat when they can't even select their candidates properly, but I won't. 

WSJ: Bigger monitors = more productivity.  "Researchers at the University of Utah tested how quickly people performed tasks like editing a document and copying numbers between spreadsheets while using different computer configurations: one with an 18-inch monitor, one with a 24-inch monitor and with two 20-inch monitors. Their finding: People using the 24-inch screen completed the tasks 52% faster than people who used the 18-inch monitor; people who used the two 20-inch monitors were 44% faster than those with the 18-inch ones."  This applies equally to digital pathology and software development; in my experience bigger better monitors are a cheap way to help people be more productive... 

Picture of the day: this rather astonishing view of Mercury, Venus, and the Moon, all appearing alongside a radio telescope array.  Fantastic.  (Please click to enlarge.)  "This picturesque conjunction was caught on camera behind elements of the Australia Telescope Compact Array (ATCA) near the town of Narrabri in rural New South Wales. The ATCA consists of six radio telescopes in total, each one larger than a house."  Beautiful technology, in both senses. 

 

 
 
