Mike Torrettinni

Why TList uses a lot more memory than TArray?


2 hours ago, David Heffernan said:

I'm talking about an array like structure (i.e. O(1) random access, contiguous indices) whose backing allocation is not contiguous.

 

I'm not aware of many (if any) collections in widespread use that are like this.

There once was one: look up STColl (part of TurboPower's tpsystools):

 

- STCOLL theoretically allows up to 2 billion elements. The collection
  is "sparse" in the sense that most of the memory is allocated only
  when a value is assigned to an element in the collection.

- STCOLL is implemented as a linked list of pointers to pages. Each
  page can hold a fixed number of collection elements, the size being
  specified when the TStCollection is created. Only when an element
  with a given index is written to are a page descriptor and a page
  allocated for it. However, the first page is allocated when the
  collection is created.

 

Unfortunately, TurboPower went out of business back in - hm, the article on Slashdot says 08/01/03, so was it 2008 or 2003? Damn ambiguous American date format - I think it was 2003. Their libraries were open-sourced, though.
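The paged layout described above can be sketched in a few lines. This is a hypothetical illustration, not the actual tpsystools code - STColl linked its pages in a list, whereas this sketch reaches them through a page directory, but the "allocate a page only on first write" idea is the same:

```pascal
type
  TPage = TArray<Pointer>;

  // Hypothetical sketch of the paged, sparse idea described above.
  TSparsePagedList = class
  private
    FPages: TArray<TPage>;  // entries stay nil until first written to
    FPageSize: Integer;
  public
    constructor Create(APageSize: Integer);
    procedure Put(Index: Integer; Value: Pointer);
    function Get(Index: Integer): Pointer;
  end;

constructor TSparsePagedList.Create(APageSize: Integer);
begin
  inherited Create;
  FPageSize := APageSize;
end;

procedure TSparsePagedList.Put(Index: Integer; Value: Pointer);
var
  PageNo: Integer;
begin
  PageNo := Index div FPageSize;
  if PageNo >= Length(FPages) then
    SetLength(FPages, PageNo + 1);        // only the small directory grows
  if FPages[PageNo] = nil then
    SetLength(FPages[PageNo], FPageSize); // page allocated on first write
  FPages[PageNo][Index mod FPageSize] := Value;
end;

function TSparsePagedList.Get(Index: Integer): Pointer;
var
  PageNo: Integer;
begin
  PageNo := Index div FPageSize;
  if (PageNo < Length(FPages)) and (FPages[PageNo] <> nil) then
    Result := FPages[PageNo][Index mod FPageSize]
  else
    Result := nil;                        // untouched elements read as nil
end;
```

Random access stays O(1) (one division, two indexings), yet the backing allocation is never one contiguous block.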


Unfortunately what was great in 2003 might not be anymore because hardware evolved.

 

However, it is true that once a contiguous array passes a certain size, its memory spans multiple OS pages anyway, and then it matters little whether the backing allocation is contiguous - only the calculation of the index needs to be as fast as possible.
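One way to keep that index calculation cheap is to make the page size a power of two, so the page/offset split reduces to a shift and a mask instead of a division. Illustrative only; the names are made up:

```pascal
const
  PageShift = 12;                      // 2^12 = 4096 elements per page
  PageMask  = (1 shl PageShift) - 1;

// Splitting a logical index into (page, offset) with no division.
function PageIndex(Index: NativeInt): NativeInt; inline;
begin
  Result := Index shr PageShift;
end;

function PageOffset(Index: NativeInt): NativeInt; inline;
begin
  Result := Index and PageMask;
end;
```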

10 minutes ago, Stefan Glienke said:

Unfortunately what was great in 2003 might not be anymore because hardware evolved.

True, but even in 2003 it was a 16-bit leftover from the DOS days.

1 hour ago, Anders Melander said:

Please google "virtual memory".

Even virtual memory could be insufficient if an app requests to expand a 1G array

30 minutes ago, dummzeuch said:

There was once: Look up STColl (part of TurboPower's tpsystools):

Doubt usage was widespread

1 minute ago, Fr0sT.Brutal said:

Even virtual memory could be insufficient if an app requests to expand a 1G array

Why? What is special about 1G?

3 minutes ago, Fr0sT.Brutal said:

Even virtual memory could be insufficient if an app requests to expand a 1G array

Please google "64-bit"



I can easily create a single array that exceeds 1 GB:

[screenshot showing the >1 GB array]

 

Using {$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}, an even bigger single array:

 

[screenshot showing the even larger array]

 

This is a 32-bit project.
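For reference, this is roughly where the directive goes in a 32-bit project. A minimal sketch (the program name and the 1 GB figure are just for illustration, not the exact sizes from the screenshots above):

```pascal
program BigArray32;

{$APPTYPE CONSOLE}

uses
  Winapi.Windows;

// IMAGE_FILE_LARGE_ADDRESS_AWARE is declared in Winapi.Windows,
// so the directive comes after the uses clause.
{$SETPEFLAGS IMAGE_FILE_LARGE_ADDRESS_AWARE}

var
  Data: TArray<Byte>;
begin
  // Without the flag, a 32-bit process gets 2 GB of address space;
  // with it, up to ~4 GB on 64-bit Windows, so an allocation beyond
  // 1 GB has a much better chance of finding contiguous space.
  SetLength(Data, 1024 * 1024 * 1024); // 1 GB in one block
  Writeln('Allocated ', Length(Data), ' bytes');
end.
```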


50 minutes ago, Fr0sT.Brutal said:

Even virtual memory could be insufficient if an app requests to expand a 1G array

Aha, expand, I missed it the first time. My example above is not good then.


Dude, we are talking about 64-bit code. You can't expect to make a 1 GB array and do anything useful with it in a 32-bit process. There is every chance that the virtual memory manager won't be able to find 1 GB of contiguous address space. This is virtual memory, and in 32-bit code the limits usually come from address space rather than physical memory.

45 minutes ago, David Heffernan said:

in 32-bit code the limits usually come from address space rather than physical memory.

Yes, I will probably hit a similar limit even in 64-bit code if the customer has 4 GB of memory. But in that case it's easier to upgrade the memory and all is good. So I guess 64-bit will make it easier with large arrays or lists (now that I know they consume similar/the same amount of memory 🙂).

Just now, Mike Torrettinni said:

Yes, I will probably hit a similar limit even in 64-bit code if the customer has 4 GB of memory.

No.

You still don't understand the difference between physical and virtual memory.



So you are saying I can have more than 4 GB in one (or more) arrays in a 64-bit project, even if the customer has only 4 GB of memory installed? If so, that is awesome news!

I know optimization is very good, but sometimes it can be overkill, especially when dealing with large log or data files - and without the memory restrictions in 64-bit, I could put the time spent optimizing memory consumption into something more useful.


No. Wrong conclusion. That memory will be backed by the swap file. The user may not have one. And even if they do, performance is terrible.

 

Seriously, you are way down the rabbit hole. Give up thinking about the advanced stuff and concentrate on the basics. Make sure you know how lists work. 
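For what it's worth, the physical/virtual distinction in question can be seen directly from Delphi. A minimal sketch using the Win32 GlobalMemoryStatusEx call (program name is just for illustration):

```pascal
program MemStatus;

{$APPTYPE CONSOLE}

uses
  Winapi.Windows;

var
  Status: TMemoryStatusEx;
begin
  Status.dwLength := SizeOf(Status);
  if GlobalMemoryStatusEx(Status) then
  begin
    // On a machine with 4 GB of RAM running 64-bit Windows, TotalPhys
    // reports ~4 GB while TotalVirtual for a 64-bit process is vastly
    // larger - the two limits are not the same thing.
    Writeln('Physical RAM:          ', Status.ullTotalPhys div (1024 * 1024), ' MB');
    Writeln('Virtual address space: ', Status.ullTotalVirtual div (1024 * 1024), ' MB');
  end;
end.
```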


Yes, I agree. Well, now that I see lists are not that much better than arrays at memory consumption, I don't know - I will have to try and see what I'm comfortable with. I've been using arrays for so long, it's hard to switch.

 

Thanks.


How could lists possibly consume less than an array?!!! 

 

Let me ask you a question: do you have any code that does SetLength(arr, Length(arr) + 1)?


Yes, I have a few, but not in loops. The few places I found are in simple reports with just a few lines, like report headers or category selection views - not in the loading, parsing, and other heavy data-manipulation methods. But I had plenty of these when I started, many years ago 🙂


Well, +1 is used in those cases, but mostly for up to 10 records. I'm not sure where you are going with these questions?

1 hour ago, David Heffernan said:

That memory will be backed by the swap file.

Technically it's the page file.

The terms page file and swap file have their roots in VMS. Both Windows NT and VMS were designed by his holiness, Dave Cutler.

 

The swap file doesn't exist anymore in Windows (I believe it was there in early versions) but was used when a process's working set was completely removed from physical memory, e.g. in low-memory situations or if the process went into extended hibernation.


Then you'll fragment your address space and, perhaps worse, spend time doing expensive heap allocations.

 

I don't know why you think

```pascal
SetLength(arr, Length(arr) + 1);
arr[High(arr)] := item;
```

is preferable to

```pascal
list.Add(item);
```

 

And what if you hold class instances in a list - how do you manage their lifetime?

There are lots of good reasons to use a good library of collection classes.
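To make the contrast concrete, here is an illustrative side-by-side (the procedure names are hypothetical):

```pascal
uses
  System.Generics.Collections;

// Appending N items with SetLength(+1) can reallocate and copy the
// whole array on every append - O(N^2) copying in the worst case,
// plus heap churn and address-space fragmentation.
procedure GrowOneByOne(const Items: TArray<Integer>);
var
  Arr: TArray<Integer>;
  Item: Integer;
begin
  for Item in Items do
  begin
    SetLength(Arr, Length(Arr) + 1);
    Arr[High(Arr)] := Item;
  end;
end;

// TList<T> grows its capacity geometrically, so Add is amortized O(1).
procedure GrowWithList(const Items: TArray<Integer>);
var
  List: TList<Integer>;
  Item: Integer;
begin
  List := TList<Integer>.Create;
  try
    for Item in Items do
      List.Add(Item);
    // For class instances, TObjectList<T>.Create(True) additionally
    // owns its items and frees them along with the list.
  finally
    List.Free;
  end;
end;
```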

1 minute ago, Anders Melander said:

Technically it's the page file.

The terms page file and swap file have their roots in VMS.

Two names for the same thing in my eyes. But if the official title is page file then that's what we should say, I agree. 


16 hours ago, Anders Melander said:

Please google "64-bit"

Please google "swap file size limit"

 

16 hours ago, David Heffernan said:

Why? What is special about 1G?

Just a pretty large size, for example. It could be 500M or 2G - it doesn't matter.

2 minutes ago, Fr0sT.Brutal said:

Just a pretty large size, for example. It could be 500M or 2G - it doesn't matter.

I have no qualms about working with a 2G memory block under 64 bit. What exactly are you getting at? Can you explain in technical terms what problem you envisage? 

