# Best data structure to maintain a small list of items that have changing values...

## Recommended Posts

I have a small list of 200 items. Each item has a value. Over time, and with every iteration of my algorithm, each item's value changes, but only slightly (e.g. ±2%). At any given time I'd like to quickly know the item with the smallest value (I don't care about any other value). What's the best data structure to accomplish this in a speedy manner?

I initially thought of a priority queue, but I'm changing the value of each item in the list.

Thanks,

Steve

For only 200 values you would have to be running your algorithm many times a second in order to detect any difference. I would simply put all the values in an indexed array and, every time you update a value, check it against the previous lowest value; then save its index and update the lowest value if necessary. Whether this is practical depends on how many threads are simultaneously updating the values, how long the algorithm takes to run, and what the intervals are between iterations of the algorithm. I have no idea whether this is the fastest way to do it, but it is simple, and for 200 items, as I said, I would not care if it took a few microseconds longer. If microseconds really are important to you, then it is essential to test and measure, not guess; so you must ignore my advice.
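For what it's worth, a minimal sketch of that idea in Delphi (Values, MinValue, MinIndex and RescanMin are names I made up, not an existing API). The subtlety is that when the tracked minimum item itself grows, you have to rescan:

```pascal
// One comparison per update keeps the tracked minimum current; only
// when the minimum item itself grows do we pay for a full O(n) rescan.
procedure UpdateItem(I: Integer; NewValue: Double);
begin
  Values[I] := NewValue;
  if NewValue < MinValue then
  begin
    MinValue := NewValue;        // new overall minimum
    MinIndex := I;
  end
  else if I = MinIndex then
    RescanMin;                   // the old minimum grew; scan for the new min
end;
```

With ±2% changes the rescan branch should be rare, since the current minimum usually stays near the bottom.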


Thanks Tim,

The algorithm updates the items' values 100k to 1 million times a second, so speed is important. Nevertheless, you might be right; I should probably keep it simple.

Steve

Depends on what happens more often: changing the values or asking the lowest item. Getting the min of 200 unordered items takes 199 comparisons. Keeping the min item up to date on every update takes a comparison every time you update an item (either to know if the item you updated is now lower than the current min item or if it is still the min item).

Edited by Stefan Glienke

Hi Stefan,

Each item's value changes slightly on each iteration. Note also that finding the min item may take more than one comparison per change (if the min item's value changes such that it's no longer the min).

Thanks,

Steve

1 hour ago, Steve Maughan said:

Hi Stefan,

Each item's value changes slightly on each iteration. Note also that finding the min item may take more than one comparison per change (if the min item's value changes such that it's no longer the min).

Thanks,

Steve

On each iteration, each value is modified. So, during the iteration, just keep track of the smallest value that you have seen to date.

You memorize the position of the smallest value after each change.

"Only interested in the min value". So why store anything BUT the min value?

1 hour ago, Dany Marmur said:

"Only interested in the min value". So why store anything BUT the min value?

Hi Dany,

Because the previous min may change and no longer be the min. I don't want to have to iterate through all the items to find a new minimum value.

Thanks,

Steve

Thanks everyone.

It looks like a modified priority queue is the best solution. This keeps track of the minimum value and self-balances when an internal node's value changes.

Cheers,

Steve
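For reference, a minimal, untested sketch of such a structure in Delphi: a binary min-heap over the items plus a position index, so a changed value can be sifted up or down in O(log n) and the minimum read in O(1). All names here (TMinHeap, ChangeValue, etc.) are my own invention, not a library API:

```pascal
type
  TMinHeap = class
  private
    FValues: array of Double;   // heap-ordered values
    FItems: array of Integer;   // item id stored at each heap slot
    FPos: array of Integer;     // FPos[Item] = slot of that item in the heap
    procedure Swap(A, B: Integer);
    procedure SiftUp(I: Integer);
    procedure SiftDown(I: Integer);
  public
    constructor Create(const Initial: array of Double);
    procedure ChangeValue(Item: Integer; NewValue: Double);
    function MinItem: Integer;  // O(1): the item at the root
  end;

constructor TMinHeap.Create(const Initial: array of Double);
var
  I: Integer;
begin
  SetLength(FValues, Length(Initial));
  SetLength(FItems, Length(Initial));
  SetLength(FPos, Length(Initial));
  for I := 0 to High(Initial) do
  begin
    FValues[I] := Initial[I];
    FItems[I] := I;
    FPos[I] := I;
  end;
  for I := Length(Initial) div 2 - 1 downto 0 do
    SiftDown(I);                // standard O(n) heapify
end;

procedure TMinHeap.Swap(A, B: Integer);
var
  TmpV: Double;
  TmpI: Integer;
begin
  TmpV := FValues[A]; FValues[A] := FValues[B]; FValues[B] := TmpV;
  TmpI := FItems[A];  FItems[A] := FItems[B];  FItems[B] := TmpI;
  FPos[FItems[A]] := A;         // keep the position index in sync
  FPos[FItems[B]] := B;
end;

procedure TMinHeap.SiftUp(I: Integer);
begin
  while (I > 0) and (FValues[I] < FValues[(I - 1) div 2]) do
  begin
    Swap(I, (I - 1) div 2);
    I := (I - 1) div 2;
  end;
end;

procedure TMinHeap.SiftDown(I: Integer);
var
  C: Integer;
begin
  while 2 * I + 1 < Length(FValues) do
  begin
    C := 2 * I + 1;             // left child
    if (C + 1 < Length(FValues)) and (FValues[C + 1] < FValues[C]) then
      Inc(C);                   // pick the smaller child
    if FValues[I] <= FValues[C] then
      Break;
    Swap(I, C);
    I := C;
  end;
end;

procedure TMinHeap.ChangeValue(Item: Integer; NewValue: Double);
begin
  FValues[FPos[Item]] := NewValue;
  SiftUp(FPos[Item]);           // moves the node if it got smaller
  SiftDown(FPos[Item]);         // moves the node if it got larger
end;

function TMinHeap.MinItem: Integer;
begin
  Result := FItems[0];
end;
```

Whether the O(log n) update beats a plain linear scan over 200 items is exactly the kind of thing that needs measuring; the scan has much friendlier cache behaviour.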

1 hour ago, Steve Maughan said:

It looks like a modified priority queue is the best solution. This keeps track of the minimum value and self-balances when an internal node's value changes.

No way - a priority queue that rebalances when any value changes takes way more work than n-1 comparisons.

I am pretty sure that nothing beats the following code, performance- or simplicity-wise:

```
for i := Low(data) to High(data) do
begin
  // update data[i]
  if (i = 0) or (data[i] < min) then
    min := data[i];
end;
```

Edited by Stefan Glienke

I think it should be

`min := data[i];`

But wouldn't this be faster? Why check (i = 0) 200 times?

```
min := data[0];
for i := Low(data) + 1 to High(data) do
begin
  if (data[i] < min) then
    min := data[i];
end;
```


Typo, and for whatever reason it did not let me edit it. Edit: now it worked; fixed.

If you can prove that checking i against 0 has any negative impact on performance, then sure, go for your version. My point still stands regardless.

Edited by Stefan Glienke

`if data[poscurr] < data[posmin] then posmin := poscurr;`

It seems that you need an implementation of the abstract data type "priority queue" with operations like:

- create

- change key (decrease and/or increase)

- insert

- delete (only item with min key or any item?)

- findmin

The “best” implementation depends on the number of times each operation is called.

Furthermore, if you know something about the range of the possible keys you might use that information to speedup your implementation.

Within the area of shortest path calculations, where the arc costs are integers bounded by a constant C, you might use Dial's bucket implementation.

In case of non-integer keys, or when you encounter many empty buckets, you can search for multilevel bucket implementations or buckets that contain a range of keys.
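A hedged sketch of Dial's bucket idea in Delphi, for integer keys in 0..MaxKey (all names here are my own, not an existing API): one list of items per key value, plus a cursor that scans forward to the first non-empty bucket.

```pascal
uses
  System.Generics.Collections;

type
  TDialBuckets = class
  private
    FBuckets: array of TList<Integer>;  // FBuckets[K] holds items with key K
    FCursor: Integer;                   // smallest key that may be non-empty
  public
    constructor Create(MaxKey: Integer);
    procedure Insert(Item, Key: Integer);
    function ExtractMin: Integer;       // assumes at least one item is present
  end;

constructor TDialBuckets.Create(MaxKey: Integer);
var
  K: Integer;
begin
  SetLength(FBuckets, MaxKey + 1);
  for K := 0 to MaxKey do
    FBuckets[K] := TList<Integer>.Create;
  FCursor := 0;
end;

procedure TDialBuckets.Insert(Item, Key: Integer);
begin
  FBuckets[Key].Add(Item);
  if Key < FCursor then
    FCursor := Key;                     // the minimum moved backwards
end;

function TDialBuckets.ExtractMin: Integer;
begin
  while FBuckets[FCursor].Count = 0 do
    Inc(FCursor);                       // skip empty buckets
  Result := FBuckets[FCursor][FBuckets[FCursor].Count - 1];
  FBuckets[FCursor].Delete(FBuckets[FCursor].Count - 1);
end;
```

Insert and extract are O(1) apart from the cursor walk, which is why this works so well when keys are small integers; with ±2% floating-point changes you would first need to quantise the values into buckets.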

Again, since there are a lot of possible implementations I can't provide a single answer. But you might search for shortest path implementations; a lot of that literature is about the priority queue.

Just one other idea, derived from shortest path algorithms: on large scale graphs (10M nodes) they usually visit only a few nodes (~10K). Then it might make sense not to create a priority queue that reserves memory for the keys of all nodes. A better idea might be to implement an ISet, which can easily be done using a dictionary.

Edited by Henk Post

IMHO what is lacking in this thread is a description of your actual input. Do you get 200 values every "input iteration" or just some of them? And are you getting deltas as input (key; delta value)?

On 6/24/2019 at 8:11 AM, Stefan Glienke said:

No way - a priority queue that balances when any value changes takes way more work than n-1 comparisons

I am pretty sure that nothing beats the following code performance nor simplicity wise:

```
for i := Low(data) to High(data) do
begin
  // update data[i]
  if (i = 0) or (data[i] < min) then
    min := data[i];
end;
```

HI Stefan,

You may well be right. I need to test.

Thanks,

Steve