## Recommended Posts

Hello, I hope this is the correct section for my question; I need some help.

It is easy for me to translate this pseudo code into Delphi; I have already done it. Basically: given 2 integers m and n (both >= 0), it checks whether one number is a digit permutation of the other.

Like 915 and 159 ARE permutations (just swap some digits), while 911 and 19 are not.

```
perm(m, n)
  # Use an array to count the difference in the number of occurrences
  # of each digit between the two numbers. Note that we do not need to
  # count the 0's, as a different number of 0's can be compensated by
  # putting 0's in front.
  # D[i] = difference count for digit i
  for i = 1 to 9
    D[i] = 0

  while (m > 0)
    # compute the last digit of m
    d = m mod 10
    if (d > 0)
      D[d]++
    m = m div 10

  while (n > 0)
    # compute the last digit of n
    d = n mod 10
    if (d > 0)
      D[d]--
    n = n div 10

  # return True if D contains all zeros
  i = 1
  while ((i <= 9) and (D[i] == 0))
    i++

  return (i > 9)
```

This pseudo code is good, and my Delphi code works because I used it as a basis.
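For anyone who wants to experiment with the algorithm outside Delphi, here is a minimal runnable Python sketch of the same pseudo code (the function name `perm` comes from the pseudo code; the rest of the wording is mine):

```python
def perm(m: int, n: int) -> bool:
    """Return True if m and n are digit permutations of each other,
    ignoring zeros (a different number of 0's can be compensated by
    leading zeros, as the pseudo code's comment explains)."""
    d = [0] * 10  # d[i] = difference count for digit i

    while m > 0:
        digit = m % 10      # last digit of m
        if digit > 0:
            d[digit] += 1
        m //= 10            # drop the last digit

    while n > 0:
        digit = n % 10      # last digit of n
        if digit > 0:
            d[digit] -= 1
        n //= 10

    # the numbers are permutations iff every difference is zero
    return all(c == 0 for c in d)

print(perm(915, 159))  # True
print(perm(911, 19))   # False
```

Translating this back to Delphi is a line-for-line exercise, since it uses only `mod`, `div` and an array.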

QUESTION:

I am not expert but I'd like to analyze the complexity of the pseudo code. Is it O(n)?

Because there are 3 loops, and each of them cycles n or m times, so it is linear. Am I correct? I hope this is the right section for my question!

Posted (edited)

The complexity is linear with the number of digits. In this case it's probably much easier to write the code if you convert each number to text.

You then need an array of counts for each digit. Count the digits. Finally check that each digit has the same number of occurrences.
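A hedged sketch of that text-based approach in Python (`is_digit_permutation` is my own name for it; `collections.Counter` does the tallying):

```python
from collections import Counter

def is_digit_permutation(m: int, n: int) -> bool:
    # Convert each number to text, strip the zeros (so leading zeros
    # can compensate, as in the original algorithm), and compare the
    # per-digit occurrence counts.
    count_m = Counter(str(m).replace('0', ''))
    count_n = Counter(str(n).replace('0', ''))
    return count_m == count_n

print(is_digit_permutation(915, 159))  # True
print(is_digit_permutation(911, 19))   # False
```

In Delphi the same idea would be an `array['1'..'9'] of Integer` filled from the string returned by `IntToStr`.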

In practical terms, complexity arguments aren't actually going to be relevant here. It's the constant of proportionality that will matter most in performance.

Edited by David Heffernan

Posted (edited)

The first two loops are O(log m) and O(log n): each iteration removes one decimal digit, so the number of iterations equals the number of digits. The last loop runs at most 9 times, and that bound of 9 is "asymptotically equal" to O(1). Overall you have O(log m + log n + 1), where the 1 is a constant, so only m and n matter. That complexity is O(log m + log n), or, to be precise, O(log max{m, n}), where max gives you the bigger of m and n. Do not assume that loops are O(n) by default; that would be a wrong assumption! Example here: https://stackoverflow.com/questions/10039098/why-is-this-function-loop-olog-n-and-not-on
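You can see the logarithmic behaviour of the digit loops empirically. This small Python check (`digit_loop_iterations` is a name I made up) counts the iterations of the "m = m div 10" loop and compares them with floor(log10 m) + 1, the number of decimal digits:

```python
import math

def digit_loop_iterations(m: int) -> int:
    # Count how many times the "m = m div 10" loop body runs.
    steps = 0
    while m > 0:
        m //= 10
        steps += 1
    return steps

for m in (9, 1234, 10**6, 987654321):
    expected = math.floor(math.log10(m)) + 1  # number of decimal digits
    print(m, digit_loop_iterations(m), expected)
```

Doubling the number of digits doubles the iterations, but doubling the *value* of m adds at most one iteration: that is exactly what O(log m) means here.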

Look this example:

```
var x: double;

// x = 1000 maybe
while (x > 1) do
  x := x * 0.3;
```

You are executing this loop some number of times, say n. After n iterations x has become x * (0.3^n), and the loop stops once that value is no longer bigger than 1. To find n you have to solve the equation x * (0.3^n) = 1, and since n is an exponent you use logarithms to bring it down:

x * (0.3^n) = 1 -> log(0.3^n) = log(1/x) -> n * log(0.3) = log(1/x) -> n = log(x) / log(1/0.3)

So the number of iterations is logarithmic in the starting value x, i.e. O(log x). This is probably not the place to prove it in full, but if you understand math topics like "limits" and "asymptotes" you can google this and see that there are methods like substitution, the master theorem, or recursion trees to derive the O(something) notation.
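The derivation can be checked empirically. This Python sketch (`decay_steps` is my own name) simulates the loop and compares the iteration count with the closed form n = ceil(log x / log(1/0.3)):

```python
import math

def decay_steps(x: float) -> int:
    # Count iterations of: while x > 1 do x := x * 0.3
    steps = 0
    while x > 1:
        x *= 0.3
        steps += 1
    return steps

for x0 in (1000.0, 1e6, 1e12):
    # smallest n with x0 * 0.3**n <= 1, from the logarithm derivation
    predicted = math.ceil(math.log(x0) / math.log(1 / 0.3))
    print(x0, decay_steps(x0), predicted)
```

Multiplying the starting value by 1000 only adds a handful of iterations, which is the signature of logarithmic growth.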

Edited by Alberto Miola

Sorry, but I don't see a correlation between the result of a loop operation and its complexity. (Edith says: please don't) Consider this:

```
var x: double;

// x = 1000 maybe
while (x > 1) do
  x := x + x;

while (x > 1) do
  x := 2 * x;
```

Both loops will be O(n).

Edith says: No they won't. Stupid C&P error, and lack of basic understanding. Oy vey!

Posted (edited)
41 minutes ago, Sherlock said:

Sorry, but I don't see a correlation between the result of a loop operation and its complexity. Consider this:

```
var x: double;

// x = 1000 maybe
while (x > 1) do
  x := x + x;

while (x > 1) do
  x := 2 * x;
```

Both loops will be O(n).

No, in your example both loops will be O(INFINITE).

Well, not quite. It would be O(MaxDouble / x) to be precise.

Edited by M.Joos

The code needs at most log(maxint) steps for each of the first two loops and at most 9 steps for the third loop. As the integer size is constant for a given platform, this is just O(1).


2 hours ago, M.Joos said:

No, in your example both loops will be O(INFINITE).

Well, not quite. It would be O(MaxDouble / x) to be precise.

Yes, you are right. Sorry about that. Typical lazy c&p.

Posted (edited)
4 hours ago, Sherlock said:

Sorry, but I don't see a correlation between the result of a loop operation and its complexity. Consider this:

```
var x: double;

// x = 1000 maybe
while (x > 1) do
  x := x + x;

while (x > 1) do
  x := 2 * x;
```

Both loops will be O(n).

Those loops simply cannot be valued in big-O terms, because O(infinite) does not exist. By definition, f(n) = O(g(n)) if |f(n)| <= K * |g(n)| for some constant K and all n > n0; this means that O(something) describes a function f that eventually stays below a constant multiple of another function g, starting from some point n0. But f and g are both REAL-valued functions, and infinity is not a real number, so a non-terminating loop is neither O(n) nor O(infinite); the latter gives the idea, but it is mathematically wrong. I don't want to look boring with math things, so references are here: https://en.wikipedia.org/wiki/Big_O_notation and https://en.wikipedia.org/wiki/Taylor_series (it explains better than me!! 🙂 )

Regarding the loop question: there is a correlation between the "result" of a loop operation and the complexity. It is not the result itself, but the threshold (like 1 in this case) that determines the stopping point of the asymptotic evaluation. I can't explain this topic well in English, but if you have the math skills you can see it from the definition of O(n) (and the same applies to Ω(n) and Θ(n)).

Edited by Alberto Miola

Posted (edited)

OK, I do not understand; I have only basic math knowledge.

Are there basic rules for working out the complexity, like telling whether it is O(n) or O(log n)?

Edited by Roberto Dall'Angelo

Can I ask why you are interested in complexity of this algorithm? If it is for the sake of learning, fair enough, but there are far better examples to teach you. If it is for a real application, complexity is the wrong question to ask.

2 hours ago, Roberto Dall'Angelo said:

Are there basic rules for working out the complexity, like telling whether it is O(n) or O(log n)?

If we are talking about time complexity (which is the usual case) and not space complexity, you can think of O() as telling you how much slower your code becomes as the input grows. Very roughly put, if the running time is O(f(n)), then growing the input from m elements to k*m elements grows the running time by about a factor of f(k*m)/f(m).

If your code needs time M to process m elements and you now run it on 20*m elements, O(1) will do that in time M, O(n) in time 20*M, and O(n^2) in time 400*M. For O(log n) the time only grows from log m to log(20*m) = log m + log 20, so it will be barely more than M.
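Those growth factors can be computed directly. Here is a small Python sketch (the table of complexity classes is my own illustration) that evaluates how much each class slows down when the input grows 20x; note that for O(log n) the factor is log(20*m)/log(m), which is close to 1 for large m:

```python
import math

m = 10_000   # baseline input size
growth = 20  # how much the input grows

# representative operation-count function for each complexity class
classes = {
    "O(1)": lambda n: 1,
    "O(log n)": lambda n: math.log(n),
    "O(n)": lambda n: n,
    "O(n^2)": lambda n: n * n,
}

for name, f in classes.items():
    factor = f(growth * m) / f(m)  # slowdown factor for this class
    print(f"{name}: ~{factor:.1f}x slower")
```

The linear and quadratic factors (20x and 400x) are independent of m, while the logarithmic factor shrinks toward 1 as m grows: that is why logarithmic algorithms scale so well.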