Project Euler is a website that lists mathematical problems which its users solve with all sorts of methods implemented in all sorts of programming languages. Because each language differs slightly (or majorly) in how you approach writing it, sharing solutions to these problems across languages often turns up more refined or efficient approaches that you may not have originally considered, which I find to be exceptionally fun.
I've been using this website since back when I was in high school to pass the time when I had access to a computer at the labs; portable compilers and online interpreters are easy enough to find that even on a system without administrative access you can very quickly get one running and start working away at the problems. What I especially liked about it was that because the problems are relatively straightforward and mathematical in nature, you can develop algorithms for solving them rather quickly and can even revise your methods away from the computer while doing other things. Further, because each problem is self-contained you don't need to worry about sinking an enormous amount of time into any given project or about dependencies, as all you really need is a way to print output and basic language constructs.
Lately as we've been going into the winter months there's been significantly less work to do at the camp, and as a result I often get a lot of time where I'd otherwise be standing around or doing nothing particularly useful. Because my boss is cool he lets me fuck around on the maintenance department computer so I decided to try my hand at solving these again since I'm required to physically be here to get paid even if I'm not doing anything useful. I also thought it would be neat to have an excuse to think about more complex mathematical things again since I don't often have a reason to do so normally.
To start off the thread I solved the first two problems in C which were very simple, so in order to challenge myself a bit I also wrote implementations in the FALSE programming language, probably my favorite stack-based esoteric language. I'll only share the FALSE programs as the C programs are rather trivial.
Problem 1:
1i:
0s:
[1000 3i;*>]
[
s;3i;*+ s:
1000 5i;*>
5i;* 3/ 3* 5i;* =~
&[s;5i;*+ s:]?
i;1+i:
]
#
s;.
Problem 2:
0s:
2 1
[$ 4000000 \>]
[
$$ 2/ 2*
=[$s;+ s:]?
\$@+\
]
#
s;.
Both problems deal with determining divisibility, the first in terms of factors and the second in terms of parity. Because FALSE has no modulo operator and can only operate on integers, I decided the easiest approach would be to divide by the check number and then multiply by it again: since integer division truncates any remainder, multiplying back only reproduces the original number when it is evenly divisible by the check. I also liked how the second program turned out, as I thought it was especially cute that I was able to reduce the Fibonacci step on the stack to simply \$@+\, something that takes four lines of C to accomplish.
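In C terms, the two tricks I'm leaning on look roughly like this (just an illustration of the idea, not the actual solutions):
#include <stdio.h>
int main() {
    /* divide-then-multiply divisibility check: integer division truncates,
       so (n / d) * d == n only when d divides n evenly */
    int n = 16, d = 3;
    printf("%d divisible by %d? %d\n", n, d, (n / d) * d == n);  /* prints 0 */
    /* the \$@+\ Fibonacci step spelled out: advance the pair (a, b) to (b, a + b) */
    int a = 1, b = 2;
    int next = a + b;
    a = b;
    b = next;
    printf("next pair: %d %d\n", a, b);  /* prints 2 3 */
    return 0;
}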
Please feel free to share your algorithms and implementations if you decide to give it a try! It's a very fun thing to collaborate with others on and I'm excited to see what others here come up with.
solved the fourth problem after thinking about it for about a day, figured i would write about it since i hit a pretty notable dead-end while analyzing the problem. at first i wondered if i could avoid constructing the palindromic numbers at all by writing the product as a multiplication of trinomials built from the digits of each factor and setting that equal to a system of equations describing a palindrome, but after staring at that deconstruction for a good hour i couldn't really come to any reasonable conclusions.
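(for reference, that deconstruction would have looked something like (100a + 10b + c)(100d + 10e + f) = 100001x + 10010y + 1100z, with x ranging 1-9 and the other digits 0-9; expanding the left side gives a pile of cross terms that i couldn't pin down into anything useful)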
while lying in bed at midnight listening to 130 db bird sound ear drill asmr it occurred to me that i could just construct the set of palindromic numbers backwards from the nearest one below 999*999, and for each number test factors from 100 up to its integer square root, dividing each factor into the number to get a second factor. if that second factor is three digits, the first hit going backwards through the palindromes is the highest palindrome composed of two three-digit factors.
i did this and it worked, though originally i did not include 0 in the two interior loops, which made the result incorrect. the wrong answer was 888888, which would be pretty cash money if it were actually the answer, but it is not. here is my approach:
#include <stdio.h>
#include <math.h>
int main() {
    /* walk six-digit palindromes in decreasing order: i is the outer digit
       (1-9), j and k the inner digits (0-9) */
    for(int i = 9; i >= 1; --i) {
        for(int j = 9; j >= 0; --j) {
            for(int k = 9; k >= 0; --k) {
                int n = i * 100000 + j * 10000 + k * 1000 + k * 100 + j * 10 + i;
                if(n > 999*999) continue;
                int n_sqrt = (int)sqrt(n);
                for(int x = 100; x <= n_sqrt; ++x) {
                    /* first palindrome with a three-digit cofactor wins */
                    if(n % x == 0 && (n / x >= 100 && n / x <= 999)) {
                        printf("%i: %i * %i\n", n, x, n / x);
                        return 0;
                    }
                }
            }
        }
    }
    return 0;
}
might write a FALSE version but idk i wanna get into more complicated problems faster
felt like a crutunkulous shnunk so i solved problem 12 in FALSE with the added condition that i could not use any variables. features isqrt using heron's method and my patented factor check from the original op post !!!!
6 3 4
[$500\>]
[
%1+$@+
$$2/0
[$20\>]
[
@@$@$@/ @+ 2/
@1+
]#%
0\
[$0>]
[
@$@$@\/
\$@*@$@=
[@2+@@]?
@@1-
]#%\%@\
]#@.
(you could conceivably optimize this to run faster by making the isqrt portion keep track of the last result and exit the loop once the series converges ............. however i am tired of pressing the @ key so i did not do this. additionally there are conditions where the series does not properly converge on the domain of natural numbers so you would need an exit state that would quit after n iterations either way)
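for anyone who doesn't feel like mentally tracing the stack juggling, here's roughly the same idea sketched in C (a loose translation, not a 1:1 mapping of the FALSE program):
#include <stdio.h>
int main() {
    long long tri = 0;
    for (long long n = 1; ; ++n) {
        tri += n;  /* n-th triangle number */
        /* integer square root by heron's method with a fixed iteration count,
           like the FALSE loop (a convergence check would let it exit early) */
        long long x = tri / 2 + 1;
        for (int it = 0; it < 20; ++it)
            x = (x + tri / x) / 2;
        while (x * x > tri) --x;  /* heron can settle one too high */
        /* count divisors up to the square root; each divisor d pairs with tri / d,
           using the divide-then-multiply check instead of % */
        int divisors = 0;
        for (long long d = 1; d <= x; ++d) {
            long long q = tri / d;
            if (q * d == tri)
                divisors += (q == d) ? 1 : 2;
        }
        if (divisors > 500) {
            printf("%lld\n", tri);
            break;
        }
    }
    return 0;
}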
These are fun! FALSE looks interesting, I'll have to give it a shot for one of these. Also didn't know it was made by the same guy that made Sauerbraten, neato
For shits and giggles I wrote a little Perl one-liner for Problem 14, longest collatz sequence:
$_=sub{my$n=pop;my$l=1;$l++while$n!=1&&($n=$n%2?3*$n+1:$n/2);return$l};for my$t(1..1e6){my$l=$_->($t);($\,$m)=($t,$l)if$l>$m}print"";
i was just working on solving that problem last night, though like a complete charlatan i forgot to keep track of the value that starts the longest chain rather than just the chain length itself, so i had to sorta hack it into the program after i finished writing all of the logic (i actually finished it the following day because i was tired :3). here it is in FALSE:
1 1 1
[$1000000\>]
[
$1\
[$1=~]
[
$2/2*\$@=
$[\2/\]?
~[3*1+]?
\1+\
]#%
@$@$@>
$[%\%@%\$@@ 1_]?
~[%\]?
1+
]#@.
seeing that little one-liner is a real blast from the past; i haven't seen anything written in perl for so long that i often forget it even exists. it's always incredible to me how languages that are so popular at any given time can fall out of general practice so quickly, especially over the last two decades. i do highly recommend giving FALSE a try, it's a lot of fun!
the next solution i poast will be in a totally different language from C or FALSE to keep things mixed up, gotta decide which ..
had a bunch of time on a plane with nothing to do, so i fucked with this more involved solution to problem 14 for a while. even though conceptually i thought using a hash map would shorten the calculation time for each entry, for some reason it ended up taking longer to evaluate than if i just returned 0 from hmap_read. really not sure why; i fucked with a few parameters to see if i was missing something obvious, but the bottleneck seems to be in hmap_read, which i cannot understand. here is the source anyhow because i'm no longer interested in trying to resolve this
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define HMAP_SIZE 16384
#define HMAP_DEF_LEN 4

typedef struct {
    uint64_t*** map;
    uint64_t* lengths;
} hmap_t;

/* fold the four 16-bit chunks of the key together, then reduce mod table size */
int _hash(uint64_t n) {
    return ((uint16_t)n ^ *((uint16_t*)(&n)+1)
            ^ *((uint16_t*)(&n)+2) ^ *((uint16_t*)(&n)+3))
            % HMAP_SIZE;
}

hmap_t* hmap_init();
void hmap_write(hmap_t*, uint64_t, uint64_t);
uint64_t hmap_read(hmap_t*, uint64_t);
void hmap_free(hmap_t*);
int main() {
    hmap_t* map = hmap_init();
    uint64_t lseq = 0, lseq_count = 0;
    const uint64_t SEARCH = 1000000;
    for(uint64_t i = 1; i < SEARCH; ++i) {
        uint64_t count = 1;
        for(uint64_t j = i; j > 1;) {
            uint64_t length;
            if((length = hmap_read(map, j)) == 0) {
                if(j % 2 == 0)
                    j /= 2;
                else
                    j = 3 * j + 1;
                ++count;
            } else {
                /* the cached length already counts j itself, which count
                   also includes at this point, so don't count it twice */
                count += length - 1;
                break;
            }
        }
        /* only cache the first TOLERANCE starting values */
        const uint64_t TOLERANCE = HMAP_SIZE * 4;
        if(i < TOLERANCE)
            hmap_write(map, i, count);
        if(count > lseq_count) {
            lseq = i;
            lseq_count = count;
        }
    }
    printf("%lu: %lu (%lu)\n",
        lseq, lseq_count, clock()/CLOCKS_PER_SEC);
    hmap_free(map);
    return 0;
}
hmap_t* hmap_init() {
    hmap_t* map = malloc(sizeof(hmap_t));
    map->map = malloc(HMAP_SIZE * sizeof(uint64_t**));
    for(uint64_t i = 0; i < HMAP_SIZE; ++i) {
        /* each bucket starts with HMAP_DEF_LEN zeroed key/value pairs */
        map->map[i] = malloc(HMAP_DEF_LEN * sizeof(uint64_t*));
        for(uint64_t j = 0; j < HMAP_DEF_LEN; ++j)
            map->map[i][j] = calloc(2, sizeof(uint64_t));
    }
    map->lengths = malloc(HMAP_SIZE * sizeof(uint64_t));
    for(uint64_t i = 0; i < HMAP_SIZE; ++i)
        map->lengths[i] = HMAP_DEF_LEN;
    return map;
}

void hmap_write(hmap_t* map, uint64_t key, uint64_t value) {
    uint64_t k = _hash(key);
    /* take the first empty slot in the bucket */
    for(uint64_t i = 0; i < map->lengths[k]; ++i) {
        if(map->map[k][i][0] == 0) {
            map->map[k][i][0] = key;
            map->map[k][i][1] = value;
            return;
        }
    }
    /* bucket full: double its capacity and write into the first new slot */
    map->map[k] = realloc(map->map[k],
        map->lengths[k] * 2 * sizeof(uint64_t*));
    for(uint64_t i = map->lengths[k]; i < map->lengths[k] * 2; ++i)
        map->map[k][i] = calloc(2, sizeof(uint64_t));
    map->map[k][map->lengths[k]][0] = key;
    map->map[k][map->lengths[k]][1] = value;
    map->lengths[k] *= 2;
}

uint64_t hmap_read(hmap_t* map, uint64_t key) {
    uint64_t k = _hash(key);
    for(uint64_t i = 0; i < HMAP_DEF_LEN; ++i) {
        if(map->map[k][i][0] == key)
            return map->map[k][i][1];
        else if(map->map[k][i][0] == 0)
            return 0;
    }
    return 0;
}
void hmap_free(hmap_t* map) {
    for(uint64_t i = 0; i < HMAP_SIZE; ++i) {
        /* free each key/value pair before the bucket that holds them */
        for(uint64_t j = 0; j < map->lengths[i]; ++j)
            free(map->map[i][j]);
        free(map->map[i]);
    }
    free(map->map);
    free(map->lengths);
    free(map);
}
Playing around with this code I was able to shave off some execution time by using a simple identity hash function in place of _hash(); ig we can get away with this since your keys are consecutive integers? Removing the redundant index in hmap_read() also helped some (and more than I'd expected; it's definitely faster than returning 0):
int _hash(uint64_t n) {
    return (int)(n % HMAP_SIZE);
}
...
uint64_t hmap_read(const hmap_t* map, uint64_t key) {
    const uint64_t k = _hash(key);
    for(uint64_t i = 0; i < HMAP_DEF_LEN; ++i) {
        const uint64_t curr_key = map->map[k][i][0];
        if(curr_key == key)
            return map->map[k][i][1];
        else if(curr_key == 0)
            return 0;
    }
    return 0;
}
After giving it some thought I gave it a go using a fixed array for caching sequence lengths instead of a hash map, figuring we don't want any heap allocations (slow) and if we're mapping values to consecutive ints anyways, we might as well use an array. Even when the terms exceed what we're able to cache, it doesn't take too many iterations until the sequence "trickles down" to the point we can query the cache again. This is the whole program, and it's fast!
#include <stdint.h>
#include <stdio.h>
#define SEARCH 1000000
int main() {
    /* static so the 4 MB cache sits in the data segment instead of the stack */
    static int cache[SEARCH];
    int lseq = 0, lseq_count = 0;
    for (int i = 1; i < SEARCH; ++i) {
        int count = 1;
        for (int64_t j = i; j > 1;) {
            /* once the sequence drops below SEARCH we can reuse a cached length */
            if (j < SEARCH && cache[j] != 0) {
                count += cache[j] - 1;
                break;
            }
            j = j % 2 ? 3 * j + 1 : j / 2;
            ++count;
        }
        cache[i] = count;
        if (count <= lseq_count)
            continue;
        lseq_count = count;
        lseq = i;
    }
    printf("%d: %d\n", lseq, lseq_count);
    return 0;
}
while i was poking around with gdb at runtime (the main reason i added the TOLERANCE portion to my original program) it did seem inappropriate to use a hashmap for linear writes, though i originally supposed i might be able to save both space and time with a cache smaller than the search breadth. looking at your array implementation, though, i do think wrapping the search breadth onto the size of the cache array and simply reassigning slots as the search continues past the bounds of the array (perhaps with a key-value pair struct instead of a bare int, so you know which number a slot currently holds) would still allow for quick calculation even if the search breadth were increased by several orders of magnitude. as the initial value increases you're more likely to hit higher numbers in the chain first, which rolls the lower numbers into the counts as the sequence fans out.
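something like this rough sketch is what i mean (the SEARCH and CACHE_SIZE numbers here are made up just to illustrate the wraparound):
#include <stdint.h>
#include <stdio.h>
#define SEARCH 10000000ULL   /* e.g. 10x the original breadth */
#define CACHE_SIZE 1000000   /* fixed cache, much smaller than the breadth */
/* direct-mapped cache: slot j % CACHE_SIZE remembers which starting value
   it currently holds, so stale entries just get overwritten as we go */
typedef struct { uint64_t key; uint64_t len; } entry_t;
static entry_t cache[CACHE_SIZE];
int main() {
    uint64_t lseq = 0, lseq_count = 0;
    for (uint64_t i = 1; i < SEARCH; ++i) {
        uint64_t count = 1;
        for (uint64_t j = i; j > 1;) {
            entry_t* e = &cache[j % CACHE_SIZE];
            if (e->key == j) {        /* hit: reuse the stored tail length */
                count += e->len - 1;
                break;
            }
            j = j % 2 ? 3 * j + 1 : j / 2;
            ++count;
        }
        cache[i % CACHE_SIZE] = (entry_t){ i, count };  /* overwrite whatever was there */
        if (count > lseq_count) {
            lseq_count = count;
            lseq = i;
        }
    }
    printf("%llu: %llu\n", (unsigned long long)lseq, (unsigned long long)lseq_count);
    return 0;
}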
thank you for the code review and considerations _kp! i've gotten quite rusty not having written anything serious for years so it's very nice to have someone much more attuned to the discipline chime in