December 2019
I haven't been around long enough to know whether it's usually the same user who opens the discussion thread for a new Ponder This challenge. The new task is already available online at this link:
[URL="https://www.research.ibm.com/haifa/ponderthis/challenges/December2019.html"]Approximation of |x| in [-1,1] using no more than 15 operations of {+,-,*,/}[/URL] This time it is actually a routine approximation task that can be solved with known methods. I also think I know how to do it. 
[QUOTE=yae9911;532031]The new task is already available online at this link:[/QUOTE]
Did I misunderstand something about the challenge/example? I thought the error would be calculated as half (due to interval length 2) of: [CODE]sage: integral( (abs(x) - (x+1)/2)^2, (x, -1, 1))
1/3[/CODE]But 1/6 does not equal 0.125? Also, it is not completely clear whether the literals in the expression can be integers or rationals/reals (probably no difference between the latter two, unless some expression achieves exactly 0.0001 with optimal non-rational constants...) 
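For anyone without Sage at hand, the same check can be sketched in plain Python with a midpoint rule (function names here are my own, hypothetical helpers):

```python
# Midpoint-rule check that the mean squared error of (x+1)/2 against abs(x)
# on [-1, 1] is 1/6, not the 0.125 quoted in the challenge example.
def mse_midpoint(f, g, a, b, n=200_000):
    """Approximate (1/(b-a)) * integral of (f(x)-g(x))^2 via the midpoint rule."""
    h = (b - a) / n
    total = sum((f(a + (i + 0.5) * h) - g(a + (i + 0.5) * h)) ** 2
                for i in range(n))
    return total * h / (b - a)

mse = mse_midpoint(abs, lambda x: (x + 1) / 2, -1.0, 1.0)
print(mse)  # ≈ 0.166666... = 1/6
```

The midpoints never hit the kink of abs(x) at 0, so the rule converges cleanly on each smooth piece.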
[QUOTE=uau;532042]Did I misunderstand something about the challenge/example? I thought the error would be calculated as half (due to interval length 2) of:
[CODE]sage: integral( (abs(x) - (x+1)/2)^2, (x, -1, 1))
1/3[/CODE]But 1/6 does not equal 0.125? [/QUOTE] I was struggling with the same thing. I'm hoping it is a typo in the provided example. 
[QUOTE=bsquared;532045]I was struggling with the same thing. I'm hoping it is a typo in the provided example.[/QUOTE]
They have started updating the site only once a month. So, good luck... 
I have noticed the same "problem" for the MSE of the example function, but I forgot to mention this concern in my submission to the challenge team. If the team continues this new tradition of never updating the web page, reporting bugs is useless anyway.
The problem itself is famous in approximation theory, and one classical publication about it appeared more than 100 years ago, in 1913. 
So what's the proper way to calculate the error then?

Maybe they used pari :rofl:
[CODE]gp > intnum(x=-1,0,-x)+intnum(x=0,1,x)
%1 = 1.0000000000000000000000000000000000000
gp > intnum(x=-1,1,abs(x))
%2 = 0.99933543872387558196994670871164968171
gp >[/CODE] :rofl: 
The behavior of PARI's intnum is not unexpected, because of the discontinuity of the derivative of abs(x) at zero. _num_ means numerical integration, and unless the method automatically splits the interval, it will try to approximate the integrand by smooth functions or some other approximation. If you help intnum by splitting the interval, everything works fine:
[CODE](intnum(x=-1,0,(abs(x) - (x+1)/2)^2) + intnum(x=0,1,(abs(x) - (x+1)/2)^2)) / 2[/CODE] gives the result 0.166666666666666666666666666666666666666666... PARI can also do formal integration for certain integrands:
[CODE]intformal((x - (x+1)/2)^2)  = 1/12*x^3 - 1/4*x^2 + 1/4*x
intformal((-x - (x+1)/2)^2) = 3/4*x^3 + 3/4*x^2 + 1/4*x[/CODE]Just plug in the integration limits and you get the same result, hopefully identical to the one obtained by pencil and paper. Summary: PARI's intnum is perfectly suitable to determine whether a solution meets the < 0.0001 criterion. 
Yeah, that is what I was doing, if you look carefully... the post was a joke (and you do not need abs in the expansion; you can directly use -x and x, or multiply the interval by -1 and use two integrals on 0 to 1). However, I do not understand this part about the final division by 2 that you, uau, and b2 are talking about. Why should you divide by the interval? The mean is for ALL the domain, not only for pieces of the domain. That is the reason for it: if your interval is longer, your approximation should be for all the interval, not per unit of interval. Otherwise the MSE makes no sense. In this light, their example has an MSE of 0.33333, that's it. They have a typo either in the value or in the function chosen, or used a wrong tool to calculate it. Which is also confirmed by Excel (except that Excel shows the "fitness" function of the trendlines, R^2, which is complementary to the MSE; it shows how well the two graphs fit together)
By the way, is this a puzzle, or what? A polynomial approximation is guaranteed by the [URL="https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem"]Weierstrass theorem[/URL], and for the 4th degree you have exactly 14 operations to write the full polynomial. Put it in Excel, make a chart, add a polynomial trendline of degree 4 or 6, and you are 0.00013 away; the rest may be just a bit of optimization... 
[QUOTE=LaurV;532171]However, I do not understand this part about the final division by 2 that you, uau, and b2 are talking about. Why should you divide by the interval? The mean is for ALL the domain, not only for pieces of the domain. That is the reason for it: if your interval is longer, your approximation should be for all the interval, not per unit of interval. Otherwise the MSE makes no sense. In this light, their example has an MSE of 0.33333, that's it. [/QUOTE]
You have to sum all the error squares (which is the integral) and take the average to get the [B]mean[/B], by dividing by the length of the interval (1 - (-1) = 2). Here is a simulation to convince you: [CODE]myrand()=(random(2^64)-2^63)/2.^63; \\ random real number in the range -1..1
sum(n=1, 10^4, r=myrand(); e=abs(r)-(r+1)/2; e^2)/10^4[/CODE] 
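The same Monte Carlo argument, as a Python analogue of the PARI one-liner (seed and sample count are my own choices):

```python
import random

# Monte Carlo estimate of the MSE of (x+1)/2 as an approximation of abs(x)
# on [-1, 1]: average of the squared error over uniformly random points.
random.seed(1)
N = 200_000
total = 0.0
for _ in range(N):
    r = random.uniform(-1.0, 1.0)
    e = abs(r) - (r + 1) / 2
    total += e * e
est = total / N
print(est)  # ≈ 1/6, not 0.125
```

Averaging over random points in [-1,1] is exactly the "divide by the interval length" step: the sample mean converges to integral / 2, i.e. 1/6.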
Well, you are right, and I was a bit stupid, haha... Of course, if you have 61 values, you add them and divide the result by 61. If you have 89, you add them and divide the result by 89. If you have M82589933 values, you add them and divide the result by M82589933. When you integrate, that is how those little rectangles work... Grrr...
However, we solved this puzzle already, after half an hour of playing with MS Excel, with a much smaller epsilon than the required one. It is quite simple if you consider two things. Starting from the mentioned theorem, you need to find the largest-degree polynomial that you can write down with 15 operations (no powering; you must write x^n as x*x*x... n times, hehe), and then one of the two things you have to consider is the fact that the function is symmetric in x [SPOILER]therefore you can double the degree of the polynomial and yet use the same number of operations[/SPOILER], and the second one you will find by yourself... hehe... 
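The operation count behind the symmetry trick can be sketched as follows: compute y = x*x once (1 operation), then evaluate a polynomial in y by Horner's rule at 2 operations per coefficient step. With 15 operations that allows an even polynomial of degree 14 in x. A small Python illustration (the coefficients are arbitrary placeholders, not a solution):

```python
# Horner evaluation of an even polynomial via y = x*x, counting the
# {+,-,*,/} operations used. coeffs = [c0, c1, ..., ck] represents
# c0 + c1*y + ... + ck*y^k, i.e. degree 2k in x.
def eval_even_poly(x, coeffs):
    ops = 1          # y = x*x
    y = x * x
    acc = coeffs[-1]
    for c in reversed(coeffs[:-1]):
        acc = acc * y + c
        ops += 2     # one multiplication, one addition
    return acc, ops

# 8 coefficients -> degree 14 in x -> 1 + 2*7 = 15 operations
val, ops = eval_even_poly(0.5, [0.1, 2.0, -1.0, 0.5, -0.2, 0.05, -0.01, 0.001])
print(ops)  # 15
```

So the operation budget is enough for a degree-14 even polynomial, far beyond the naive degree-4 count.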