Bounding a function (lemmy.world)
submitted 10 months ago* (last edited 10 months ago) by zkfcfbzr@lemmy.world to c/dailymaths@lemmy.world
 

Consider the function defined by y = x^(sin(x)^sin(x)). Observe its graph. Find an increasing function which passes through each of its local maximums, and another increasing function which passes through each of its local minimums.

Extra credit: You'll notice the graph isn't drawn for x-values which make sin(x) negative. This is because most of those values make the function undefined - it's still defined at infinitely many points in those intervals, but it also has infinitely many holes. Since it lacks continuity there, it has no true local maxes or local mins, and doesn't affect the original problem. We can nonetheless cheat and fill in the holes by expanding the function to these regions with y = x^|sin(x)|^sin(x). (Using x^-|sin(x)|^sin(x) should also be technically valid, but it's being ignored because it's discontinuous with the rest of the graph and not as pretty; it will be mentioned in my solution.) Doing so adds more local maxes and local mins. The new local mins should line up with the function you found for the original function's local maxes - but find a new function which hits all of the new local maxes.
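If you want to eyeball the graph yourself first, a rough numpy/matplotlib sketch along these lines should do it (the plotting range and styling are arbitrary):

```python
# Quick sketch of the original x^(sin(x)^sin(x)) and the extra-credit
# extension x^(|sin(x)|^sin(x)), which fills in the gaps where sin(x) < 0.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.01, 20, 4000)
s = np.sin(x)

with np.errstate(invalid="ignore"):
    y_orig = np.where(s > 0, x ** (s ** s), np.nan)  # undefined where sin(x) <= 0
y_ext = x ** (np.abs(s) ** s)                        # the extension is defined almost everywhere

plt.plot(x, y_ext, color="lightgray", label="x^(|sin x|^sin x)")
plt.plot(x, y_orig, label="x^(sin x^sin x)")
plt.legend()
plt.show()
```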

top 4 comments
[–] zkfcfbzr@lemmy.world 2 points 10 months ago

Background / incredibly minor spoilers: Unlike the other problems I've posted, this is the only one (so far) that I came up with on my own - back in high school, while playing around with my calculator, I entered this function at some point and got curious about the minimums. I was in pre-calculus at the time, so it took me a while - but I did eventually work out the solution. Needless to say, I was pretty happy the answer was something as cool as it was. I actually never worked out (or tried to work out) the extra credit portion until I wrote up the problem for this post, so it was a nice revisit of a very old problem.

The language asking for an "increasing" function is mostly to avoid smartasses who might submit the original equation as a technically correct solution. Basically I want the simplest possible function that passes through the described points.

[–] siriusmart@lemmy.world 2 points 10 months ago* (last edited 10 months ago) (1 children)
[–] zkfcfbzr@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

I've posted my solution.

The question of why the bounds swap isn't too relevant to the problem, since there are no maxes or mins below x = 1 - but the reason they swap is that numbers between 0 and 1 get larger with smaller powers, and smaller with larger powers. Think of how 0.5^2 = 0.25 - the more a (positive) power is biased toward "rooting" a number in this interval (biased toward 0), the larger the result will be - so the minimum of g (the exponent, sin(x)^sin(x)) now produces a max value, and the max of g yields a minimum value.

Edit: If it helps, you can also think of numbers between 0 and 1 as being numbers larger than 1, but with a negative power. So something like 0.5^2 is equivalent to (2^-1)^2 = 2^-2. So larger powers become lesser powers that way.

Really, it's for the same reason the graphs of 2^x and 0.5^x are reflections of each other.
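If it helps, here's the same idea with actual numbers plugged in:

```python
# For a base between 0 and 1, larger exponents give smaller results -
# which is why the max/min roles of the exponent swap on that interval.
for p in [0.25, 0.5, 1, 2, 4]:
    print(f"0.5 ** {p} = {0.5 ** p:.4f}    2 ** {-p} = {2 ** -p:.4f}")
# The two columns match: 0.5^p is the same number as 2^(-p), so a larger p
# just means a more negative power of 2, i.e. a smaller value.
```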

[–] zkfcfbzr@lemmy.world 2 points 10 months ago* (last edited 10 months ago)

Solution: For both of these, it's sufficient to consider only sin(x)^sin(x). It stands to reason that the maximums will happen when sin(x)^sin(x) is at its maximum value, and the minimums will happen when sin(x)^sin(x) is at its minimum value. The outer x^(...) part really just makes the function prettier to look at. The function lacks any continuity when sin(x) is negative, so we can consider just the regions where sin(x) is positive.

For the maximums, this is very easy: The largest sin(x) can be is 1, and so the largest value for sin(x)^sin(x) is also 1. So the monotonically increasing function which passes through each local maximum is y = x^1, or just y = x.

For the minimums it's a bit trickier: The smallest sin(x) can be is 0, and while 0^0 is indeterminate, it happens to approach 1 in this case - our minimum is somewhere else (side note: sin(x)^sin(x) approaching 1 for these values is why our max function also exactly bounds the flailing arms of the function). So we turn to derivatives. We can do this using either sin(x)^sin(x) or x^x - I chose to use sin(x)^sin(x) here and x^x (sort of) in the extra credit. Using sin(x)^sin(x) here has the added benefit of also showing the maximums are when sin(x) = 1.
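To get a feel for it first, here's roughly what t^t does on (0, 1] (the sample points are arbitrary):

```python
import math

# t^t on (0, 1]: equals 1 at t = 1, heads back toward 1 as t -> 0+,
# and dips to a minimum somewhere in between (near t = 1/e ~ 0.368).
for t in [0.001, 0.01, 0.1, 0.25, 1 / math.e, 0.5, 0.75, 1.0]:
    print(f"t = {t:.4f}   t**t = {t ** t:.6f}")
```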

y = sin(x)^sin(x)

y = e^(ln(sin(x)) * sin(x))

y' = sin(x)^sin(x) * (cos(x)/sin(x) * sin(x) + ln(sin(x)) * cos(x)) → Chain rule, product rule, and chain rule again

y' = sin(x)^sin(x) * (cos(x) + ln(sin(x)) * cos(x)) → Simplify

y' = sin(x)^sin(x) * cos(x) * (1 + ln(sin(x))) → Factor
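If you don't want to take the algebra on faith, a quick sympy check of the factored form should look something like this:

```python
# Sanity-check the factored derivative of sin(x)^sin(x) with sympy.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.sin(x) ** sp.sin(x)

claimed = sp.sin(x) ** sp.sin(x) * sp.cos(x) * (1 + sp.log(sp.sin(x)))
print(sp.simplify(sp.diff(y, x) - claimed))   # prints 0 if the algebra checks out
```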

We can set each of these factors to 0.

sin(x)^sin(x) will never be 0.

cos(x) will be 0 when x = π/2 + 2πn (2πn instead of πn because we're skipping over the solutions where sin(x) is negative). These are the maximums, because when x = π/2 + 2πn we have sin(x) = 1.

1 + ln(sin(x)) = 0

ln(sin(x)) = -1

sin(x) = e^-1 = 1/e

And that's actually sufficient for our purposes - we could of course say x = arcsin(1/e), but our goal is to calculate the value of sin(x)^sin(x). Since sin(x) = 1/e, we have sin(x)^sin(x) = (1/e)^(1/e). So our monotonically increasing function which hits all the minimums is y = x^((1/e)^(1/e)), or: y = (e-th root of e)-th root of x.
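A quick numeric spot-check, if anyone wants one: at the x-values where sin(x) = 1 the function lands exactly on y = x, and where sin(x) = 1/e it lands exactly on y = x^((1/e)^(1/e)) - the points the two bounding curves come from (the choice of n values below is arbitrary):

```python
import numpy as np

def f(t):
    return t ** (np.sin(t) ** np.sin(t))

p = (1 / np.e) ** (1 / np.e)   # exponent of the lower bounding curve

for n in range(1, 4):
    x_top = np.pi / 2 + 2 * np.pi * n            # sin(x) = 1 here
    x_bot = np.arcsin(1 / np.e) + 2 * np.pi * n  # sin(x) = 1/e here
    print(f(x_top), x_top)                       # equal: the point sits on y = x
    print(f(x_bot), x_bot ** p)                  # equal: sits on y = x^((1/e)^(1/e))
```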

Graphed solution visible here.

For the extra credit, we can do the same basic idea: find the maximum of |sin(x)|^sin(x). Since we're in a region where sin(x) is negative, this is |sin(x)|^(-|sin(x)|), so we can model it as x^-x (with x standing in for |sin(x)|), and the solution will be valid as long as x stays between 0 and 1 - the range |sin(x)| covers.

y = x^-x = e^(ln(x) * -x)

y' = x^-x * (-1 - ln(x))

x^-x won't ever be 0, and -1 - ln(x) will be 0 when ln(x) = -1, or x = 1/e again, same as before. So our maximum value is (1/e)^(-1/e), which simplifies to e^(1/e). In other words, our new maximum function is x^(e^(1/e)), or x to the power of the e-th root of e. Graph of all three solution functions
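Same kind of sanity check for the extra credit, if you want it (the sympy/numpy usage and the sample point are my own arbitrary choices):

```python
# Check the extra-credit critical point and value, then spot-check
# that one of the new maxes lands on y = x^(e^(1/e)).
import numpy as np
import sympy as sp

t = sp.symbols('t', positive=True)
dydt = sp.diff(t ** (-t), t)
print(sp.simplify(dydt.subs(t, sp.exp(-1))))          # 0 -> t = 1/e is a critical point
print(sp.simplify((t ** (-t)).subs(t, sp.exp(-1))))   # exp(exp(-1)), i.e. e^(1/e)

x0 = np.pi + np.arcsin(1 / np.e) + 2 * np.pi          # a point where sin(x) = -1/e
f_ext = x0 ** (abs(np.sin(x0)) ** np.sin(x0))         # the extended function there
print(f_ext, x0 ** (np.e ** (1 / np.e)))              # equal: lands on y = x^(e^(1/e))
```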

And, finally - I mentioned that we should also be able to expand the function to its less-defined regions using y = x^-|sin(x)|^sin(x). I don't care enough to post the full work here, but the reciprocals of the three solutions above are what work here: y = 1/x, y = 1/x^((1/e)^(1/e)), and y = 1/x^(e^(1/e)). Graph. Note that the upper function here only applies in regions where the original function is already defined, so the other parts of this extension are invalid - you can toggle the main function at the top off to see only the parts that truly apply.
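If it's not obvious why reciprocals do the job: x^(-g(x)) is just 1/x^(g(x)) pointwise, which a quick check confirms (the sample x-values are arbitrary):

```python
# The negative-exponent extension is the reciprocal of the positive one
# pointwise, so its bounding curves are the reciprocals of the answers above.
import numpy as np

x = np.linspace(1.0, 20.0, 9)
g = np.abs(np.sin(x)) ** np.sin(x)
print(np.allclose(x ** (-g), 1 / x ** g))   # True
```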

Nothing particularly interesting happens if you extend things to values of x less than 0 - the graphs (more or less) just reflect across the y-axis.

Bonus graph with everything + phase shifts and pretty colors