Google hasn't been helpful because no such algorithm exists. Check out Rice's Theorem for the impossibility.

edit:

Let S be the set of languages that you can reduce SAT to in polynomial time.

SAT is clearly in S (via the identity reduction), and we know some machine recognizes it.

The empty language is not in S, since a reduction must map every satisfiable formula to a string in the target language and the empty language has none (this holds even if P=NP, so that SAT is P-complete), and we know some machine recognizes it.

By Rice's Theorem, no machine decides, when given a machine as input, whether that machine recognizes a language in S.

(we assume that the "any custom problem" input is given as a machine encoding)
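To make the Rice-style argument concrete, here is a toy sketch in Python, treating Python functions as stand-ins for Turing machines. The names `sat` and `build_m_prime` are illustrative inventions, not a real library: the point is that a decider for "does this machine recognize a language in S" would let you decide halting.

```python
from itertools import product

def sat(formula):
    """Brute-force SAT membership test. A formula is a list of clauses,
    each clause a list of nonzero signed variable indices (DIMACS-style):
    literal k means variable k is true, -k means it is false."""
    n = max(abs(lit) for clause in formula for lit in clause)
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in formula):
            return True
    return False

def build_m_prime(m, w):
    """The classic construction: M' first simulates M on w, then decides
    SAT on its own input.  If M halts on w, L(M') = SAT, which is in S;
    if M loops on w, L(M') is the empty language, which is not in S."""
    def m_prime(x):
        m(w)           # may loop forever -- then M' accepts nothing
        return sat(x)  # otherwise M' recognizes exactly SAT
    return m_prime
```

If some machine D decided membership of L(M') in S, then D(build_m_prime(m, w)) would answer exactly "does m halt on w", contradicting the undecidability of the Halting Problem.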

edit2:

I see that you ask a lot of questions about computational complexity but do not yet have a good foundation. Many of the things you propose or ask about already have known impossibility results. May I suggest you have a look at Sipser (https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/113318779X)? It will give you the grounding in computability and complexity you need to understand the feedback you're getting.

There's so much more to CS theory than big-O notation, and you're in for a treat if you check out any of the below. Happy to share more if people are interested.

For anyone who'd want to know more, this is the textbook I used in a CS theory class where I first learned about the Chomsky connection: Introduction to the Theory of Computation by Michael Sipser (https://www.amazon.com/dp/113318779X/ref=rdr_ext_tmb).

(Edit: added the title and author name in my post)

I conjecture that hypercomputation is impossible for the following simple reason. If you could solve the Halting Problem [1], then you could build a "living" contradiction: a machine that halts if and only if it does not halt (see the proof in [2]). It wouldn't be a mere description of a paradox, it would be an actual one. If contradictions are realizable (something we implicitly believe to be impossible), then Reality is far stranger than we have ever suspected or observed.
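The construction behind that proof can be sketched in a few lines of Python, again using functions as a toy model of machines. Here `halts` is the hypothetical halting decider (nothing real can implement it):

```python
def diagonal(halts):
    """Given a claimed halting decider halts(f, x) -> bool, build the
    self-defeating machine D from the classic diagonal argument."""
    def d(f):
        if halts(f, f):
            while True:   # the decider said "halts", so loop forever
                pass
        # the decider said "loops forever", so halt immediately
    return d
```

Running d(d) halts exactly when halts(d, d) says it doesn't, so no correct `halts` can exist; a hypercomputer that implemented one would turn this paradox from a description into an actual event.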

this is the canonical undergrad book

https://www.amazon.com/Introduction-Theory-Computation-Michael-Sipser/dp/113318779X

it's colloquial and has a lot of diagrams and "intuition". people really like it, but i actually think hopcroft and ullman is better because it's more structured

https://www.amazon.com/Introduction-Automata-Theory-Language...

1. Computational Complexity: A Modern Approach (Arora and Barak): https://www.amazon.com/Computational-Complexity-Approach-San...

2. Quantum Computing since Democritus (Aaronson): https://www.amazon.com/Quantum-Computing-since-Democritus-Aa...

3. Gödel, Escher, Bach (Hofstadter): https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden...

4. Introduction to the Theory of Computation (Sipser): https://www.amazon.com/Introduction-Theory-Computation-Micha...

[1] https://www.amazon.com/Introduction-Theory-Computation-Micha...