
User talk:Charles R Greathouse IV/Keywords/difficulty


Hard vs. easy

If we ever succeed in building a quantum computer, Shor's quantum algorithm would make the prime factorization problem easy (factorization in polynomial time). — Daniel Forgues 15:12, 9 May 2011 (UTC)

Or rather, a quantum computer with, say, a few thousand qubits—factoring 15 just isn't that impressive. I was actually looking for an example of a sequence that would have been considered hard that now would not be, thanks to advances in theory. But the example is good, I will add it.
Charles R Greathouse IV 15:19, 9 May 2011 (UTC)

You mention

This allows fast-growing sequences like A002416 to be classified as easy, even though generating a 10,000-term b-file might actually be time-consuming (it could fill a small hard drive).

Maybe we could have keyword:costly or keyword:expensive. Or, more specifically, keyword:costly-time (keyword:expensive-time) and keyword:costly-space (keyword:expensive-space).

keyword:expensive would apply to A002416. It would also apply to the sequence of primes (A000040) instead of keyword:hard since the Sieve of Eratosthenes is conceptually extremely simple ... — Daniel Forgues 05:06, 7 December 2011 (UTC)

I can think of few sequences for which "expensive" is less appropriate than the primes. The primes up to N can be generated in time O(N/log log N), i.e., o(log N) time per element or o(1) time per digit. That is, asymptotically, it takes less work to find the primes (in the lightly-coded Atkin-Bernstein format) than to list their digits...
For practical purposes, they're so cheap to compute that no one even stores tables of primes—we just regenerate them as appropriate. Every time I start a gp session I have it find the primes up to 10^8.
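To illustrate how cheap regeneration is (the comment above refers to gp; this is just an analogous sketch in Python, not the method used there), a minimal Sieve of Eratosthenes:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    if n < 2:
        return []
    sieve = bytearray([1]) * (n + 1)  # sieve[i] = 1 means "i might be prime"
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # cross off multiples of p, starting at p*p
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(n + 1) if sieve[i]]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Even this naive O(N log log N) version regenerates the primes up to 10^6 in well under a second on ordinary hardware, which is why storing tables is rarely worth it.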
Of course A002416 is similar in that the whole cost is in display (and decimal conversion, if you want it in that form). It differs from the primes in that the terms grow large quickly. I could see a keyword:huge for such sequences, but conceptually I wouldn't call them expensive or costly.
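To make the point concrete: A002416 is a(n) = 2^(n^2) (per the OEIS entry), so each term costs almost nothing to compute, while its decimal form grows quadratically in length. A minimal sketch:

```python
def a002416(n):
    """A002416: a(n) = 2^(n^2). Trivial to compute, huge to print."""
    return 2 ** (n * n)

# The number of decimal digits of a(n) is about 0.30103 * n^2 (log10 of 2),
# so the cost of a b-file is dominated by decimal conversion and output.
for n in (1, 5, 10, 100):
    print(n, len(str(a002416(n))))
```

Back-of-the-envelope: a 10,000-term b-file would hold roughly log10(2) * sum of n^2 for n < 10^4, about 10^11 digits, i.e., on the order of 100 GB — consistent with "fill a small hard drive" above.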
In the far future I could imagine adding metadata to sequences to explain how costly they are to compute in terms of time, space, and other resources. But I don't think it's nearly as simple as you make it sound:
  • Should we express costliness by degree rather than as a binary yes/no? Exponential/doubly-exponential/tetrational/non-elementary? Quadratic, subexponential, etc.?
  • What about lower and upper bounds? A sequence might be known to take exponential time to compute but the best-known algorithm may take doubly-exponential time.
  • What about resource tradeoffs? For many sequences I suspect the best possible time is fast and the best possible space is small, but the product of the two is not: if you have a terabyte of RAM you can compute lots of terms in a few minutes, and you can compute just as many terms with 1 kB of memory if you're willing to wait centuries, but if you have only (say) a gig of memory you'd need a lot of crunching.
  • What about other resources? There are practical ones like randomness, questionable ones like quantum entanglement, and purely theoretical ones like nondeterminism and alternation.
Charles R Greathouse IV 07:31, 7 December 2011 (UTC)

More with hard

I strongly suggest adding "more" to all "hard" sequences that have only a few terms. The keyword "more" is perhaps the primary key to search for interesting sequences to extend! - Joerg Arndt 08:53, 24 February 2015 (UTC)