Solutions, Writeups, Travel Notes, and Ramblings
Given a triplet $(a, b, c)$, produce polynomials $A, B, C$ over $\mathbb{F}_p$ such that $A + B = C$, and $A$, $B$, $C$ have exactly $a$, $b$, $c$ distinct roots in $\mathbb{F}_p$, respectively.
Additionally, require that $A$, $B$, and $C$ are all nonzero.
Among all solutions, find one that minimizes the maximum degree among $A$, $B$, $C$.
Assume $p$ is a large prime (provided by the server), and $a, b, c \ge 0$.
This is equivalent to finding $A + B + C = 0$ (replace $C$ by $-C$), and without loss of generality we can assume $a \le b \le c$.
Since $C$ has $c$ distinct roots, $\deg C \ge c$. Therefore, the minimized maximum degree over $A$, $B$, $C$ is at least $c$.
Since $p$ is very large, a random degree-$n$ polynomial is irreducible with probability about $1/n$, and splits completely (has $n$ distinct roots) with probability about $1/n!$.
Thus, our use of randomness should rely on irreducibility rather than on forcing distinct roots.
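To see these probabilities concretely, here is a rough empirical check (a standalone sketch, not part of the exploit; the small prime 10007, the degree 4, and the trial count are arbitrary illustrative choices):

from sage.all import GF

# Estimate how often a random monic degree-n polynomial over F_q is irreducible
# (about 1/n) versus has n distinct roots (about 1/n!).
q, n, trials = 10007, 4, 20000
F = GF(q)
R = F['x']
x = R.gen()
irreducible = split = 0
for _ in range(trials):
    f = x**n + R([F.random_element() for _ in range(n)])  # random monic polynomial of degree n
    irreducible += f.is_irreducible()
    split += (len(f.roots()) == n)
print(irreducible / trials, 1.0 / n)    # both around 0.25
print(split / trials, 1.0 / 24)         # both around 0.042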
To make a degree-$n$ polynomial have fewer than $n$ distinct roots, we have two strategies: give it repeated roots (e.g. a factor $(x - t)^2$), or give it irreducible factors of degree at least $2$, which contribute to the degree without contributing any roots.
If we explicitly construct the polynomial, the first approach is convenient. If we generate polynomials randomly, the second is more practical.
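A tiny illustration of the two strategies in Sage (the modulus 103 and the chosen roots are arbitrary, not from the challenge):

from sage.all import GF

R = GF(103)['x']
x = R.gen()

f = (x - 1)**2 * (x - 2)                  # strategy 1: a repeated root; degree 3, only 2 distinct roots
print(f.degree(), len(f.roots()))         # 3 2

g = R.irreducible_element(2) * (x - 3)    # strategy 2: an irreducible quadratic factor; degree 3, only 1 distinct root
print(g.degree(), len(g.roots()))         # 3 1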
If $a = 0$ and $c \ge 2$, we can do the following:
Pick $b$ distinct $s_1, \dots, s_b \in \mathbb{F}_p$, and set $B = \prod_{i=1}^{b} (x - s_i)$, which has $b$ distinct roots.
Pick $c$ distinct $t_1, \dots, t_c \in \mathbb{F}_p$, and set $C = \prod_{i=1}^{c} (x - t_i)$, which has $c$ distinct roots.
Let $A = -B - C$. If $A$ is irreducible, we have a valid solution for $(0, b, c)$: $\deg A = c \ge 2$, so an irreducible $A$ has no roots in $\mathbb{F}_p$. Otherwise, repeat.
Here $A$ should behave similarly to a random polynomial of degree $c$, so each attempt succeeds with probability on the order of $1/c$, and we expect roughly $c$ attempts.
If $a \ge 1$, pick a random $t \in \mathbb{F}_p$, recursively solve the sub-instance $(a - 1, b - 1, c - 1)$ to obtain $A$, $B$, $C$, and then return $A \cdot (x - t)$, $B \cdot (x - t)$, $C \cdot (x - t)$.
There are three remaining cases with $c \le 1$:
$(0, 0, 0)$: take nonzero constants, e.g. $A = 1$, $B = 2$, $C = -3$.
$(0, 1, 1)$: take $A = 1$, $B = x + 1$, $C = -x - 2$.
$(0, 0, 1)$: maximum degree $1$ is impossible, since a degree-$1$ polynomial always has a root; so aim for degree $2$: pick $C = (x - t)^2$, a random irreducible quadratic $A$, set $B = -A - C$, and retry until $B$ is irreducible.
Finally, since $(0, 0, 1)$ does not yield a solution with maximum degree equal to $c$, the recursion above would give maximum degree $3$ for $(1, 1, 2)$, so we must also handle $(1, 1, 2)$ specially. One approach: set $A = (x - t)^2$, $B = x - u$ for random $t, u$, and check whether $C = -A - B$ has two distinct roots (which holds for roughly half of the choices).
from pwn import *
from sage.all import *
from functools import lru_cache

context.log_level = 'debug'
r = remote('abc-conjecture.chal.crewc.tf', 1337, ssl=True)
r.recvuntil(b'p=')
p = int(r.recvline().strip().decode())
print(f"{p=}")

F = GF(p)
R = F['x']
x = R.gen()

@lru_cache()
def solve2(a, b, c):
    # Normalize so that a <= b <= c, swapping the returned polynomials back accordingly.
    if not (a <= b <= c):
        if a > b:
            B, A, C = solve2(b, a, c)
        else:
            A, C, B = solve2(a, c, b)
        return A, B, C
    # Special case: recursing down to (0, 0, 1) would give maximum degree 3,
    # but maximum degree 2 is achievable directly.
    if (a, b, c) == (1, 1, 2):
        while True:
            A = (x - randint(0, p - 1))**2
            B = (x - randint(0, p - 1))
            C = -A - B
            if len(C.roots()) == 2:
                return A, B, C
    # Maximum degree 1 is impossible here, so aim for degree 2.
    if (a, b, c) == (0, 0, 1):
        while True:
            C = (x - randint(0, p - 1))**2
            A = R.irreducible_element(2)
            B = -A - C
            if B.is_irreducible():
                return A, B, C
    if (a, b, c) == (0, 1, 1):
        return R(1), R(x + 1), R(-x - 2)
    if (a, b, c) == (0, 0, 0):
        return R(1), R(2), R(-3)
    # Peel off a shared linear factor and recurse on (a - 1, b - 1, c - 1).
    if a >= 1:
        t = randint(0, p - 1)
        A, B, C = solve2(a - 1, b - 1, c - 1)
        return A * (x - t), B * (x - t), C * (x - t)
    # Generic case a == 0, c >= 2: random split B and C, retry until A is irreducible.
    while True:
        C = R(1)
        for _ in range(c):
            C *= x - randint(0, p - 1)
        B = R(1)
        for _ in range(b):
            B *= x - randint(0, p - 1)
        A = -C - B
        if A.is_irreducible():
            return A, B, C

def solve(a, b, c):
    # solve2 works with A + B + C = 0; flip the sign of C to get A + B = C.
    A, B, C = solve2(a, b, c)
    return A, B, -C

while True:
    r.recvuntil(b'Give me your solution for')
    a, b, c = map(int, r.recvline().strip().decode().split(', '))
    A, B, C = solve(a, b, c)
    r.sendline(','.join(map(str, A.list())).encode())
    r.sendline(','.join(map(str, B.list())).encode())
    r.sendline(','.join(map(str, C.list())).encode())
Given numbers $N$ and $K$ with $N \ge 0$ and $K \ge 3$.
For a sequence $s_1, s_2, \dots, s_L$ with entries in $\{0, 1, \dots, K - 1\}$, a non-empty subsequence specified by indices $i_1 < i_2 < \cdots < i_m$ is called rotating if $s_{i_{j+1}} \equiv s_{i_j} + 1 \pmod{K}$ for every $1 \le j < m$.
Note: $m$ may be $1$. Indices matter: two different index sets are counted as different subsequences even if they select the same values.
Given $N$ and $K$, construct a sequence $s$ over $\{0, 1, \dots, K - 1\}$ such that $s$ has exactly $N$ rotating subsequences, with the additional constraint that the length of $s$ is at most a given bound.
Let the sequence be $s_1, s_2, \dots, s_L$.
Let $f_v$ be the number of rotating subsequences of the current prefix whose last value equals $v$.
Then, when the next element $x$ is appended, $f_v$ is unchanged for every $v \ne x$, while $f_x \leftarrow f_x + 1 + f_{(x - 1) \bmod K}$.
The final count is $\sum_{v} f_v$.
def calc(s, k):
    f = [0] * k
    for x in s:
        f[x] += 1 + f[(x - 1) % k]
    return sum(f)
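For example, with $K = 2$ the sequence $0, 1, 0$ has exactly six rotating subsequences: the three single elements, the index pairs $(1, 2)$ and $(2, 3)$, and the full sequence $(1, 2, 3)$. This is an illustrative example, not from the challenge:

assert calc([0, 1, 0], 2) == 6   # 3 singletons + 2 index pairs + the full sequence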
What is the simplest case? Smaller $K$ should be easier, so consider $K = 2$ (and we'll soon see why it was excluded).
When $K = 2$, $f$ has two components; denote it by $(f_0, f_1)$.
When a new element arrives, $(f_0, f_1)$ becomes either $(f_0 + f_1 + 1, f_1)$ or $(f_0, f_0 + f_1 + 1)$. Initially it is $(0, 0)$.
Let $A = f_0 + 1$, $B = f_1 + 1$. Then the transition becomes $(A, B) \to (A + B, B)$ or $(A, B) \to (A, A + B)$, starting from $(1, 1)$.
This shows a necessary condition for a pair $(A, B)$ to be reachable is $\gcd(A, B) = 1$.
It is also sufficient: the subtraction-based Euclidean algorithm applied to any coprime $(A, B)$ will lead back to $(1, 1)$, and this is essentially the unique reverse path.
Given $N$, find $(A, B)$ such that $A + B = N + 2$ and $\gcd(A, B) = 1$, then reconstruct the sequence via the subtraction-based Euclidean algorithm.
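As a small end-to-end check of this correspondence, here is a minimal sketch (the coprime pair $(7, 5)$ is an arbitrary example, so $N = 7 + 5 - 2 = 10$):

def reconstruct(a, b):
    # Subtraction-based Euclid: walk (a, b) back to (1, 1); each step undoes one append.
    s = []
    while a > 1 or b > 1:
        if a > b:
            a -= b          # undoes (A, B) -> (A + B, B), i.e. an appended 0
            s.append(0)
        else:
            b -= a          # undoes (A, B) -> (A, A + B), i.e. an appended 1
            s.append(1)
    return s[::-1]

def calc(s, k):
    f = [0] * k
    for x in s:
        f[x] += 1 + f[(x - 1) % k]
    return sum(f)

s = reconstruct(7, 5)
print(s)                    # [0, 1, 1, 0]
assert calc(s, 2) == 7 + 5 - 2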
But which partition of $N + 2$ is optimal? Alternatively: if we want to reach a large $A + B$ in as few steps as possible, what do optimal pairs look like?
Repeatedly alternating the two transitions yields Fibonacci-like growth, i.e., $A$ and $B$ become consecutive Fibonacci numbers and $A + B$ grows like $\varphi^m$ after $m$ steps.
We can hypothesize that the best choice satisfies $A / B \approx \varphi$, where $\varphi = \frac{1 + \sqrt{5}}{2}$ is the golden ratio.
Here is a script that follows this intuition:
from decimal import *
from math import gcd
import random

def compute_golden_ratio():
    # Note: this is (sqrt(5) - 1) / 2 ~ 0.618, i.e. 1/phi, so golden_round(t) ~ t/phi.
    sqrt_5 = Decimal(5).sqrt()
    phi = (sqrt_5 - Decimal(1)) / Decimal(2)
    return phi

getcontext().prec = 500
phi = compute_golden_ratio()

def golden_round(x):
    x_dec = Decimal(str(x))
    result = x_dec * phi
    return int(result)

def solve2(n):
    # Split n + 2 into a coprime pair (a, b) with a/b close to the golden ratio,
    # then reverse the process via the subtraction-based Euclidean algorithm.
    t = n + 2
    a = golden_round(t)
    while gcd(a, t) != 1:
        a += 1
    b = t - a
    s = []
    while a > 1 or b > 1:
        if a > b:
            a -= b
            s.append(0)
        else:
            b -= a
            s.append(1)
    print(a, b, len(s))
    return s[::-1]

def calc(s, k):
    f = [0] * k
    for x in s:
        f[x] += 1 + f[(x - 1) % k]
    return sum(f)

n = random.randint(0, 2**256)
assert calc(solve2(n), 2) == n
While this finds a solution for $K = 2$, it rarely fits the length bound. It performs much better than choosing a random partition (e.g., replacing a = golden_round(t) with a = random.randint(1, n)), but it is still too long in many cases.
In fact, the minimal sequence length for a given $N$ is described by OEIS A178047, which is hard to optimize against directly. This likely explains why $K = 2$ is excluded in the challenge.
Similarly, for $K = 3$ write the state as a triple $(A, B, C) = (f_0 + 1, f_1 + 1, f_2 + 1)$, starting from $(1, 1, 1)$. Each step transforms it to one of:
$(A + C, B, C)$, $(A, A + B, C)$, or $(A, B, B + C)$.
It can be shown that $\gcd(A, B, C) = 1$ is necessary and sufficient for reachability.
A natural guess is that the cyclic sequence $0, 1, 2, 0, 1, 2, \dots$ maximizes the growth rate of $A + B + C$. The limiting ratio of consecutive updated components, $B / A$ or $C / B$, tends to the supergolden ratio $\psi \approx 1.4656$, the real root of $x^3 = x^2 + 1$.
Thus, we can partition $N + 3 = A + B + C$ with $\gcd(A, B, C) = 1$ and target $B / A \approx \psi$, $C / B \approx \psi$.
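A quick numerical sanity check of this limiting ratio (a standalone sketch; the iteration count 60 is arbitrary):

# Iterate the cyclic forward update for k = 3 and compare the ratio of the two most
# recently updated components with the real root of x^3 = x^2 + 1.
k = 3
a = [1] * k
for i in range(60):
    a[i % k] += a[(i - 1) % k]
print(a[-1] / a[-2])        # ~1.465571...

lo, hi = 1.0, 2.0           # bisect x^3 - x^2 - 1 on [1, 2]
for _ in range(80):
    mid = (lo + hi) / 2
    if mid**3 < mid**2 + 1:
        lo = mid
    else:
        hi = mid
print(lo)                   # ~1.465571...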
However, we now have multiple possible reverse moves: e.g., $(A, B, C) \to (A - C, B, C)$ or $(A, B, C) \to (A, B - A, C)$. Which is better?
Heuristic: greedily choose the reverse move that keeps the ratios of adjacent components as close as possible to the limiting ratio. Concretely, the solver below scores a candidate state $(p_1, \dots, p_K)$ by $\sum_i \lvert p_i / p_{i+1} - r \rvert$ for a fixed reference ratio $r$, and picks the reverse move that minimizes the new score.
For larger $K$, the condition that the $\gcd$ of all $K$ components equals $1$ remains necessary and sufficient.
There is also a limiting ratio for successive components: the unique real root greater than $1$ of $x^K = x^{K-1} + 1$.
However, when choosing the next reverse step, there are up to $K$ options. We need to optimize the selection efficiently.
from math import gcd
import random

C = 512

def gcdall(s):
    g = s[0]
    for i in range(1, len(s)):
        g = gcd(g, s[i])
        if g == 1:
            return 1
    return g

class Solver:
    def __init__(self, k):
        self.r = random.Random(123)
        self.k = k
        # Approximate the growth ratios by iterating k*C steps
        a = [1] * k
        for i in range(k * C):
            a[i % k] += a[(i - 1) % k]
        self.a = a
        self.sa = sum(a)
        # Reference ratio between the two most recently updated components,
        # scaled by 2^C to keep everything in integer arithmetic.
        self.fr = (a[-1] << C) // a[-2]

    def solve(self, n):
        n += self.k
        # Initial partition of n proportional to the model state a, nudged until
        # the components are coprime.
        p = [n * x // self.sa for x in self.a]
        for _ in range(n - sum(p)):
            p[self.r.randint(0, self.k - 1)] += 1
        while gcdall(p) != 1:
            p[self.r.randint(0, self.k - 1)] += 1
            p[self.r.randint(0, self.k - 1)] -= 1
        assert sum(p) == n
        # f stores the per-edge “distance” terms
        f = [0] * self.k
        for i in range(self.k - 1):
            f[i] = abs((p[i] << C) // p[i + 1] - self.fr)
        fsum = sum(f)
        pmin = (p[0], 0)
        # Compute change in fsum if we replace p[ci] with civ and track new minimum
        def fsumdelta(i, ci, civ, npmin):
            res = -f[i]
            if (i + 1) % self.k == npmin[1]:
                return res
            i1 = (i + 1) % self.k
            a = civ if i == ci else p[i]
            b = civ if i1 == ci else p[i1]
            return res + abs((a << C) // b - self.fr)
        res = []
        while max(p) > 1:
            # Try every legal reverse move p[i1] -> p[i1] - p[i] and keep the one
            # with the smallest change in the total ratio deviation fsum.
            best = None
            for i in range(self.k):
                i1 = (i + 1) % self.k
                if p[i] < p[i1]:
                    t = p[i1] - p[i]
                    npmin = pmin if t >= pmin[0] else (t, i1)
                    dset = {i, i1}
                    if t < pmin[0]:
                        dset.add((pmin[1] - 1) % self.k)
                    deltal = {x: fsumdelta(x, i1, t, npmin) for x in dset}
                    delta = sum(deltal.values())
                    if best is None or delta < best[0]:
                        best = (delta, i, i1, t, npmin, deltal)
            fsum += best[0]
            # Recording i (rather than i1) shifts every output value by -1 mod k,
            # which does not change the number of rotating subsequences.
            res.append(best[1])
            p[best[2]] = best[3]
            pmin = best[4]
            for x, y in best[5].items():
                f[x] += y
        print(len(res))
        return res[::-1]

if __name__ == '__main__':
    from pwn import *
    solvers = {}
    for i in range(3, 103):
        solvers[i] = Solver(i)
    context.log_level = 'debug'
    r = remote('rotating-subsequence.chal.crewc.tf', 1337, ssl=True)
    for _ in range(50):
        r.recvuntil(b'N=')
        n = int(r.recvline().strip().decode())
        r.recvuntil(b'K=')
        k = int(r.recvline().strip().decode())
        s = solvers[k].solve(n)
        r.sendline(','.join(map(str, s)).encode())
    r.interactive()
Date: 2025-09-22