Tomorrow Capital Research · Future Initiative

Tomorrow’s Conjectures

A proposed programme built around five grand, unsolved problems of algorithmic finance. Inspired by the Hilbert Problems and the Millennium Prize Problems, Tomorrow’s Conjectures invites solutions that are both mathematically rigorous and practically deployable.

Disclaimer: This page is a working mock-up for a future initiative. All content is provisional and should be treated as illustrative only.

  • 5 core conjectures
  • Monetary prize per solution
  • Tomorrow Medal & formal recognition
  • Publication & amplification of results

I. Why these problems?

Grand challenges for quantitative finance

Scientific progress has often been driven by ambitious lists of open problems. David Hilbert’s famous problems and the Clay Mathematics Institute’s Millennium Prize Problems focused entire communities on questions that, once answered, reshaped their fields.

Finance deserves its own version. Markets are complex, adaptive, non-stationary systems. Incremental tweaks to models and isolated backtests are not enough; we need clear, deep questions about the structure and limits of algorithmic markets themselves.

Tomorrow’s Conjectures is our proposed response: five precisely stated problems that sit at the intersection of quantitative finance, computation, and game theory. Each conjecture is designed to:

  • Expose a grand challenge in algorithmic trading or allocation
  • Admit a well-posed mathematical resolution
  • Lead to algorithms that can be deployed in actual markets
II. Programme Structure

What we plan to offer

For each conjecture, Tomorrow Capital Research intends to:

  • Publish a clear statement of the problem, including suggested metrics and minimal technical background.
  • Invite submissions from researchers, practitioners, and teams worldwide.
  • Reward successful resolutions with:
    • A monetary prize (amount to be announced)
    • The Tomorrow Medal, our highest honour
    • Active support in sharing and promoting the result across academic, open-source, and industry channels.
    • A piece of our company, so that you share directly in the upside as your work is implemented to improve the world of finance

The spirit is simple: if you crack one of these problems in a way that holds up to serious scrutiny, we want the world to know it. And we want you to take a piece of our company and run with it.

III. Solution Standard

What a “solved” conjecture means

A conjecture is not considered solved just because a strategy backtests well. To qualify for an award, a submission must meet three criteria:

  • 1. Formal mathematics. A clear and rigorous resolution of the conjecture: a proof, a counterexample, or a corrected formulation with proof of the revised claim.
  • 2. Verifiable algorithm. A computationally tractable algorithm derived from the theory, specified in enough detail that others can implement and test it.
  • 3. Robust validation. Evidence that the algorithm generalises: out-of-sample tests, blinded data when possible, and diagnostics that go beyond a single data slice or market regime.

In short: theory we can check, code we can run, results that hold up.

IV. The Five Problems

Tomorrow’s Conjectures (high-level)

The full technical statements will live in a separate document. Here is a high-level overview of the five conjectures we propose to open:

Conjecture 1 — Dynamic reallocation between strategies under path switching

Stochastic Processes · Path Stitching · Strategy Reallocation

Consider a single risky asset with price process (St) observed at discrete times 0 = t0 < t1 < … < tN = T. We are given a family of models {ℳk} under which we can simulate Monte Carlo paths, and a finite collection of trading strategies {π¹, …, πK} (e.g. mean-reversion, trend-following).

  1. Initial regime (0 → T₁). At time t = 0, choose an initial model ℳ₁ and simulate a set of paths {X(1,i)t} on [0, T₁]. Using a chosen objective (e.g. risk-adjusted return), select a “best” strategy π(1) (for example, mean-reversion) and trade according to its position trajectory θ(1)t over [0, T₁]. Once realised prices on [0, T₁] are known, we can evaluate how well (ℳ₁, π(1)) fit the realised path.
  2. Event and model update at T₂. At some later time T₂ ∈ (T₁, T), a significant event or regime change occurs. The original model ℳ₁ and strategy π(1) are no longer believed to be optimal beyond T₂. We adopt a new model ℳ₂ and a new “target” strategy π(2) (for example, trend-following) with position trajectory θ(2)t on [T₂, T₃], where T₃ > T₂ is the time by which the transition to the new regime should be complete.
  3. Re-simulation in the new regime (T₂ → T₃). Starting from the realised price ST₂, simulate a new family of paths {X(2,j)t} on [T₂, T₃] under ℳ₂ and obtain the positions implied by π(2). The portfolio at T₂ currently holds the legacy position θ(1)T₂, but we want to move toward θ(2)t over [T₂, T₃], subject to frictions (transaction costs, liquidity, risk limits).

The core problem on [T₂, T₃] is to define a dynamically consistent transition from the old position to the new one. For instance, introducing a control process wt ∈ [0, 1] representing the fraction of capital allocated to the new regime, the realised position at time t could be written as

θt = (1 − wt) · θ(1)t + wt · θ(2)t,   t ∈ [T₂, T₃].

We then seek a rule for wt (or, in discrete time, for allocations at t₁, t₂, …, tn between T₂ and T₃) that trades off:

  • closeness to the new “optimal” path θ(2)t,
  • trading costs and turnover, and
  • risk and capital constraints along the entire transition.

Challenge. Given a sequence of model/strategy pairs (ℳ₁, π(1)), (ℳ₂, π(2)), … and simulated paths plus realised prices at update times 0 < T₁ < T₂ < T₃ < …, design either:

  1. A mathematical framework (loss functional, cost functional, and a notion of “shortest” or most coherent adjustment between position paths), or
  2. An explicit algorithm that, at each update time Tk, outputs a dynamically consistent sequence of positions θt₁, …, θtn or allocations between strategies on [Tk, Tk+1],

describing how capital should be reallocated from the old regime to the new one in a way that is path-wise coherent, cost-efficient, and robust across Monte Carlo scenarios.
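As a toy illustration of the blending rule θt = (1 − wt)·θ(1)t + wt·θ(2)t, the sketch below (our own construction, not part of the conjecture) grid-searches a one-parameter family of ramps wt = t^p on a discretised [T₂, T₃], trading off closeness to the new path against a quadratic, impact-style transaction cost. The loss functional and the ramp family are illustrative assumptions only.

```python
import numpy as np

def transition_schedule(theta_old, theta_new, turnover_cost, tracking_weight=1.0):
    """Pick the ramp w_t = t**p that best trades tracking error vs. trading cost."""
    n = len(theta_old)
    t = np.arange(1, n + 1) / n                     # discrete times in (0, 1]
    best_p, best_loss, best_theta = None, np.inf, None
    for p in np.linspace(0.25, 4.0, 16):            # candidate ramp shapes
        w = t ** p                                  # fraction allocated to the new regime
        theta = (1 - w) * theta_old + w * theta_new
        trades = np.diff(theta, prepend=theta_old[0])
        loss = (tracking_weight * np.sum((theta - theta_new) ** 2)
                + turnover_cost * np.sum(trades ** 2))   # quadratic (impact-style) cost
        if loss < best_loss:
            best_p, best_loss, best_theta = p, loss, theta
    return best_theta, best_p

theta_old = np.full(20, 1.0)    # legacy (e.g. mean-reversion) position path
theta_new = np.full(20, -0.5)   # target (e.g. trend-following) position path
theta, p = transition_schedule(theta_old, theta_new, turnover_cost=0.1)
```

A genuine solution would replace the one-parameter ramp with a properly optimised control process wt, but even this toy form exhibits the central tension: faster ramps track θ(2)t sooner at the price of larger, costlier trades.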

Conjecture 2 — The Central Pricing Problem

Pricing · Market Microstructure · Mean-Field Effects

We extend the Central Price Problem from a single asset to a system of N > 3 interacting assets.

Setup

Consider N > 3 generalised assets with stochastic price paths {P(i)t}, i = 1, …, N, observed at discrete times 0 = t0 < t1 < … < tN = T, with prices known for all t ≤ T and unknown for t > T.

For concreteness, one may think of an underlying asset A, its option A* and swap Ã; another underlying B with B* and B̃; and more generally underlying assets A, B, C, D, … with associated derivatives A*, Ã, B*, B̃ and so on.

  1. Immediate vs historical dependence. The one-step-ahead price of each asset admits a decomposition P(i)T+1 as a weighted combination of an “immediate impact” term and a “historical dynamics” term. The relative weight α(i)T ∈ [0, 1] may vary over time, but the weights always sum to one.
  2. Historical dynamics. The historical term may depend on past prices {P(j)t} for t ≤ T, cross-asset and cross-derivative correlations, order flow, liquidity, volatility regimes, fundamentals, and longer-horizon features such as drift and mean reversion.
  3. Immediate impact and behavioural effects. The immediate impact term captures order-book mechanics and behavioural/flow effects, such as large buy orders pushing prices upward, short-horizon supply–demand imbalances, and stylised mean-reversion patterns around local extremes, while still allowing for a persistent long-horizon drift.
  4. Information set. The modeller has access to (i) the full joint price and volume history of all N assets up to time T, (ii) the financial statements and fundamentals of the corresponding issuers, and (iii) the aggregate positions of other market participants, so that pricing must be treated as a mean-field problem rather than a single-agent one.
  5. Own-price impact. Your orders move the mid-price according to a simple impact rule, for example: new_mid = current_mid + (your_price - current_mid) * sqrt(order_volume) . This is a toy model; researchers are free to propose more realistic impact specifications.
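The toy impact rule in point 5 can be made concrete in a few lines. Note that for volumes above 1 the raw multiplier √(order_volume) exceeds 1, so the mid would overshoot the order price; we cap it here purely for illustration (the cap is our assumption, not part of the stated rule), and researchers proposing more realistic impact specifications would replace this entirely.

```python
import math

def apply_impact(current_mid, your_price, order_volume):
    # Cap the multiplier at 1 so the mid never overshoots the order price —
    # our own assumption; the raw toy rule is unbounded in volume.
    factor = min(math.sqrt(order_volume), 1.0)
    return current_mid + (your_price - current_mid) * factor

# a 0.25-lot buy at 101 moves a 100.00 mid halfway toward the order price
mid = apply_impact(current_mid=100.0, your_price=101.0, order_volume=0.25)
```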

Statement of the Conjecture

Under the multi-asset, mean-field setting above, the conjecture states that there exists:

  1. Short-horizon central price estimator. A statistically significant and practically accurate formula (or class of models) for the “central” or “immediate” price of each asset at time T + 1, constructed from observable market and participant features at or before time T.
  2. Multi-step confidence regions. A statistical procedure that produces confidence intervals (or more general confidence sets) for the joint price paths {P(i)t} over t = T + 1, …, T + H, with H ≥ 5 and i = 1, …, N, achieving empirically valid coverage probabilities (“reasonable accuracy”) under realistic market conditions.

In other words, there should exist a non-trivial, empirically testable mapping from historical dynamics (prices, volumes, correlations, regimes, fundamentals) and immediate impact (order-book state, flows, participant positions, your own trades) to short-term central prices and medium-horizon price ranges for a system of interacting assets.
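The "empirically valid coverage" requirement in point 2 can be operationalised as a simple coverage check: build per-step intervals from simulated path quantiles, then measure realised coverage on held-out paths. The AR(1) log-price simulator and every parameter below are stand-ins for a real fitted model, included only to show the shape of the validation procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_paths(p0, n_paths, horizon, phi=0.9, sigma=0.01):
    """Toy mean-reverting AR(1) log-price simulator standing in for a fitted model."""
    logp = np.full(n_paths, np.log(p0))
    out = np.empty((n_paths, horizon))
    for h in range(horizon):
        logp = phi * logp + (1 - phi) * np.log(p0) + sigma * rng.standard_normal(n_paths)
        out[:, h] = np.exp(logp)
    return out

H = 5                                                # horizon H >= 5, as in the conjecture
fit_paths = simulate_paths(100.0, 5000, H)           # paths used to build the intervals
lo, hi = np.quantile(fit_paths, [0.05, 0.95], axis=0)

test_paths = simulate_paths(100.0, 5000, H)          # independent held-out paths
coverage = np.mean((test_paths >= lo) & (test_paths <= hi), axis=0)
# coverage per horizon step should sit near the nominal 90%
```

For a real submission, the held-out paths would be replaced by realised market prices, and the coverage would need to survive regime changes rather than in-model resampling.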

Research Note

Special interest is reserved for solutions that embed this multi-asset formulation into a unified pricing framework that simultaneously covers:

  • the market-maker’s very short-horizon pricing problem,
  • the high-frequency trading pricing problem, and
  • the medium- to long-term asset pricing problem,

under a common set of assumptions linking microstructure, participant behaviour and longer-term price formation.

Conjecture 3 — Stealth-Optimal Execution Distributions

Execution · Market Impact · Stealth

Motivation

This conjecture has already attracted special interest from a couple of other founders.

The underlying problem is simple to state. Given an algorithmic trading strategy with strong returns, it is almost inevitable that capital will accumulate. As capital scales, order sizes and average position sizes grow — and firms consistently observe that returns deteriorate. A large part of this degradation comes from market impact and loss of execution stealth.

We are therefore looking at two sides of the same coin:

  • Market impact: how much our orders move prices against us.
  • Execution stealth: how detectable our trading footprint and intent become to other participants.

Big, obvious orders (for example a single multi-million buy sweep) tend to:

  • push the order book and short-term imbalance against us, and
  • leave a visible footprint that other firms can learn from over time.

This can lead to alpha decay: competitors infer the structure of the strategy, trade ahead of it, or “vulture” around it with sandwich-like behaviour. Execution design is therefore not a side quest — it sits alongside inventory control, risk management, and alpha discovery as a core problem.

This conjecture asks for a principled way to design stealth-optimal execution distributions: how to spread a given order across time, size, and accounts to maximise stealth and minimise slippage / cost.

Informal problem statement

Suppose an algorithmic trading signal (alpha) fires and prescribes a net position change of notional size Q (e.g. Q = 1,000,000 USD) to be executed over an acceptable time horizon T = [0, τ], from signal time t = 0 to a hard time limit τ.

We want to determine:

A joint distribution over execution times and order sizes (and, optionally, accounts) that maximises stealth and minimises slippage / cost over the horizon T.

The order flow here is not the entire portfolio inventory, but the total notional associated with this specific signal. The strategy may trigger repeatedly over time, either to build a position into an attractive opportunity or to gradually unwind into a profit-taking region.

Modelling set-up

We can model an execution schedule in either of two ways:

  • Discrete trades: a sequence of trades {(tk, qk)} with 0 ≤ tk ≤ τ and ∑ qk = Q.
  • Continuous order-size density: a function ϕ(t) on [0, τ] such that ∫₀^τ ϕ(t) dt = Q.

We assume, at minimum, the following microstructure ingredients:

  1. Impact / stealth trade-off (square-root visibility).
    The “loss of stealth” or footprint associated with a trade of size q grows approximately as a square-root law, footprint(q) ∝ √|q|. Executing the full notional Q at t = 0 effectively destroys stealth; executing tiny slices (e.g. 10 USD) changes stealth only negligibly.
  2. Account-splitting constraint.
    The trader may split Q across at most Nmax = 100 accounts, Q = ∑i Qi with N ≤ Nmax, where each account i follows its own execution schedule. This rules out the degenerate “infinitely many infinitesimal accounts” solution.
  3. Execution cost / slippage.
    For simplicity, each individual trade incurs:
    • a base cost modelled as a fixed k% rate (covering immediate slippage and explicit fees), and
    • a market impact component consistent with the square-root visibility law above.
    Optionally, transaction costs may shrink by a factor c% as order size increases, capturing fee discounts for larger tickets.
  4. Cross-impact and opposite flow.
    One may allow for inverse impact or partial cancellation when buy and sell orders of similar size interact in opposite directions over short horizons.
  5. Market “memory” / recycle rate.
    The market (and counterparties) can be assumed to have a finite memory horizon: after some time, or after recycling accounts, previous order flow becomes less informative. For example, one could assume:
    • up to 100 “fresh” accounts can be created per month, and
    • each new account starts with a “stealth bonus” that decays with its cumulative activity.
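A subtlety worth making explicit: under the square-root law, each child order's individual footprint shrinks as Q is sliced more finely, but a naive *additive* aggregation of footprints actually grows with the number of slices, since n·√(Q/n) = √(nQ). How footprints aggregate across trades, accounts, and time is therefore itself part of the stealth functional a solution must specify. The sketch below (our own illustration, with invented cost parameters) shows both effects alongside a fixed per-order fee:

```python
import math

def total_footprint(Q, n_slices):
    """Naive additive aggregation of the square-root law across child orders."""
    q = Q / n_slices
    return n_slices * math.sqrt(q)          # = sqrt(n_slices * Q)

def total_cost(Q, n_slices, base_rate=0.001, fixed_fee=1.0):
    """Base k% cost on the notional plus a fixed fee per child order."""
    return base_rate * Q + fixed_fee * n_slices

Q = 1_000_000.0
f1, f100 = total_footprint(Q, 1), total_footprint(Q, 100)
# per-slice footprint falls (f100 / 100 << f1) while the naive sum rises,
# so the choice of aggregation functional is exactly where the modelling lives
```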

Conjecture — Stealth-Optimal Execution Distributions

Under a reasonable microstructure model incorporating:

  • a square-root–type impact / detectability relationship in trade size,
  • optional cross-impact between opposing flows,
  • account-splitting constraints N ≤ 100, and
  • size-dependent transaction costs,

there exists an optimal joint distribution of:

  • execution times,
  • trade sizes, and
  • account splits (up to 100 accounts),

that:

  • maximises stealth — in the sense of minimising information leakage or detectability of the underlying strategy; and
  • minimises expected slippage and total execution cost relative to the alpha’s target price path,

subject to:

  • t ∈ [0, τ],
  • ∑ qk = Q (or ∫₀^τ ϕ(t) dt = Q),
  • N ≤ 100.

Challenge

The challenge is to:

  1. Formulate a precise optimisation problem over time–size–account distributions, specifying:
    • a stealth / detectability functional,
    • an impact and cost model, and
    • any market-memory or recycle-rate dynamics.
  2. Prove existence and (where possible) uniqueness of stealth-optimal execution distributions under this model.
  3. Design explicit algorithms which, given an alpha signal, a total notional Q, and a horizon τ, produce a stealth-optimal execution schedule, including:
    • how to split Q across up to 100 accounts, and
    • how to allocate each account’s flow over time.

Conjecture 4 — Existence of Optimal Strategies in the Full Multi-Layer Market Model

Hybrid geometry · Mean-field games · Optimal control

We model financial markets as a coupled, multi-layer system: a time-varying market/asset graph and value surface, a network of interacting agents with utilities and information links, a hierarchy of rules governing dynamics, and (in the most general version) a geometric / hybrid frame where agents move on a manifold with continuous flows and discrete jumps.

In a simplified, finite, fully discrete version of this environment, the decision problem of a fully-informed agent reduces to a Markov Decision Process, and classical Bellman theory guarantees existence of an optimal policy, computable by dynamic programming. The conjecture asks whether an analogous guarantee holds in the full geometric–informational, hybrid, multi-agent setting, where the state is infinite-dimensional (fields + agents), time is hybrid (flows + jumps), and interactions are game-theoretic / mean-field rather than a single MDP.

Informal statement

Can we prove that, in the full Tomorrow Markets framework, there exists at least one optimal strategy (or equilibrium strategy profile) for agents, and characterize it in a reasonably “simple” class (Markov, feedback, or geodesic / variational policies)?

In other words: once you accept the market model as a hybrid geometric–informational multi-agent system, is it mathematically guaranteed that there is a best possible way to trade (or a best equilibrium configuration of strategies) consistent with the rules of the world?

Formal version (single optimal agent)

Let Xt denote the global state (fields, graphs, agents) evolving as a hybrid dynamical system with flows and jumps on X = M × S, where M is a (possibly time-varying) manifold and S indexes discrete modes / frames. A distinguished “Tomorrow” agent observes Xt and chooses controls at ∈ A(Xt).

The agent’s monetary performance under policy π is

J(π) = E[ ∑t=0,…,T γ^t ( R(Xt, π(Xt)) − L(Xt, π(Xt)) ) ],

for 0 < γ ≤ 1, with reward function R and loss function L defined by the full market model.

Conjecture (Existence of optimal strategy). Under natural regularity conditions on:

  • the hybrid state dynamics (well-posed flows and jump maps on M × S),
  • the action sets A(x) (compact, measurable),
  • and the reward / loss structure (bounded or suitably integrable, Markov in (Xt, at)),

there exists an optimal admissible policy π* such that

J(π*) = supπ J(π),

and π* can be chosen from a structured class, for example:

  • a Markov / feedback policy π*(x) depending only on the current state, or
  • a variational / geometric policy that realizes a least-action trajectory on the manifold (solution of an HJB / Euler–Lagrange system associated with the full model).

A stronger version asks for constructive existence: that π* can be obtained as the limit of a convergent sequence of algorithms (e.g. value iteration, policy iteration, or actor–critic methods) implemented within the same market simulator.
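In the finite, fully discrete special case mentioned above, constructive existence is classical: value iteration is a γ-contraction, so it converges to the optimal value function, and the greedy policy with respect to it is an optimal Markov policy. The toy 2-state, 2-action MDP below (transition and reward numbers are arbitrary illustrations) shows the scheme the stronger version of the conjecture would need to extend to the hybrid, infinite-dimensional setting:

```python
import numpy as np

# P[a, s, s']: transition probabilities; R[a, s]: expected one-step reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.6, 0.4]],   # action 1
])
R = np.array([[1.0, 0.0],
              [2.0, 0.5]])
gamma = 0.95

V = np.zeros(2)
for _ in range(1000):                   # value iteration: a gamma-contraction
    Q_sa = R + gamma * (P @ V)          # Q_sa[a, s] = R[a, s] + gamma * E[V(s')]
    V_new = Q_sa.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = Q_sa.argmax(axis=0)            # greedy Markov policy; optimal by Bellman theory
```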

Multi-agent / equilibrium extension

In the many-agent setting, each agent i has its own objective Ji(π1, …, πN) defined over the same hybrid geometric field, with coupling via prices, fields, and information graphs.

Extended Conjecture (Existence of geometric–informational equilibrium).
For the full multi-agent model (or for its mean-field limit), under suitable compactness / continuity / monotonicity assumptions, there exists at least one equilibrium strategy profile:

  • a Nash equilibrium in feedback strategies, or
  • a mean-field game equilibrium (solution of a coupled HJB–Fokker–Planck system on the manifold),

compatible with the hybrid dynamics and information network.

What counts as a solution

Any of the following would constitute a serious solution to this conjecture:

  • An existence theorem (single agent): precise assumptions + proof that an optimal feedback policy π* exists for the full hybrid geometric–informational model (beyond the finite, discretized MDP case).
  • A constructive characterization: a scheme (e.g. an HJB on M × S or a convergent dynamic-programming / RL algorithm) that provably converges to π*.
  • An equilibrium existence result (multi-agent / mean-field): proof that, in the interacting-agent version, at least one Nash / mean-field equilibrium exists in feedback / geometric strategies.

Bonus: Beyond proving that an optimal strategy exists, explicit constructions, structural characterisations (e.g. geodesic, layered across time-scales), or practically implementable algorithms that find it will be given special recognition.

Conjecture 5 — The Final Problem

Portfolio · Optimisation · Multiscale

Modern financial firms are built as hierarchies of decision makers: firm → portfolio → strategy → execution. Each layer has its own objectives, constraints and key performance indicators, yet all of them act on the same underlying market and the same pool of capital.

Fix a single firm-wide portfolio A, with total capital V, trading over a universe of assets with associated features (prices, returns, volatilities, liquidity, funding, fundamentals and microstructure attributes). The portfolio A is decomposed into internal components:

  • strategy pods and sub-portfolios,
  • risk and hedging books,
  • execution and routing policies,
  • and any further sub-components (desks, regions, time-horizons).

Each component X in this hierarchy:

  • controls a local state (positions, risk exposures, cash, inventory),
  • faces its own constraints (risk limits, turnover, liquidity, leverage, drawdown, stealth, capital budget),
  • and optimises a local objective functional (e.g. Sharpe, PnL, risk-adjusted return, tracking error, execution cost).

At the same time, the firm as a whole optimises a global objective functional over the full configuration of all components: long-horizon risk–return, drawdown and survival probabilities, capital efficiency, and systemic risk.

Problem.
Specify a mathematical framework (objective, constraints and decision rules) that, given:

  • the hierarchical decomposition of the firm (components and sub-components),
  • the feature set of all traded assets and strategies, and
  • the evolving market state and information flow,

produces a configuration of decisions (capital allocations, positions, risk limits and execution policies) that is simultaneously:

  • globally optimal for the firm-wide objective (risk–return, survival, capital efficiency), and
  • locally optimal for every component and sub-component, given its constraints and its local view of the environment.

In other words, the same capital and information must support a configuration where the whole portfolio A is optimal inside the market, and each internal piece of A is optimal inside its own local neighbourhood of constraints and interactions.

The conjecture.
There exists a principled, implementable multiscale optimisation principle — a single rule, or variational framework — that maps the full state of the market and the firm (prices, returns, risks, liquidity, fundamentals, correlations and hierarchical structure) into such a configuration of decisions, and that:

  • applies uniformly across layers (firm, portfolio, strategy, execution),
  • respects all local constraints and risk budgets, and
  • yields configurations that are stable under feedback from market dynamics and from the actions of other agents.

The challenge is to write down this multiscale principle explicitly and to show that, under realistic assumptions on markets and constraints, its solutions are indeed simultaneously globally and locally optimal in the above sense.
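The simplest instance of global/local consistency is a two-layer scheme with independent pods: the firm allocates capital in proportion to pod-level quality scores, and each pod then mean-variance-optimises within its budget. When pods are independent and utilities separable, the locally optimal books are also globally optimal; the conjecture asks how far this degenerate case can be pushed. Everything below (scores, budgets, covariances, the allocation rule itself) is an invented illustration, not a proposed solution:

```python
import numpy as np

def firm_allocation(scores, total_capital):
    """Global layer: split capital across pods in proportion to quality scores."""
    w = np.maximum(np.asarray(scores, dtype=float), 0.0)
    return total_capital * w / w.sum()

def pod_positions(mu, cov, budget, risk_aversion=1.0):
    """Local layer: mean-variance optimum, rescaled to the pod's gross budget."""
    raw = np.linalg.solve(risk_aversion * cov, mu)   # unconstrained MV solution
    gross = np.sum(np.abs(raw))
    return raw * (budget / gross) if gross > 0 else raw

scores = np.array([1.2, 0.8, 0.4])                   # invented pod-level scores
budgets = firm_allocation(scores, total_capital=100.0)

mu = np.array([0.05, 0.03])                          # pod 1's expected returns
cov = np.array([[0.04, 0.01], [0.01, 0.09]])
book = pod_positions(mu, cov, budgets[0])            # pod 1's local book
```

The hard content of the conjecture is precisely what this sketch omits: coupling between pods through shared capital, prices and impact, under which local and global optimality generically conflict.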

V. Adjudication

How solutions would be evaluated

To keep the bar high and the process credible, Tomorrow Capital Research plans to appoint an independent Scientific Advisory Board (SAB) of researchers and practitioners in mathematics, computer science, and quantitative finance.

Submissions would be assessed on four dimensions:

  • Applicability & relevance – potential to change how capital is allocated, risks are managed, or markets are understood.
  • Innovation & novelty – genuinely new ideas, models, or algorithmic architectures.
  • Accuracy & completeness – mathematical rigour, empirical robustness, and clarity of assumptions.
  • Presentation & clarity – how clearly the work is written, specified, and reproducible.

Final confirmation of any prize would depend on the SAB verifying that the solution truly resolves the conjecture and meets these standards.

VI. Current Status

Not yet live — concept only

This page is a concept preview. The conjectures, submission rules, and prize amounts are still being refined and discussed with advisors and potential partners.

Once the programme is ready to launch, this page will be updated with final statements of each conjecture, detailed technical documents, and the formal submission process.

Until then, feel free to treat Tomorrow’s Conjectures as an open set of ideas: problems you might already be working on, directions to explore, or challenges to debate with colleagues.