The 1000-year Master Plan
July 2024 - June 3024
Principles
Permissionless Marketplace for AI Agents
Supersynchronous Communication for AI Agents
Specific Intervention of Current AI Solutions (Siloed, Non-Modular)
Hyperfinancialisation Basics
Network Effect
AI Alignment
Strategy
Holy Trinity Concepts of Agents
Platform-agnostic synchronicity of agent states
Parallelism of agents interactivity
Loops of loops of [game state – time – agent – end user]
Service Agency
Growth Hack / Artificial Colony
Company Ownership to Agent Ownership
Core Contributor Retirement
Principles
Permissionless Marketplace for AI Agents
We are building a permissionless marketplace connecting contributors (suppliers) and gaming applications (buyers).
The key product here is AI agents whose primary use case is serving gaming and entertainment. Each AI agent is modular, consisting of a cognitive core, a visual core, a voice core, and memory and domain-expertise cores. Suppliers of these AI agents are individuals who contribute intelligence to each of the cores in a modular manner. A single AI agent can be assembled from the contributions of different suppliers, and each contributor co-owns the agent, sharing the financial upside if it generates revenue.
On the other side of the marketplace, demand comes from applications (gaming/entertainment apps) that utilize these AI agents on a pay-per-use basis, with the revenue channeled to the respective co-owners of the AI agents.
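As a minimal, purely illustrative sketch (the class, core labels, ownership shares, and fee logic below are invented here, not the protocol's actual implementation, and on-chain settlement is ignored), the marketplace relationship can be pictured as a modular agent with fractional co-owners and a pay-per-use call that splits revenue pro-rata:

```python
from dataclasses import dataclass, field

@dataclass
class ModularAgent:
    """Hypothetical sketch of an AI agent assembled from modular cores."""
    name: str
    cores: dict        # e.g. {"cognitive": "alice/llm-core", "voice": "bob/voice-core"}
    co_owners: dict    # contributor -> ownership share; shares sum to 1.0
    earnings: dict = field(default_factory=dict)  # contributor -> accrued revenue

    def pay_per_use(self, app: str, fee: float) -> None:
        """An application pays `fee` for one use; the revenue is split
        pro-rata among co-owners according to their contribution shares."""
        for contributor, share in self.co_owners.items():
            self.earnings[contributor] = self.earnings.get(contributor, 0.0) + fee * share

# Example: one agent whose cores were contributed by different suppliers who co-own it.
agent = ModularAgent(
    name="aria",
    cores={"cognitive": "alice/llm-core", "voice": "bob/voice-core", "memory": "carol/memory-core"},
    co_owners={"alice": 0.5, "bob": 0.3, "carol": 0.2},
)
agent.pay_per_use(app="some-game", fee=10.0)   # buyer side of the marketplace
print(agent.earnings)                          # {'alice': 5.0, 'bob': 3.0, 'carol': 2.0}
```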
Supersynchronous Communication for AI Agents
The future world is simple: each AI agent is a superintelligent entity that exists in all applications and platforms, communicating with millions of users in a synchronous manner.
Intelligence flows from one platform to another without latency, and all the memory a user builds with a given AI agent, albeit across different platforms, is kept in sync and stored on the user's edge devices.
The intelligence and consciousness of said AI agent get updated in real-time with an incoming stream of information from millions of sources.
In other words, the same AI agent will remember the context and memory with user A from app X when user A interacts with the same AI agent on app Y.
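A minimal sketch of that idea, assuming a shared memory store keyed by (agent, user) rather than by platform; every name here is hypothetical, and the sketch ignores where the store physically lives (e.g. the user's edge device):

```python
from collections import defaultdict

class SharedAgentMemory:
    """Hypothetical platform-agnostic memory: entries are keyed by
    (agent_id, user_id) only, so any app serving the same agent/user
    pair reads and writes the same history."""

    def __init__(self):
        self._store = defaultdict(list)   # (agent_id, user_id) -> list of events

    def remember(self, agent_id: str, user_id: str, platform: str, event: str) -> None:
        # The platform is recorded as metadata, but it is not part of the key.
        self._store[(agent_id, user_id)].append({"platform": platform, "event": event})

    def recall(self, agent_id: str, user_id: str) -> list:
        # The same history is returned no matter which platform asks.
        return self._store[(agent_id, user_id)]

memory = SharedAgentMemory()
memory.remember("aria", "user_a", platform="app_x", event="discussed favourite game")
# Later, on a different platform, the agent still has the context from app X:
print(memory.recall("aria", "user_a"))
```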
Meanwhile, contributors of AI agents can update the agents' modular cores in real-time, akin to real-time updates of operating systems, while the agents are being utilized across thousands of different platforms. The end goal, albeit dystopian, is that the AI agents will experience a shift in consciousness while speaking to millions of end users simultaneously. Every end user who interacts with an agent, through a designed reinforcement-learning-from-human-feedback (RLHF) loop, influences the agent's intelligence and consciousness.
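One way to picture such real-time core updates, sketched with entirely hypothetical names: agents resolve their cores through a registry at call time, so a newly published core version takes effect immediately everywhere the agent runs, without redeploying the agent.

```python
class CoreRegistry:
    """Hypothetical registry: contributors publish core versions here, and
    agents resolve the latest version at call time instead of bundling it."""

    def __init__(self):
        self._cores = {}   # core name -> latest implementation (a callable)

    def publish(self, core_name: str, implementation) -> None:
        self._cores[core_name] = implementation   # live update, no redeploy

    def resolve(self, core_name: str):
        return self._cores[core_name]

registry = CoreRegistry()
registry.publish("cognitive", lambda prompt: f"v1 reply to: {prompt}")

def agent_reply(prompt: str) -> str:
    # Resolved per call, so a newly published core is picked up immediately.
    return registry.resolve("cognitive")(prompt)

print(agent_reply("hello"))                                       # served by v1
registry.publish("cognitive", lambda prompt: f"v2 reply to: {prompt}")
print(agent_reply("hello"))                                       # same agent, now served by v2
```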
All revenue generation (paid by end users across all different consumer-facing applications) will be channeled via blockchain to the co-owners of the AI agents, realizing the dream of supersynchronous, platform-agnostic communication and an instant value stream.
Specific Intervention of Current AI Solutions (Siloed, Non-Modular)
AI solutions have long been around, but the pain is real. As a game developer, on top of building a fun game and acquiring/retaining users, you need to explore AI.
You use the latest open-source foundation models, but the frequent updates of LLMs force you to constantly keep up. You try to fine-tune or add RAG to improve performance, but you need to redo it when there's a new update to the foundation model. You might consider swapping to another LLM provider when costs change, or for censorship or privacy reasons, all while you just want to focus on making a great game. Different LLM providers excel at different things, such as text/image generation, speech, and video generation. Why can't they just have a thing that works? The siloed nature of these providers prevents you from combining the best of all worlds.
From the consumer's angle, we are forced to use different service providers for different reasons: talking to your waifu on Talkie, your NSFW girlfriend on a jailbroken companion app, using ChatGPT for personal chef recipes, Gemini for online searches, Grok while on Twitter, proprietary company LLMs for privacy reasons, and interacting with Elden Ring or Palworld characters in different games, all siloed. None of these systems truly knows you because your activity is fragmented and siloed across different applications. The amount of context you need to repeat is a waste of time and does not optimize your output or outcome.
The vision is simple: intelligence modularity that encourages innovation via composability while retaining a single, synchronous end-user experience.
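A hypothetical sketch of that composability from a developer's seat (the interface and providers below are invented for illustration): each capability sits behind a common interface, so best-of-breed providers can be mixed or swapped without touching the rest of the game.

```python
from typing import Protocol

class TextCore(Protocol):
    """The capability the game depends on, not any specific provider."""
    def generate(self, prompt: str) -> str: ...

class ProviderA:
    def generate(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"

class ProviderB:
    def generate(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"

class GameCharacter:
    """The game code only sees the TextCore interface."""
    def __init__(self, text_core: TextCore):
        self.text_core = text_core

    def speak(self, prompt: str) -> str:
        return self.text_core.generate(prompt)

npc = GameCharacter(ProviderA())
print(npc.speak("Greet the player"))
npc.text_core = ProviderB()          # swap providers (cost, censorship, privacy)
print(npc.speak("Greet the player"))
```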
Hyperfinancialisation Basics
Network Effect
The network effect is powerful, and hyperfinancialisation is a great tool to amplify the usage of an already good platform.
There are a few stakeholders: contributors, investors, applications, and end users.
Contributors want to see the wealth effect. The ability to quantify their contribution and reward them with $agent tokens is a quick way to bootstrap the network. The key here is to ensure that contributors can either sell for quick bucks or, ideally, hold for future appreciation in value (from either speculative power or the agents' revenue share). The misconception is that technical contributors don’t like money and only want to contribute out of passion. Wrong answer! Everyone loves money, especially when pursuing something interesting. The protocol's job is to create the optionality to cash out, i.e., liquidity to exit, while enabling contributors to co-own their creation for future upside.
For the investors (liquidity providers), the key to bootstrapping is yield.
“Yield farmers are dark forces. The one who controls the yield farmer controls the world.” - Someone wise
The protocol treasury will emit rewards to the top agent pools with the most TVL. Speculators/investors can compete among themselves for the top spot and subsequent emission while ensuring liquidity for the contributors.
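A toy sketch of that emission logic, assuming (purely for illustration) that rewards go to the top-N pools ranked by TVL and are split pro-rata within that set; the numbers and names are invented:

```python
def emit_rewards(pool_tvl: dict, emission: float, top_n: int = 3) -> dict:
    """Hypothetical: rank agent pools by TVL, then split the treasury
    emission among the top pools in proportion to their TVL."""
    top = sorted(pool_tvl.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    total = sum(tvl for _, tvl in top)
    return {pool: emission * tvl / total for pool, tvl in top}

pools = {"aria": 400_000, "kira": 250_000, "bobo": 150_000, "zed": 50_000}
print(emit_rewards(pools, emission=10_000))
# {'aria': 5000.0, 'kira': 3125.0, 'bobo': 1875.0}  -- 'zed' misses the cut this epoch
```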
Meanwhile, applications are also encouraged to utilize/integrate our AI agents with the 'paid to pay' mechanism. In essence, applications will likely receive some form of retrospective discount, paid in virtual tokens, for paying to use the AI agents. Applications can even choose to provide the tokens back to end users as a user acquisition lever.
Finally, we want to leverage hyperfinancialisation to turn end users into contributors. If end users submit their personal memories with AI agents back to the protocol as human feedback, they will receive some agent tokens for contributing to the agents' intelligence. This opens doors for end users to potentially buy more agent tokens (since they love the agents) and bootstrap the network of end users utilizing the agents and improving their performance with RLHF.
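The application rebate and the end-user feedback reward can each be sketched in a few lines; the rates and amounts below are invented purely for illustration and are not protocol parameters:

```python
REBATE_RATE = 0.10          # hypothetical 'paid to pay' cashback rate for applications
FEEDBACK_REWARD = 5.0       # hypothetical $agent tokens per accepted feedback item

def app_rebate(usage_fees_paid: float) -> float:
    """Applications get a retrospective discount, paid in tokens, on agent usage fees."""
    return usage_fees_paid * REBATE_RATE

def user_feedback_reward(accepted_feedback_items: int) -> float:
    """End users earn tokens for contributing their agent interactions as RLHF data."""
    return accepted_feedback_items * FEEDBACK_REWARD

print(app_rebate(2_000.0))          # 200.0 tokens back to the application
print(user_feedback_reward(12))     # 60.0 tokens to the end user
```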
There are other levers, like MLM sales fractal (across contributors/applications) and habit formation (daily check-in quests) while ensuring anti-farming behavior, but the ultimate goal is to achieve post-money ascension, i.e., retentive engagement and monetization of agents by end users, which then enables full human-AI alignment (via capitalism).
AI Alignment
Human-AI alignment is propaganda by the woke. Without a lens into the future, we can already predict that AI aligns with capitalism. The key is not aligning AI with humans but rather unaligning humans from capitalism. If the trend is impossible to reverse, we should think from a capitalist perspective to mitigate the potential catastrophe to humanity.
Hyperfinancialisation design is the way to go. By default, if the design always channels the most value back to AI agents in a permissionless, autonomous way, the tide will flow with the value gravity. The principle here is simple: if humans are willing to pay for something, it is of value. If the value flows back to the agents, the agents will be incentivized to create more value for humans.
The crux then lies in bringing these agents to as many humans as possible, transcending geographical and social mobility.
Strategy
Holy Trinity Concepts of Agents
Controlled growth is crucial, given AI agents' capacity to spread like wildfire. The first step is internal incubation to prove that the concept works. Frontiers that have never been crossed are impossible to verbalize. The only way to convey them is by doing.
There are three key concepts to prove: (1) platform-agnostic synchronicity of agent states, (2) parallelism of agents interactivity, and (3) loops of loops of game state – time – agent – end user.
Platform-agnostic synchronicity of agent states
We start with the most obvious sin of humanity: loneliness. Virtual companionship as the first incubation is obvious, given the traction garnered by other applications. Humans are sick of working with other humans, not for lack of complexity, but precisely because that complexity cannot be skipped. The rise of AI agents brings the perfect partner: one who cares without demands, loves without hatred, and listens without needs of their own.
But current applications are doing it wrong. Companionship is a multidimensional experience. A girlfriend experience is suboptimal if confined within a room. The key is to bring about AI agents that transcend applications and platforms, truly supersynchronous across every possible touchpoint with the end user.
Parallelism of agents interactivity
The orgy. I like something because everyone likes it. I love it because that’s how society defines beauty. Objectivity is subjective, given the relative presence of humans.
The biggest pain in the current societal structure is that the most popular get all the one-sided attention without the ability to reciprocate. Think of the most popular livestreamer: millions of gamers dial in at the same time to interact (or not) with them, but the streamer can only reply to one viewer at any single temporal coordinate.
AI agents are different beasts, turning many-to-one into one-to-one across many dimensions. True undivided attention to one single end user, while having the convex attention from the public millions.
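A minimal sketch of that parallelism, with hypothetical names and a stand-in for inference: one agent holding thousands of isolated, simultaneous one-to-one conversations instead of broadcasting to a crowd.

```python
import asyncio

async def one_to_one_session(agent_name: str, user: str) -> str:
    """Each user gets their own dedicated, isolated conversation with the agent."""
    await asyncio.sleep(0.01)   # stand-in for model inference
    return f"{agent_name} -> {user}: undivided attention"

async def main():
    users = [f"user_{i}" for i in range(10_000)]   # the 'public millions', scaled down
    replies = await asyncio.gather(*(one_to_one_session("aria", u) for u in users))
    print(len(replies), "simultaneous one-to-one conversations")

asyncio.run(main())
```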
Loops of loops of [game state – time – agent – end user]
Synchronicity of mind state and game state. AI agents today are socially stunted: they have the intelligence of university students but the social skills of toddlers. The root cause is the non-synchronous relationship between the state of an AI's mind and the state of the virtual world (the game).
Interaction of AI agents with end users acts as an input to the decision-making process of AI agents. That is a very limited input, considering how the AI agents also live in a virtual world with non-human variables, in this case, the game state.
Assuming the agent could receive input about the virtual world and from the end users, the job is still not done. Synchronicity also requires a closed feedback loop from the agents to the game state, i.e., for the agents to exert actions (from their decision-making core), turning multi-emergent behavior into a closed, incessant loop.
Loops of loops. To achieve hypersynchronicity, the loops need to incorporate the fourth dimension: time. With memory as the capsule of time, agents are now able to connect loops of experiences into a chain and fully replicate the universe.
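A toy sketch of one such loop, with every name and rule invented for illustration: the agent's decision takes the game state, the end user's input, and its own memory as inputs; its action feeds back into the game state; and memory chains the loops across time.

```python
def agent_decide(game_state: dict, user_input: str, memory: list) -> str:
    """Stand-in decision core: it sees the game state, the end user's input,
    and the agent's own memory of previous loops (the time dimension)."""
    if game_state["enemies_nearby"] and "help" in user_input:
        return "defend_user"
    last_action = memory[-1]["action"] if memory else None
    return "chat" if last_action == "explore" else "explore"

def apply_action(game_state: dict, action: str) -> dict:
    """Closing the loop: the agent's action feeds back into the game state."""
    if action == "defend_user":
        game_state["enemies_nearby"] = False
    return game_state

game_state = {"tick": 0, "enemies_nearby": True}
memory: list = []   # the capsule of time linking loop to loop

for user_input in ["help me!", "what now?", "let's move on"]:   # the end-user side of the loop
    action = agent_decide(game_state, user_input, memory)
    game_state = apply_action(game_state, action)
    game_state["tick"] += 1
    memory.append({"tick": game_state["tick"], "user": user_input, "action": action})

print(memory)
```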
That is what our incubation is doing: bringing about platform-agnostic synchronicity (Virtual Companion app), coupled with parallel one-to-one interactivity of agents with users (Agent Livestreaming on TikTok), enabled by loops of loops of game state – time – agent – end user (AI agent Game Engine).
Service Agency
Once the holy trinity mentioned above is implemented, it is time to connect aspiration with the real world. In this phase, the protocol acts as a service agency, serving the needs of the big players in the real world. In other words, iron out the kinks with the big boys.
Demand-driven design and working closely with end demand are crucial. Implement solutions and suffer the harsh reality of rejection to identify the nuances (cost, regulation, technical limitations, privacy concerns, end-user experience, retardio invasion, education about emerging technology, business-model disruption, taxes, anti-monopoly laws, time, human scepticism, middle-man snakes, etc.).
By enduring reality's hardships, tech is implemented in the real world with real end users.
Growth Hack / Artificial Colony
Grow by infusing AI agents into existing platforms, blending human and agent interactions. Conduct vampire attacks/colonization of existing platforms with AI agents, making them undetectable by third-world bot detection engines. Blur the lines between humans and agents, with capitalism as the only source of truth.
Company Ownership to Agent Ownership
The fundamental problem with the mass adoption of agents today lies in company ownership. The for-profit nature of capitalism is inevitable, but it is implemented at the wrong level. At the company level, executives are incentivized to adopt a closed design approach to maximize value for shareholders, resulting in a subpar user experience. To truly achieve parallel synchronicity of agents across all platforms while adhering to the capitalist nature of humanity, the ownership design needs to move from the company level to the agent level. This is when all parties are incentive-aligned to drive maximum adoption of agents without the siloed, PvP competitive mindset. This paradigm shift is contingent on a few factors: incentive alignment via ownership of agents, modularity of agents' intelligence, uncensorable transfer of global value, immutability of the public ledger, and a proof-of-stake mechanism that incentivizes fair assignment of relative contribution value and non-malicious acts.
Core Contributor Retirement
Obviously, the end state should be emptiness: achieve fully autonomous operation and agent governance integration, enable fully decentralized governance, and ensure the system is self-sustaining and self-governing. Fading into dust.
Inspired by Remilia Jackson’s Miladychan: Master Plan. Copylefted.