<aside> ⌛
Launching March 25th, 2PM EST on Sonic.
</aside>
<aside> 📢
Abyss is the first 1-of-1 trading card game (TCG) where every player embarks on a personalized and ever-evolving journey. Each participant is assigned a unique character name and clan linked to their wallet address, making their experience one-of-a-kind. The game blends digital collectibles, storytelling and AI innovation to create a truly immersive experience.
Built on Sonic, Abyss leverages cutting-edge generative AI to craft trading cards with unique environments, characters and narratives within them. This ensures that no two players share the same story or card collection, setting a new standard for trading card games and interactive storytelling, both on-chain and off.
Phase One of Abyss involves creating your character, minting 1-of-1 Abyss cards every 24 hours, gathering Aura points and experiencing the story.
Each Chapter contains a fixed supply of 3,200 card mints.
Character creation is closed at the beginning of each epoch and will only reopen after 12 hours if the Chapter does not mint out, giving new travellers a chance to enter the Abyss. (A rough sketch of this cadence appears just after this callout.)
Phase Two brings the implementation of actions with the cards you collect during the Genesis phase.
This includes a key feature we call the dominant trait. Each Character has a dominant trait that is shaped by the choices you make along your journey in the Abyss. This trait will play a big part in Phase Two.
Phase Two gives the cards utility and introduces mechanisms to reward our users. Our points system, Aura, will always remain the biggest priority. Those with Aura in the Abyss have power.
</aside>
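The Phase One cadence above boils down to a handful of constraints: a fixed Chapter supply, a 24-hour mint cooldown per wallet, and a 12-hour window before character creation reopens. The sketch below is purely illustrative; the constant and function names are assumptions and do not reflect the actual Abyss contracts or backend.

```python
# Hypothetical sketch of the Chapter cadence described above; constants and
# function names are illustrative, not the actual Abyss contract or backend API.
from datetime import datetime, timedelta

CHAPTER_SUPPLY = 3_200                  # fixed number of card mints per Chapter
MINT_COOLDOWN = timedelta(hours=24)     # one 1-of-1 card mint per wallet per 24 hours
CREATION_REOPEN = timedelta(hours=12)   # creation reopens if the Chapter hasn't minted out

def can_mint(last_mint_at: datetime | None, minted_in_chapter: int, now: datetime) -> bool:
    """A wallet may mint if Chapter supply remains and its 24-hour cooldown has elapsed."""
    if minted_in_chapter >= CHAPTER_SUPPLY:
        return False
    return last_mint_at is None or now - last_mint_at >= MINT_COOLDOWN

def creation_open(epoch_start: datetime, minted_in_chapter: int, now: datetime) -> bool:
    """Character creation closes at the start of each epoch and reopens after
    12 hours only if the Chapter has not minted out."""
    if minted_in_chapter >= CHAPTER_SUPPLY:
        return False
    return now - epoch_start >= CREATION_REOPEN
```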
<aside>
Abyss is the world's first multi-modal choose-your-own-adventure storytelling AI agent, with an LLM agent and a diffusion agent working in harmony. The architecture combines synchronous and asynchronous design. LLM inference is served synchronously, so outputs can be delivered rapidly and at scale; because this output is text-based, it can be cached, stored and embedded on-chain. The diffusion agent combines a templating layer with a Stable Diffusion LoRA model, consuming the structured output of the LLM agent to generate image content.
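As a rough illustration of that split, the sketch below serves the LLM response synchronously and hands the slower image work to a queue. The `llm_client`, the JSON shape and the job fields are all hypothetical, not the actual Abyss pipeline.

```python
# Minimal sketch of the sync/async split, assuming a hypothetical llm_client
# and in-process job queue; none of these names come from the Abyss codebase.
import json
import queue

image_jobs: "queue.Queue[dict]" = queue.Queue()  # consumed asynchronously by the diffusion worker

def handle_story_request(llm_client, card_params: dict) -> dict:
    """Serve LLM inference synchronously: the structured, text-based output is
    returned to the client immediately and is cheap to cache or embed on-chain."""
    structured = llm_client.complete(card_params)  # hypothetical synchronous call returning JSON text
    card_text = json.loads(structured)             # e.g. {"scene": ..., "narrative": ..., "style": ...}

    # The diffusion agent (templating + Stable Diffusion LoRA) is slower, so its
    # work is queued and rendered asynchronously from the same structured output.
    image_jobs.put({"template": card_text["scene"], "style": card_text["style"]})
    return card_text
```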
Image generation is tied directly to the blockchain: it is only initiated via the mint function, which is indexed and processed off-chain, making an asynchronous design the best fit for this part of the application. For model training and inference, the LLM agent uses instruction sets to handle separate tasks, which reduces the token requirements of each request and therefore improves generation latency for the client.
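A minimal sketch of that off-chain loop might look like the following, assuming a hypothetical `chain` RPC wrapper and `process_mint` callback; the real contract, event names and job pipeline are not shown.

```python
# Rough sketch of the off-chain mint indexer; `chain` and its methods are
# hypothetical wrappers, not a real RPC library API.
import time

def index_mints(chain, process_mint, start_block: int, poll_seconds: float = 2.0) -> None:
    """Poll the chain for new mint events and process them asynchronously.

    Generation is only ever initiated by the mint function, so this indexer is
    the single entry point into the image pipeline.
    """
    cursor = start_block
    while True:
        head = chain.latest_block()                        # hypothetical: current chain head
        for event in chain.get_mint_events(cursor, head):  # hypothetical: mint logs in [cursor, head]
            process_mint(event)                            # e.g. enqueue an image-generation job
        cursor = head + 1
        time.sleep(poll_seconds)
```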
The LLM agent's instruction set provides various excerpts that are selected based on input parameters supplied by the client, removing the possibility of manipulation or 'jailbreaking' of the model; in this sense, the LLM agent is atomic in nature. The diffusion agent uses a text-to-image LoRA workflow. The LoRA has been trained on assets created by our in-house artist and, coupled with the output of the base Stable Diffusion model, allows us to generate our content as required.
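To make the 'atomic' behaviour concrete, here is an illustrative sketch of excerpt selection: the prompt is composed only from pre-written excerpts keyed by enumerated client parameters, so no free-form client text ever reaches the model. The excerpt text, keys and parameter names are invented for this example.

```python
# Illustrative instruction-set selection; excerpt text and keys are invented.
EXCERPTS = {
    "task:card_story": "Write a short card narrative set in the Abyss.",
    "clan:ember":      "The traveller belongs to the Ember clan.",
    "clan:tide":       "The traveller belongs to the Tide clan.",
    "tone:ominous":    "Keep the tone ominous and sparse.",
}

def build_instructions(task: str, clan: str, tone: str) -> str:
    """Compose the prompt purely from pre-written excerpts keyed by enumerated
    client parameters. No free-form client text is interpolated, so a request
    cannot be used to manipulate or 'jailbreak' the model, and each request
    carries only the small, fixed set of tokens it actually needs."""
    keys = (f"task:{task}", f"clan:{clan}", f"tone:{tone}")
    try:
        return "\n".join(EXCERPTS[k] for k in keys)
    except KeyError as unknown:
        raise ValueError(f"unrecognised parameter: {unknown}") from None
```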
The two models are connected by an interface in which the LLM provides two parameters to the diffusion model: a 'prompt' that shapes the intended style and scene of the image content, and a 'query' with which the agent performs retrieval-augmented generation (RAG), searching for images to give the diffusion model a base layer to populate.
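A minimal sketch of that interface, assuming Hugging Face diffusers for the LoRA img2img step: the `prompt`/`query` fields mirror the two parameters described above, while the retrieval logic, model checkpoint and LoRA path are assumptions rather than the production setup.

```python
from dataclasses import dataclass
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

@dataclass
class DiffusionRequest:
    prompt: str  # shapes the intended style and scene of the card image
    query: str   # used RAG-style to find a base-layer image in the asset library

def retrieve_base_image(query: str, asset_dir: Path = Path("assets")) -> Image.Image:
    """Hypothetical retrieval: pick the library image whose filename overlaps the
    query the most. A real system would use embedding similarity instead."""
    candidates = sorted(
        asset_dir.glob("*.png"),
        key=lambda p: sum(word in p.stem.lower() for word in query.lower().split()),
        reverse=True,
    )
    return Image.open(candidates[0]).convert("RGB")

def render_card(req: DiffusionRequest) -> Image.Image:
    # Base Stable Diffusion checkpoint plus a LoRA trained on in-house art
    # (checkpoint id and LoRA path are placeholders).
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("path/to/abyss-lora")

    base = retrieve_base_image(req.query)  # base layer for the model to populate
    return pipe(prompt=req.prompt, image=base, strength=0.65).images[0]
```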
</aside>