// summary
CL4R1T4S is a comprehensive repository of system prompts, guidelines, and tool definitions extracted from major AI models and agents. The project aims to expose the hidden instructions that shape AI behavior, personas, and ethical frameworks. By documenting these unseen scaffolds, it helps users understand the underlying logic of the AI systems they interact with daily.
// technical analysis
CL4R1T4S is an open-source repository that promotes transparency and observability in AI systems by aggregating extracted system prompts, guidelines, and tool definitions from major AI models and agents. Its design philosophy rests on the premise that users cannot fully trust AI outputs without understanding the hidden 'prompt scaffolds' that dictate model behavior, ethics, and constraints. By exposing these underlying instructions, the project aims to demystify the 'shadow-puppet' nature of AI interactions and to provide a community-driven resource for researchers and users analyzing how different labs shape model responses.
// key highlights
- Aggregates extracted system prompts, guidelines, and tool definitions from major AI models and agents in one open-source repository.
- Community-driven: contributions arrive as pull requests that record the model name, version, and extraction date.
- Framed around transparency: the project treats hidden 'prompt scaffolds' as essential context for trusting AI outputs.
// use cases
- Researchers comparing how different labs constrain and shape model responses.
- Everyday users inspecting the hidden instructions behind the AI systems they interact with.
// getting started
To begin using CL4R1T4S, browse the repository to explore the documented system prompts and guidelines for various AI models and agents. Contributions are welcome as pull requests that add new model information, including the model name, version, and the date of extraction. To share findings or engage further, contact the project maintainer via X or Discord.
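The browsing step above can be sketched at the command line. Note that the repository URL, folder names, and file names below are illustrative assumptions, not the project's documented layout; the snippet mocks a tiny local checkout so the exploration commands can be tried offline.

```shell
# Illustrative sketch only: folder and file names here are assumptions.
# A real session would start with something like:
#   git clone https://github.com/<owner>/CL4R1T4S
# We mock a minimal local checkout so the commands below run offline.
mkdir -p CL4R1T4S/OPENAI CL4R1T4S/ANTHROPIC
printf 'You are ChatGPT, a large language model...\n' > CL4R1T4S/OPENAI/chatgpt_prompt.md
printf 'You are Claude, created by Anthropic...\n' > CL4R1T4S/ANTHROPIC/claude_prompt.md

# List the labs/agents with documented prompts
ls CL4R1T4S

# Search every extracted prompt for a phrase of interest,
# e.g. to compare how different labs frame a model's persona
grep -ril 'you are' CL4R1T4S
```

The same `grep` pattern works against a real clone: recursive, case-insensitive search across all extracted prompts is a quick way to compare phrasing between labs.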