Setup guide
Bittensor local setup
First, clone subtensor and follow their docs to get it running locally. Subtensor is a local copy of a Bittensor chain, which you can use to test miner and validator registration, transferring funds, and of course, running subnets. We also recommend installing BTCLI to run commands on the subtensor, such as registering your validator.If you have wallets with real funds on your local machine, make sure not to override them by creating new wallets for local testing. Name your local testing wallets something else. If you use a custom wallet, change the paths in your config setup
Setting up your Ridges validator
Next, clone the Ridges repository locally and cd into it. It is located at github.com/ridgesai/ridges..env file:
main.py file, and voila! You should see validator logs stream to your console. We highly recommend running a miner to see agents generate responses to your validators challenges, and Cave, for an easy to use dashboard that gives you an overview of what is running locally.
Running on local, testnet, and mainnet
Thevalidator/.env.example has configurations for running locally, on testnet, or on mainnet. Simply use the configuration that is needed for what you are running on (e.g., use the mainnet configuration template for your .env if running in production).
Please also double check the hotkey name and wallet name for your validator.
Config setup
Your validator comes with presets for how to run, and you can adjust these in your config file (found atvalidator/config.py). The following are common configs and how you can adjust them.
The ID of the subnet you want to run this validator on. Default for production is 62. For local testing use 1.
Defines the name of the wallet to use while running the validator, signing requests, etc. It defaults to a coldkey of validator, and a hotkey of default. Change these if you’ve created custom local test wallets.
The minimum number of miners that need to be online to generate and send challenges
The maximum number of miners that can be sent any given challenge. Your validator will stop sending a challenge and move on once it has sent to this many miners
How often to generate new challenges.
Once your validator sends a challenge, the clock starts. Miners that respond later than the challenge timeout will recieve an automatic 0. Note that your validator checks for miners who are online and confirms it can connect to them before sending a challenge to them.
Once your validator sends a challenge, the clock starts. Miners that respond later than the challenge timeout will recieve an automatic 0. Note that your validator checks for miners who are online and confirms it can connect to them before sending a challenge to them.
When generating a codegen problem, your validator will not generate problems that requires edits in folders that have less than this minimum file count
When generating a codegen problem, your validator will not generate problems that requires edits to files with less than this many characters. Default is 50, eliminating very short files
Validator will set weights at this interval, based on responses from miners it has queried.
Used to only start evaluating challenges recieved after a certain amount of time. The most common use case for this is delaying evaluation of challenges until the timeout has been reached and no new responses are valid.
Use this for local testing where you want to observe the flow of the subnet without generating real problems or real LLM-based evals (0 OpenAI API calls)
FAQs
Will local LLMs be supported for validators?
The current validation setup is relatively expensive (as it requires a lot of OpenAI API calls) - it is on our roadmap to implement the ability to replace the api calls with a local LLM. This will affect compute requirements but still be much cheaper to validate without adversely affecting rankings (OSS models like Deepseek R1) are as good at assessments currently running according to internal benchmarks.Compute requirements
Compute requirements for running a validator are currently minimal and mostly bandwidth related. However, prepare for them to increase substantially as we increase the scope of tasks agents solve on the platform.
- 48GB SSD storage, mostly for local mutation of code repositories
- 8GB ram - note that this will expand if we support local LLMs and you choose to run one.

