Multi-Arbiter Implementation Design Document

Overview

This document outlines the design for implementing multiple arbiters (1-10) on a single Chainlink node, with automatic key management and job creation.

Requirements

Functional Requirements

  1. User Input: Allow users to specify 1-10 arbiters for their Chainlink node
  2. Multiple Jobs: Create a unique Chainlink job for each arbiter
  3. Key Management:
     • Create 1 key for every 2 arbiters (jobs)
     • Dynamic key creation based on job count: ceil(job_count / 2) keys
     • Assignment pattern: Job1&2→Key1, Job3&4→Key2, Job5&6→Key3, etc.
  4. Job Template Updates: Migrate from the current format to the new format with fromAddress
  5. Automated Creation: Use the existing create-chainlink-job.sh for each job
  6. Configuration Storage: Store all job IDs and key addresses for future reference

Non-Functional Requirements

  1. Backward Compatibility: Existing single-arbiter installations should continue working
  2. Error Handling: Graceful handling of key creation and job creation failures
  3. Idempotency: Script should be re-runnable without creating duplicates
  4. Performance: Efficient creation of multiple jobs

Current vs New Job Specification Format

Current Format (basicJobSpec)

type = "directrequest"
schemaVersion = 1
name = "Verdikta AI Evaluation"
contractAddress = "0xD67D6508D4E5611cd6a463Dd0969Fa153Be91101"
# No fromAddress field
# No explicit externalJobID
# justificationCID (uppercase)
# fulfillOracleRequest3
# gasLimit="1500000"

New Format (Multi-Arbiter Template)

type = "directrequest"
schemaVersion = 1
name = "{JOB_NAME}"
fromAddress = "{FROM_ADDRESS}"
contractAddress = "{CONTRACT_ADDRESS}"
# externalJobID auto-generated by Chainlink
# justificationCid (lowercase)
# fulfillOracleRequestV
# gasLimit="2500000"

Architecture Design

Key Components

  1. Multi-Arbiter Configuration Script (configure-multi-arbiters.sh)
     • New script for multi-arbiter setup
     • Prompts the user for the number of arbiters
     • Manages key creation and assignment
     • Orchestrates job creation

  2. Enhanced Job Template (basicJobSpec)
     • Updates the existing template to the new format (breaking change)
     • Parameterized for variable substitution
     • Supports the fromAddress field

  3. Key Management Module
     • Functions for checking existing keys
     • Creating additional keys when needed
     • Assigning keys to jobs based on the 1-key-per-2-jobs rule

  4. Job Creation Loop
     • Iterates through the requested number of arbiters
     • Customizes the job spec for each arbiter
     • Calls create-chainlink-job.sh for each job

Data Flow

User Input (1-10)
  → Key Management Check
  → Create Additional Keys (if needed)
  → For Each Arbiter (1 to N):
      → Generate Job Spec
      → Call create-chainlink-job.sh
      → Store Job ID
  → Update Configuration Files

Implementation Plan

Phase 1: Template and Key Management

1.1 Update Job Template

  • Update existing basicJobSpec to new format with parameterized fields
  • Include new fromAddress field (externalJobID is auto-generated by Chainlink)
  • Update ABI to fulfillOracleRequestV, gas limit to 2500000, justificationCid (lowercase)
  • Use placeholder tokens: {JOB_NAME}, {FROM_ADDRESS}, {CONTRACT_ADDRESS}
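
As a rough illustration, a sed-based helper could fill these tokens. The render_job_spec name and the use of OPERATOR_ADDR for {CONTRACT_ADDRESS} are assumptions for this sketch, not part of the existing scripts:

# Hypothetical helper: substitute the placeholder tokens for one arbiter.
# Assumption: {CONTRACT_ADDRESS} is filled from OPERATOR_ADDR in .contracts.
render_job_spec() {
    local job_name=$1 from_address=$2
    sed -e "s|{JOB_NAME}|${job_name}|g" \
        -e "s|{FROM_ADDRESS}|${from_address}|g" \
        -e "s|{CONTRACT_ADDRESS}|${OPERATOR_ADDR}|g" \
        chainlink-node/basicJobSpec
}

# Example: render the spec for arbiter 1
render_job_spec "Verdikta AI Arbiter 1" "$KEY_1_ADDRESS" > /tmp/verdikta-arbiter-1.toml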

1.2 Key Management Functions

  • check_existing_keys(): List current Chainlink keys
  • create_additional_key(): Create additional keys until ceil(job_count / 2) keys exist
  • assign_key_to_job(): Return appropriate key address for job number
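
A minimal sketch of how these functions might look, assuming the Chainlink CLI is run inside the chainlink container and key addresses are read from the sourced .contracts file rather than parsed from CLI output:

# List the node's current EVM keys (requires a prior 'chainlink admin login')
check_existing_keys() {
    docker exec chainlink chainlink keys eth list
}

# Create one additional key on Base Sepolia (chain ID 84532)
create_additional_key() {
    docker exec chainlink chainlink keys eth create --evm-chain-id 84532
}

# Return the key address for a job number (1 key per 2 jobs).
# Assumption: KEY_<n>_ADDRESS variables from .contracts have been sourced.
assign_key_to_job() {
    local key_index=$(( ($1 + 1) / 2 ))
    local var="KEY_${key_index}_ADDRESS"
    echo "${!var}"
}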

1.3 Enhanced Configuration Storage

  • Extend .contracts file format to store multiple job IDs
  • Add .chainlink_keys file for key management
  • Store key-to-job assignments

Phase 2: Multi-Arbiter Script

2.1 User Interface

  • Prompt for number of arbiters (1-10)
  • Validate input range
  • Confirm configuration before proceeding
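
A minimal sketch of the prompt-and-validate step (variable names and message wording are illustrative):

# Prompt for the arbiter count and validate the 1-10 range
read -r -p "How many arbiters should this node run? (1-10): " ARBITER_COUNT
if ! [[ "$ARBITER_COUNT" =~ ^[0-9]+$ ]] || [ "$ARBITER_COUNT" -lt 1 ] || [ "$ARBITER_COUNT" -gt 10 ]; then
    echo "Error: arbiter count must be an integer between 1 and 10." >&2
    exit 1
fi
echo "This will create $ARBITER_COUNT job(s) using $(( (ARBITER_COUNT + 1) / 2 )) key(s)."
read -r -p "Proceed? (y/n): " CONFIRM
[ "$CONFIRM" = "y" ] || exit 0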

2.2 Job Creation Loop

  • Generate unique job names (e.g., "Verdikta AI Arbiter 1", "Verdikta AI Arbiter 2")
  • Assign appropriate fromAddress based on job number (1 key per 2 jobs)
  • Substitute template variables in job spec for each job
  • Create temporary job spec file for each job
  • Call create-chainlink-job.sh for each job (externalJobID auto-generated)
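
A sketch of the loop, reusing the helpers sketched in Phase 1; the interface assumed for create-chainlink-job.sh (spec path in, job ID on stdout) is an assumption, not its confirmed behavior:

# Create one job per arbiter; jobs 2k-1 and 2k share key k
declare -a JOB_IDS FAILED_JOBS
for (( i = 1; i <= ARBITER_COUNT; i++ )); do
    job_name="Verdikta AI Arbiter $i"
    from_address=$(assign_key_to_job "$i")                       # Phase 1 key management sketch
    spec_file="/tmp/verdikta-arbiter-$i.toml"
    render_job_spec "$job_name" "$from_address" > "$spec_file"   # substitution sketch from 1.1

    # Assumption: create-chainlink-job.sh takes the spec path and prints the new job ID
    if job_id=$(./create-chainlink-job.sh "$spec_file"); then
        JOB_IDS[i]="$job_id"
    else
        FAILED_JOBS+=("$i")    # reported together after the loop
    fi
done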

2.3 Configuration Management

  • Store all job IDs in structured format
  • Update .contracts file with job array
  • Maintain backward compatibility with single job format
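
A sketch of appending the results to .contracts in the extended format proposed below (the JOB_IDS array comes from the loop sketched above):

# Append the multi-arbiter entries to .contracts (extended format shown below)
{
    echo "ARBITER_COUNT=\"$ARBITER_COUNT\""
    for (( i = 1; i <= ARBITER_COUNT; i++ )); do
        echo "JOB_ID_$i=\"${JOB_IDS[i]}\""
        echo "JOB_ID_NO_HYPHENS_$i=\"${JOB_IDS[i]//-/}\""
    done
    # Legacy single-job variables point at the first job
    echo "JOB_ID=\"${JOB_IDS[1]}\""
    echo "JOB_ID_NO_HYPHENS=\"${JOB_IDS[1]//-/}\""
} >> installer/.contracts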

Phase 3: Integration and Migration

3.1 Integration with Existing Scripts

  • Update configure-node.sh to replace single-job flow with multi-arbiter flow
  • Update other scripts that reference job IDs to handle arrays
  • Maintain legacy variables for backward compatibility

3.2 Template and Configuration Updates

  • Update all scripts that use the job template
  • Ensure configuration file readers handle new format
  • Update documentation and examples

File Structure Changes

New Files

installer/bin/key-management.sh              # Key management functions
installer/docs/MULTI_ARBITER_DESIGN.md       # This design document

Modified Files

installer/bin/configure-node.sh              # Updated for multi-arbiter support
installer/.contracts                         # Extended format for multiple jobs
chainlink-node/basicJobSpec                  # Updated to new parameterized format

Configuration File Formats

Extended .contracts Format

# Existing contract information
OPERATOR_ADDR="0x..."
NODE_ADDRESS="0x..."
LINK_TOKEN_ADDRESS_BASE_SEPOLIA="0x..."

# Multi-arbiter configuration
ARBITER_COUNT="8"
JOB_ID_1="uuid-1"
JOB_ID_2="uuid-2"
JOB_ID_3="uuid-3"
JOB_ID_4="uuid-4"
JOB_ID_5="uuid-5"
JOB_ID_6="uuid-6"
JOB_ID_7="uuid-7"
JOB_ID_8="uuid-8"
JOB_ID_NO_HYPHENS_1="uuid1"
JOB_ID_NO_HYPHENS_2="uuid2"
# ... etc for all jobs

# Key assignments (1 key per 2 jobs)
KEY_1_ADDRESS="0x..."  # Jobs 1-2
KEY_2_ADDRESS="0x..."  # Jobs 3-4
KEY_3_ADDRESS="0x..."  # Jobs 5-6
KEY_4_ADDRESS="0x..."  # Jobs 7-8
KEY_COUNT="4"

# Legacy single job support (points to first job)
JOB_ID="$JOB_ID_1"
JOB_ID_NO_HYPHENS="$JOB_ID_NO_HYPHENS_1"

Key Management

# Log in to the Chainlink CLI inside the node container
docker exec -it chainlink chainlink admin login

# List existing EVM keys
docker exec -it chainlink chainlink keys eth list

# Create a new key for chain ID 84532 (Base Sepolia)
docker exec -it chainlink chainlink keys eth create --evm-chain-id 84532

Key Assignment Strategy

  • Jobs 1-2: Use KEY_1_ADDRESS
  • Jobs 3-4: Use KEY_2_ADDRESS
  • Jobs 5-6: Use KEY_3_ADDRESS
  • Jobs 7-8: Use KEY_4_ADDRESS
  • Jobs 9-10: Use KEY_5_ADDRESS
  • Pattern: key_index = ceil(job_number / 2)
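
In bash this rounds up with integer arithmetic; the function names below are illustrative only:

# Number of keys needed for N arbiters: ceil(N / 2)
keys_needed() {
    echo $(( ($1 + 1) / 2 ))
}

# Key index for a given job number: ceil(job_number / 2)
key_index_for_job() {
    echo $(( ($1 + 1) / 2 ))
}

keys_needed 10          # prints 5
key_index_for_job 7     # prints 4 (job 7 uses KEY_4_ADDRESS)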

Error Handling Strategy

Key Creation Failures

  • Retry key creation up to 3 times
  • Fallback to using single key for all jobs
  • Clear error messages for manual intervention
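
A minimal sketch of the retry-and-fallback behavior, reusing create_additional_key() from the Phase 1 sketch:

# Try to create an additional key up to 3 times, then fall back to the first key
create_key_with_retry() {
    local attempt
    for attempt in 1 2 3; do
        if create_additional_key; then
            return 0
        fi
        echo "Key creation failed (attempt $attempt/3), retrying..." >&2
        sleep 2
    done
    echo "WARNING: key creation failed; falling back to KEY_1_ADDRESS for all jobs." >&2
    return 1
}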

Job Creation Failures

  • Continue creating remaining jobs if one fails
  • Collect all failures and report at end
  • Provide option to retry failed jobs

Validation Checks

  • Verify Chainlink node is accessible
  • Confirm sufficient keys exist before job creation
  • Validate job spec template before processing
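
A sketch of these pre-flight checks, assuming the node's API listens on the default port 6688 and exposes its /health endpoint:

# Pre-flight checks before creating any jobs.
# Assumption: the node API listens on localhost:6688 and serves /health.
if ! curl -sf http://localhost:6688/health > /dev/null; then
    echo "Error: Chainlink node is not reachable on port 6688." >&2
    exit 1
fi
if [ ! -f chainlink-node/basicJobSpec ]; then
    echo "Error: job spec template chainlink-node/basicJobSpec not found." >&2
    exit 1
fi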

Testing Strategy

Unit Tests

  • Key management functions
  • Job spec template generation
  • Configuration file handling

Integration Tests

  • End-to-end multi-arbiter creation
  • Single arbiter backward compatibility
  • Migration from single to multi-arbiter

Test Scenarios

  1. Single Arbiter: Test with 1 arbiter (1 key)
  2. Two Arbiters: Test with 2 arbiters (1 key shared)
  3. Four Arbiters: Test with 4 arbiters (2 keys, 2 jobs each)
  4. Eight Arbiters: Test with 8 arbiters (4 keys, 2 jobs each)
  5. Ten Arbiters: Test maximum arbiters (5 keys, 2 jobs each)
  6. Failed Job Creation: Test error handling and partial failure
  7. Duplicate Run: Test idempotency

Security Considerations

Key Management

  • Store key addresses only, not private keys
  • Use Chainlink's built-in key management
  • Secure transmission of credentials

Job Isolation

  • Each job has unique external ID
  • Proper key assignment prevents cross-job interference
  • Individual job failure doesn't affect others

Performance Considerations

Parallel Job Creation

  • Consider parallel execution for multiple jobs
  • Balance API rate limits with speed
  • Monitor Chainlink node resource usage

Resource Planning

  • Estimate gas costs for multiple jobs
  • Plan for increased node resource usage
  • Consider database storage for multiple jobs

Implementation Notes

Breaking Change Approach

Since there are no existing users, we can implement breaking changes:

  1. Update the existing basicJobSpec template to the new format
  2. Replace the single-job flow entirely with the multi-arbiter flow
  3. No migration or backward-compatibility concerns
  4. Clean, modern implementation without legacy baggage

Future Enhancements

Dynamic Scaling

  • Add/remove arbiters without full reconfiguration
  • Hot-swapping of failed arbiters
  • Load balancing across arbiters

Monitoring Integration

  • Health checks for individual arbiters
  • Performance metrics per arbiter
  • Automated failure detection and recovery

Advanced Key Management

  • Key rotation support
  • Multiple chains support
  • Hardware security module integration

Confirmed Requirements

Based on feedback, the following decisions have been made:

  1. Job Naming: "Verdikta AI Arbiter 1", "Verdikta AI Arbiter 2", etc. ✅
  2. External Job IDs: Auto-generated by Chainlink (not in template) ✅
  3. Template Migration: Update existing basicJobSpec (breaking change OK) ✅
  4. Key Management: 1 key per 2 jobs (ceil(job_count / 2) total keys) ✅
  5. Configuration Storage: Extend existing .contracts file ✅
  6. User Interface: Replace single-job flow entirely ✅
  7. Migration: Not needed (no existing users) ✅

Implementation Timeline

Week 1: Foundation

  • Create new job template with parameterization
  • Implement key management functions
  • Design configuration file formats

Week 2: Core Implementation

  • Develop multi-arbiter script
  • Implement job creation loop
  • Add error handling and validation

Week 3: Integration

  • Integrate with existing scripts
  • Add migration support
  • Comprehensive testing

Week 4: Documentation and Polish

  • Update documentation
  • Add examples and troubleshooting
  • Final testing and bug fixes

This design provides a solid foundation for implementing the multi-arbiter feature while maintaining backward compatibility and following best practices for scalability and maintainability.