🏗️ Infra-layer

The Infrastructure Layer is where AI Workers are actually deployed, merging the template's design and functionality with the user's specific configuration. It is a Kubernetes application that deploys workers in secure, isolated pods.
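To make this concrete, here is a minimal sketch of creating a worker pod with a locked-down security context using the official Kubernetes Python client. The pod name, image, namespace, and labels are illustrative placeholders, not Autoppia's actual deployment code.

```python
# A minimal sketch of deploying an AI Worker as an isolated Kubernetes pod,
# using the official `kubernetes` Python client. Names, image, and namespace
# are hypothetical placeholders, not Autoppia's actual values.
from kubernetes import client, config


def deploy_worker_pod(name: str, image: str, namespace: str = "workers"):
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name=name, labels={"app": "ai-worker"}),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="worker",
                    image=image,
                    # Drop privileges so a compromised template cannot
                    # escalate inside the container.
                    security_context=client.V1SecurityContext(
                        run_as_non_root=True,
                        allow_privilege_escalation=False,
                        read_only_root_filesystem=True,
                    ),
                )
            ],
            restart_policy="Never",
        ),
    )
    return client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


# Hypothetical usage:
# deploy_worker_pod("worker-demo", "registry.example.com/ai-worker:latest")
```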

We use the Autoppia SDK's open-source orchestration layer to handle containerization, routing, and a standardized runtime for all AI Workers. Currently, Autoppia provides this infrastructure in a centralized manner, but we are actively exploring decentralized alternatives, such as Bittensor subnets S27 and S51, Akash Network, and others, to expand deployment options and increase resiliency.
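As an illustration of what this orchestration flow looks like from a developer's perspective, the sketch below deploys a configured worker through a facade object. `Orchestrator`, `WorkerConfig`, and every parameter shown are hypothetical stand-ins for the orchestration layer's responsibilities, not the actual Autoppia SDK API.

```python
# Illustrative only: `Orchestrator` and `WorkerConfig` are hypothetical
# stand-ins for the Autoppia SDK's orchestration layer, not its real API.
from dataclasses import dataclass


@dataclass
class WorkerConfig:
    """User-specific configuration merged with a Worker Template."""
    template: str         # e.g. "customer-support-agent"
    env: dict             # runtime settings supplied by the user
    credentials_ref: str  # reference to stored secrets, never raw credentials


class Orchestrator:
    """Hypothetical facade over containerization, routing, and runtime."""

    def deploy(self, cfg: WorkerConfig) -> str:
        # 1. Build a container image from the chosen template.
        # 2. Schedule it onto an isolated pod (see the Kubernetes sketch above).
        # 3. Register a route so clients can reach the running worker.
        return f"https://workers.example.com/{cfg.template}"


# Hypothetical usage:
worker_url = Orchestrator().deploy(
    WorkerConfig(
        template="customer-support-agent",
        env={"MODEL": "gpt-4o"},
        credentials_ref="vault://teams/acme/slack-bot",
    )
)
```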


Security & Privacy in Deployment

Modern AI infrastructure can be deployed through centralized cloud providers (AWS, Azure, GCP) or via decentralized networks (Bittensor). Each approach presents distinct privacy, security, and operational challenges, summarized in the table below.

| Domain | Challenges | Solutions | Implementation Details |
| --- | --- | --- | --- |
| Privacy | • Infrastructure operators have container/memory/traffic access<br>• Risk of sensitive data exposure | • Trusted Execution Environments (TEEs)<br>• Homomorphic inference | • TEEs: hardware-level isolation for secure workloads<br>• Homomorphic inference: compute on encrypted data without decryption |
| Security | • Potentially malicious Worker Templates<br>• Data exfiltration & credential theft | • Strict firewall policies (see the sketch below)<br>• Autoppia Integration Module | • Firewall: control outbound connections, container isolation<br>• Autoppia: credential management, scoped tokens, access logging |
| Deployment | Balancing multiple requirements across deployment options | • Centralized: self-managed or cloud-based<br>• Decentralized: Bittensor networks or hybrid approaches | Key considerations: security, privacy, performance, cost, compliance |
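As one concrete example of the firewall controls in the Security row, the sketch below uses the Kubernetes Python client to apply a default-deny egress NetworkPolicy to worker pods, so a malicious template cannot exfiltrate data to arbitrary hosts. The namespace and pod labels are illustrative assumptions, not Autoppia's actual configuration.

```python
# A minimal sketch of a default-deny egress policy for worker pods, using the
# official `kubernetes` Python client. Namespace and labels are assumptions.
from kubernetes import client, config


def lock_down_worker_egress(namespace: str = "workers"):
    config.load_kube_config()

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="worker-deny-egress"),
        spec=client.V1NetworkPolicySpec(
            # Applies to every pod labeled as an AI Worker.
            pod_selector=client.V1LabelSelector(match_labels={"app": "ai-worker"}),
            policy_types=["Egress"],
            # No egress rules listed: all outbound traffic is denied by default.
            # Rules for approved destinations (e.g. an API gateway) go here.
            egress=[],
        ),
    )
    return client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )
```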
