Next-Gen Distributed Memory

Distributed Memory Fabric for AI Agents

A high-performance RAM-sharing system written in Go: a distributed memory pool that scales across nodes and lets agents synchronize state with microsecond latency.


Engineered for Scale

Distributed RAM

Multiple devices donate local RAM to a shared cluster via gRPC, accessible as a single unified memory pool.
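Conceptually, nodes contribute capacity and the pool routes keys to whichever node has room. The following is a minimal in-process sketch of that idea; class names and the first-fit placement policy are illustrative assumptions, not the real gRPC-backed implementation.

```python
class MemoryNode:
    """Hypothetical sketch of one node donating local RAM to the pool."""

    def __init__(self, node_id, capacity_bytes):
        self.node_id = node_id
        self.capacity_bytes = capacity_bytes
        self.used_bytes = 0
        self.store = {}

    def write(self, key, value: bytes):
        # Reject writes that would exceed this node's donated capacity.
        if self.used_bytes + len(value) > self.capacity_bytes:
            raise MemoryError(f"node {self.node_id} is full")
        self.used_bytes += len(value) - len(self.store.get(key, b""))
        self.store[key] = value

    def read(self, key):
        return self.store[key]


class MemoryPool:
    """Unified view over several nodes: a write lands on the first node with room."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.placement = {}  # key -> node that holds it

    def write(self, key, value: bytes):
        for node in self.nodes:
            try:
                node.write(key, value)
                self.placement[key] = node
                return
            except MemoryError:
                continue
        raise MemoryError("pool exhausted")

    def read(self, key):
        # Reads are routed to whichever node the key was placed on.
        return self.placement[key].read(key)
```

When the first node fills up, subsequent writes transparently spill to the next node, which is the "single unified pool" behavior the feature card describes.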

Ultra-Low Latency

Microsecond-speed fan-out for high-frequency agent interaction, powered by Go and Protobuf serialization.
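Fan-out means one state update is delivered to every subscriber concurrently rather than one at a time. The real system does this with Go's concurrency primitives; the Python sketch below only illustrates the pattern, and `fan_out` is a hypothetical name.

```python
from concurrent.futures import ThreadPoolExecutor


def fan_out(update, subscribers):
    """Deliver one update to all subscriber callbacks concurrently.

    Results come back in subscriber order because map() preserves ordering.
    """
    with ThreadPoolExecutor(max_workers=len(subscribers)) as pool:
        return list(pool.map(lambda callback: callback(update), subscribers))
```

In a real deployment the callbacks would be network sends of Protobuf-encoded payloads instead of in-process functions.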

Semantic Memory

Distributed vector indexing with FAISS lets agents retrieve long-term context across the entire fabric.
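The retrieval idea behind semantic memory is nearest-neighbor search over embedding vectors. The project names FAISS for this; the sketch below substitutes a brute-force cosine-similarity search so the example is self-contained, and the class is a toy stand-in rather than the real index.

```python
import math


class SemanticIndex:
    """Toy stand-in for a FAISS-backed index: brute-force cosine search."""

    def __init__(self):
        self.entries = []  # list of (embedding_vector, payload)

    def add(self, vector, payload):
        self.entries.append((vector, payload))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def search(self, query, k=1):
        # Rank every stored entry by similarity to the query vector.
        ranked = sorted(self.entries,
                        key=lambda e: self._cosine(query, e[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]
```

A production index replaces the linear scan with approximate nearest-neighbor structures so search stays fast as the fabric grows.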

NVMe Spilling

Intelligent tiered memory: cold data is automatically offloaded to NVMe while hot context stays in RAM.
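Tiering of this kind is typically an eviction policy: keep recently touched entries in RAM and spill the least-recently-used ones to a disk path. The sketch below shows that policy with a plain directory standing in for an NVMe mount; the class name and spill format are assumptions for illustration.

```python
import os
import pickle
import tempfile
from collections import OrderedDict


class TieredStore:
    """Sketch of RAM-hot / disk-cold tiering (the real system targets NVMe)."""

    def __init__(self, hot_capacity, spill_dir=None):
        self.hot = OrderedDict()  # insertion/touch order: oldest first
        self.hot_capacity = hot_capacity
        self.spill_dir = spill_dir or tempfile.mkdtemp(prefix="spill-")

    def _spill_path(self, key):
        return os.path.join(self.spill_dir, f"{key}.bin")

    def write(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        # Evict least-recently-used entries to disk once RAM tier is full.
        while len(self.hot) > self.hot_capacity:
            cold_key, cold_val = self.hot.popitem(last=False)
            with open(self._spill_path(cold_key), "wb") as f:
                pickle.dump(cold_val, f)

    def read(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)  # a read keeps the entry hot
            return self.hot[key]
        # Cache miss: load from the cold tier and promote back to RAM.
        with open(self._spill_path(key), "rb") as f:
            value = pickle.load(f)
        self.write(key, value)
        return value
```

Reading a spilled key transparently promotes it back into the hot tier, which is the "hot context in RAM" behavior the feature card describes.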

SuperBrain Architecture

Real-time visualization of the Distributed Shared Memory Fabric

[Architecture diagram]
Application layer: AI Agent A (Python / CrewAI) and AI Agent B (Node.js / LangChain)
Control plane: Coordinator Hub with Raft consensus and a metadata cache
Data plane (RAM fabric): Memory Node 1 (/dev/shm, NVMe) and Memory Node 2, with remote replication to a backup cluster

Registry & Installation

Python SDK

pip install superbrain-sdk

Node.js SDK

npm install superbrain-distributed-sdk

Go Module

go get github.com/golightstep/superbrainSdk

Developer API

from superbrain import DistributedContextFabric

fabric = DistributedContextFabric()
ctx = fabric.attach_context("shared-v1")

# Write & read across nodes
ctx.write("memory_ptr", "Agent State Data")
val = ctx.read("memory_ptr")

See it in Action

Superbrain + Crew AI

Multi-agent state sharing at microsecond speeds across separate clusters.

Superbrain + Redis

Pair the fabric with Redis for low-latency state persistence.

Engineering Roadmap

Version   Strategic Milestone               Engineering Status
v0.1.0    Core Distributed RAM Engine       Shipped
v1.0.0    Semantic Memory & FAISS Support   Shipped
v1.5.0    Raft Consistency Evolution        Shipped
v2.0.0    L1 Shared Memory Tiering          Shipped