|

Inference at the edge of ambition.

Nine nodes. Eleven GPUs. Five architecture generations. Free AI compute for the communities that need it most.

00/

Data Hub

Aggregate metrics across the ECO-Foundry local cluster.

1.5+ PFLOPS
Peak AI Compute
All GPUs, Best Precision
279 GB
GPU Memory
Combined VRAM + Unified
736 GB
System RAM
9 Nodes
11
GPUs Active
5 Architectures
52,992
CUDA + GPU Cores
NVIDIA + Apple Silicon
1,500+
Tensor Cores
Gen 2-5 + Neural Engine
~200 TF
FP32 Compute
Combined
~5.9 TB/s
Bandwidth
Aggregate
5 Gen
Architectures
Pascal → Apple Silicon
00.1/

Cluster Architecture

Nine local nodes connected via Cat 8 Ethernet, with elastic burst capacity in the cloud.

Exo-Scale
Cloud Tier

H100, H200, A100, B200, GB300

Elastic burst capacity

Multi-arch
1 Gbps Fiber ↕
ECO-Foundry
Cat 8 · 40 Gbps · eero Max 7

Blackwell

Feynman

DGX Spark · GB10 Grace Blackwell

128 GB Unified · 1 PFLOPS

Blackwell + Grace

Ampere

Laplace

RTX 3080 Ti

12 GB GDDR6X

Ampere GA102
Euler

RTX 3080 Ti

12 GB GDDR6X

Ampere GA102
Riemann

RTX 3080 Ti

12 GB GDDR6X

Ampere GA102

Turing

Dirac

RTX 2080 Ti

11 GB GDDR6

Turing TU102

Pascal

Noether

2x TITAN Xp

24 GB GDDR5X

Pascal GP102
Gauss

GTX 1080

8 GB GDDR5X

Pascal GP104
Planck

GTX 1080

8 GB GDDR5X

Pascal GP104

Apple Silicon

Turing

M4 Max 40-core

64 GB Unified

Apple Silicon 3nm
Nodes: 9
GPUs: 11
VRAM: 279 GB
GPU Cores: 52,992
Peak AI: 1.5+ PFLOPS
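The combined GPU-memory total above can be sanity-checked from the per-node cards in this section. A minimal sketch (node names and figures taken directly from the cards; Feynman and Turing contribute unified memory rather than dedicated VRAM):

```python
# Per-node GPU memory in GB, as listed on each node card.
gpu_memory_gb = {
    "Feynman": 128,                              # GB10 unified memory
    "Laplace": 12, "Euler": 12, "Riemann": 12,   # RTX 3080 Ti, GDDR6X
    "Noether": 24,                               # 2x TITAN Xp, GDDR5X
    "Dirac": 11,                                 # RTX 2080 Ti, GDDR6
    "Gauss": 8, "Planck": 8,                     # GTX 1080, GDDR5X
    "Turing": 64,                                # M4 Max unified memory
}

total = sum(gpu_memory_gb.values())
print(f"{len(gpu_memory_gb)} nodes, {total} GB combined GPU memory")
# 9 nodes, 279 GB combined GPU memory
```

The sum matches the 279 GB cluster figure.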
01/

ECO-Foundry

Energy-conscious, always-on compute serving local communities. Nine nodes spanning five architecture generations, open for community access.

Flagship. Grace Blackwell

Feynman

NVIDIA DGX Spark · Grace Blackwell

What I cannot create, I do not understand.

Feynman, NVIDIA DGX Spark · Grace Blackwell
SoC
GB10 Grace Blackwell
Architecture
Blackwell + Grace
Process
4nm
CPU
20-core ARM
Memory
128 GB LPDDR5x
Bandwidth
273 GB/s
Storage
4 TB NVMe
Interconnect
200 Gbps
TDP
240W
FP4 Sparse
1000 TFLOPS
Peak
FP4 Dense
~500 TFLOPS
FP8
~208 TFLOPS
Measured
BF16
~100 TFLOPS
Measured

NVFP4 · FP8 · INT8 · BF16 · FP16 · TF32 · FP32
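As a rough guide to what those precision formats mean for model capacity, the sketch below estimates how many parameters fit in Feynman's 128 GB of unified memory at each weight width. This counts weights only; KV cache, activations, and runtime overhead (and NVFP4's per-block scale factors) are ignored, so real limits are lower:

```python
# Approximate storage bytes per weight for each supported format.
bytes_per_param = {
    "NVFP4": 0.5,              # 4-bit weights
    "FP8": 1.0, "INT8": 1.0,   # 8-bit
    "BF16": 2.0, "FP16": 2.0,  # 16-bit
    "TF32": 4.0, "FP32": 4.0,  # TF32 is stored in 32 bits
}

unified_gb = 128  # Feynman's unified memory

for fmt, nbytes in bytes_per_param.items():
    max_params_b = unified_gb / nbytes  # billions of params (1 GB ~ 1e9 bytes)
    print(f"{fmt:>5}: ~{max_params_b:.0f}B parameters max")
```

At 4-bit precision the ceiling is roughly 256B parameters before overhead, which is why ~200B models are the practical upper end on this node.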

Laplace

NVIDIA GeForce RTX 3080 Ti

The weight of evidence for an extraordinary claim must be proportioned to its strangeness.

Laplace, NVIDIA GeForce RTX 3080 Ti
RTX 3080 Ti · 12 GB GDDR6X
GPU
RTX 3080 Ti
Architecture
Ampere GA102
Process
8nm
CPU
i9-13900K 24C/32T
System RAM
96 GB DDR5-5600
VRAM
12 GB GDDR6X
CUDA Cores
10,240
Tensor Cores
320 (3rd-gen)
RT Cores
80
Bandwidth
912 GB/s
Storage
4 TB NVMe
GPU TDP
350W
FP32
34.1 TFLOPS
FP16 Tensor Sparse
136.4 TFLOPS

Euler

NVIDIA GeForce RTX 3080 Ti

Read Euler, read Euler, he is the master of us all.

Euler, NVIDIA GeForce RTX 3080 Ti
RTX 3080 Ti · 12 GB GDDR6X
GPU
RTX 3080 Ti
Architecture
Ampere GA102
Process
8nm
CPU
i9-13900K 24C/32T
System RAM
96 GB DDR5-5600
VRAM
12 GB GDDR6X
CUDA Cores
10,240
Tensor Cores
320 (3rd-gen)
RT Cores
80
Bandwidth
912 GB/s
Storage
4 TB NVMe
GPU TDP
350W
FP32
34.1 TFLOPS
FP16 Tensor Sparse
136.4 TFLOPS

Riemann

NVIDIA GeForce RTX 3080 Ti

If only I had the theorems! Then I should find the proofs easily enough.

Riemann, NVIDIA GeForce RTX 3080 Ti
RTX 3080 Ti · 12 GB GDDR6X
GPU
RTX 3080 Ti
Architecture
Ampere GA102
Process
8nm
CPU
i9-13900K 24C/32T
System RAM
96 GB DDR5-5600
VRAM
12 GB GDDR6X
CUDA Cores
10,240
Tensor Cores
320 (3rd-gen)
RT Cores
80
Bandwidth
912 GB/s
Storage
4 TB NVMe
GPU TDP
350W
FP32
34.1 TFLOPS
FP16 Tensor Sparse
136.4 TFLOPS

Noether

Dual NVIDIA TITAN Xp

My methods are really methods of working and thinking; this is why they have crept in everywhere anonymously.

Noether, Dual NVIDIA TITAN Xp
TITAN Xp × 2 · 2x 12 GB GDDR5X (24 GB total)
GPU
2x TITAN Xp
Architecture
Pascal GP102
Process
16nm
CPU
i7-7700K 4C/8T
System RAM
64 GB DDR4-2133
VRAM
2x 12 GB GDDR5X (24 GB)
CUDA Cores
7,680 (combined)
Tensor Cores
None
Bandwidth
1,096 GB/s (combined)
TDP
500W (combined)
FP32
24.3 TFLOPS
Combined

Dirac

NVIDIA GeForce RTX 2080 Ti

The aim of science is to make difficult things understandable in a simpler way.

Dirac, NVIDIA GeForce RTX 2080 Ti
RTX 2080 Ti · 11 GB GDDR6
GPU
RTX 2080 Ti
Architecture
Turing TU102
Process
12nm
CPU
i7-9700K 8C/8T
System RAM
64 GB DDR4-2666
VRAM
11 GB GDDR6
CUDA Cores
4,352
Tensor Cores
544 (2nd-gen)
RT Cores
68
Bandwidth
616 GB/s
GPU TDP
260W
FP32
13.4 TFLOPS
FP16 Tensor
~107.6 TFLOPS

Gauss

NVIDIA GeForce GTX 1080

Mathematics is the queen of the sciences, and number theory is the queen of mathematics.

Gauss, NVIDIA GeForce GTX 1080
GTX 1080 · 8 GB GDDR5X
GPU
GTX 1080
Architecture
Pascal GP104
Process
16nm
CPU
i7-6700K 4C/8T
System RAM
64 GB DDR4-2133
VRAM
8 GB GDDR5X
CUDA Cores
2,560
Tensor Cores
None
Bandwidth
320 GB/s
TDP
215W
FP32
8.87 TFLOPS

Planck

NVIDIA GeForce GTX 1080

Science cannot solve the ultimate mystery of nature. And that is because we ourselves are a part of the mystery.

Planck, NVIDIA GeForce GTX 1080
GTX 1080 · 8 GB GDDR5X
GPU
GTX 1080
Architecture
Pascal GP104
Process
16nm
CPU
i7-6700K 4C/8T
System RAM
64 GB DDR4-2133
VRAM
8 GB GDDR5X
CUDA Cores
2,560
Tensor Cores
None
Bandwidth
320 GB/s
TDP
180W
FP32
8.87 TFLOPS

Turing

Apple M4 Max · MacBook Pro

The sciences do not try to explain, they hardly even try to interpret, they mainly make models.

GPU
M4 Max 40-core
Architecture
Apple Silicon (3nm)
Process
3nm (2nd gen)
CPU
16-core (12P + 4E)
Unified Memory
64 GB LPDDR5x
GPU Shaders
5,120
Memory BW
546 GB/s
Neural Engine
16-core
TDP
75W
FP32
18.4 TFLOPS
02/

Exo-Scale

Elastic GPU capacity in the cloud. When community projects need more than the local cluster can provide, workloads burst to cloud GPUs.

Elastic. Cloud

GPU Cloud

The Superintelligence Cloud

Provider

Cloud Partner

GPU Fleet

H100, H200, A100, B200, GB300

Instances

1x to 8x GPU

Clusters

16 to 2,000+ GPUs

Software

PyTorch, CUDA, cuDNN

Status

Active

03/

Network

Interconnect topology across the ECO-Foundry and affiliated networks.

Cat 8 Ethernet

40 Gbps rated

S/FTP, 7 Gbps effective

eero Max 7

WiFi 7

Tri-band, 4.3 Gbps

Fiber Uplink

1 Gbps symmetric

UCLA eduroam

WPA2-Enterprise

Internet2, 802.1X

DGX Spark

200 Gbps

2x QSFP

LADWP Power

Municipal grid

1.75 kW peak GPU draw
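For moving model checkpoints between nodes, the effective link rate matters more than the rated one. A quick estimate using the 7 Gbps effective LAN figure and the 1 Gbps fiber uplink from this section (the checkpoint size is a hypothetical example, not a measured workload):

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Idealized transfer time for size_gb gigabytes over a link_gbps link."""
    return size_gb * 8 / link_gbps  # 8 bits per byte, no protocol overhead

checkpoint_gb = 24  # hypothetical fine-tuned checkpoint
print(f"LAN (7 Gbps effective): {transfer_seconds(checkpoint_gb, 7):.0f} s")
print(f"Fiber uplink (1 Gbps):  {transfer_seconds(checkpoint_gb, 1):.0f} s")
```

The same checkpoint that crosses the LAN in under half a minute takes over three minutes to leave the building, which is one reason inference stays local and only burst training goes to Exo-Scale.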

04/

Built for Impact

Purpose-built infrastructure powering free AI tools for local communities.

01/

Community Inference

Free inference for community partners. Models up to 200B parameters on Feynman. 128 GB unified memory, 1.5+ PFLOPS cluster peak. No API limits, no rate caps, no barriers.
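The 200B figure follows from memory arithmetic: at 4-bit weights, 200B parameters occupy roughly 100 GB, fitting Feynman's 128 GB with headroom for KV cache. Single-request token generation is typically memory-bandwidth-bound, so the node's 273 GB/s bandwidth also bounds decode speed. A back-of-envelope sketch, assuming a dense model with every weight streamed once per token and no batching or speculative decoding:

```python
def decode_tokens_per_sec(params_b: float, bits_per_weight: float,
                          bandwidth_gbs: float) -> float:
    """Upper bound on tokens/s when all weights are read once per token."""
    weight_gb = params_b * bits_per_weight / 8  # billions of params -> GB
    return bandwidth_gbs / weight_gb

# Feynman: 273 GB/s unified-memory bandwidth, 200B model at 4-bit (100 GB)
print(f"~{decode_tokens_per_sec(200, 4, 273):.2f} tokens/s upper bound")
# ~2.73 tokens/s upper bound
```

Smaller models scale proportionally: a 20B model at 4-bit would be bounded near 27 tokens/s on the same node.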

02/

Applied AI Deployment

From climate models to educational tools. Local fine-tuning on the ECO-Foundry, cloud burst via Exo-Scale for larger workloads. We help partners go from idea to deployed model.

03/

Open Compute

52,992 GPU cores across nine nodes available to nonprofits, educators, and community developers. Always available. No cloud dependency.