A Tavolata

“‘A tavolata bella non è quella col tavolo più grande — è quella con le persone giuste.” (A beautiful tavolata is not the one with the biggest table, but the one with the right people.)

In Napoli, a tavolata is the communal table — the long, joyful gathering where everyone eats together. As your distributed system grows, your Pasta Protocol kitchen faces the same question every Neapolitan grandmother faces before Sunday lunch: do you add more chairs, or do you buy a bigger table? Both strategies are valid. Both have limits. Neither forgives bad planning.

Due Filosofie di Scala

Aggiungi sedie al tavolo

Horizontal scaling: add more kitchen nodes to the cluster. Distribute the load across many small machines. More cooks, same kitchen size — each handles a portion of the work. Best for stateless workloads and read-heavy clusters.

Compra un tavolo più grande

Vertical scaling: give your existing nodes more CPU, memory, and I/O. One powerful machine handles more work per cycle. Best for consensus-heavy workloads and latency-sensitive sagas where inter-node coordination is the bottleneck.

Neither approach is universally superior. Most production Pasta Protocol deployments use both: a baseline of well-sized nodes (vertical) with the ability to add nodes horizontally during peak demand. The rest of this page explains how to do each.

Scalabilità Orizzontale — Più Sedie

Horizontal scaling in Pasta Protocol means adding new nodes to the cluster. The Pesto Consensus algorithm supports dynamic membership: you can add a node without restarting the cluster or interrupting service.

Aggiungere un Nodo

# 1. Start the new node process (it will be in JOINING state)
npx pasta node:start \
  --name napoli-04 \
  --kitchen primary-kitchen-eu-central \
  --join-via napoli-01:7000

# 2. Wait for the node to catch up on the WAL replay
npx pasta node:await-ready --name napoli-04 --timeout 120s
# => napoli-04: WAL replay complete (12,441 operations)
# => napoli-04: status FOLLOWER — ready

# 3. Confirm cluster membership
npx pasta cluster:status
# => Nodes: 4/4 healthy | Leader: napoli-01 | Quorum: YES

Once the node is FOLLOWER-ready, the KitchenManager automatically routes a share of GarlicBreadcast consumer work and read requests to it. No configuration change required — the cluster self-balances.

Configurazione per Cluster Multi-Nodo

# .ricetta — horizontal scaling configuration
kitchen:
  name: primary-kitchen-eu-central
  nodes:
    - host: napoli-01.internal
      port: 7000
      role: auto            # auto = leader-eligible
    - host: napoli-02.internal
      port: 7000
      role: auto
    - host: napoli-03.internal
      port: 7000
      role: auto
    - host: napoli-04.internal
      port: 7000
      role: auto
  scaling:
    auto_balance: true      # rebalance GarlicBreadcast partitions on membership change
    min_nodes: 3            # never shrink below this — quorum requires it
    replication_factor: 3   # how many nodes hold a copy of each Dispensa shard
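The min_nodes: 3 floor follows from quorum arithmetic. Assuming Pesto Consensus uses a standard majority quorum (the usual rule for consensus protocols; the exact rule here is an assumption), the sizing works out like this:

```typescript
// Majority quorum for an n-node cluster: floor(n/2) + 1 nodes must agree.
// The cluster can therefore lose n - quorum nodes and keep serving writes.
function quorumSize(nodes: number): number {
  return Math.floor(nodes / 2) + 1;
}

function toleratedFailures(nodes: number): number {
  return nodes - quorumSize(nodes);
}

// A 3-node cluster needs 2 votes and survives 1 failure; a 4-node cluster
// still only survives 1 failure, which is why odd sizes are the usual choice.
console.log(quorumSize(3), toleratedFailures(3)); // 2 1
console.log(quorumSize(4), toleratedFailures(4)); // 3 1
console.log(quorumSize(5), toleratedFailures(5)); // 3 2
```

This is also why the recommended growth path later on this page goes from 3 to 5 nodes rather than 3 to 4: the fourth node adds load capacity but no extra failure tolerance.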

Partizioni GarlicBreadcast

When you add nodes, GarlicBreadcast partitions are rebalanced automatically. During rebalancing, message delivery is uninterrupted but partition assignment is temporarily in flux — consumers may receive messages from both old and new partition assignments. Ensure your consumers are idempotent:

import { GarlicBreadcast } from '@pasta-protocol/core';

const bus = GarlicBreadcast.getInstance();

// Idempotent consumer: safe to call multiple times with the same message.
// `ordineRepository` is your application's persistence layer.
bus.subscribe('ordini', async (message) => {
  const alreadyProcessed = await ordineRepository.exists(message.id);
  if (alreadyProcessed) return; // idempotency guard
  await ordineRepository.create(message.payload);
});
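The guard can be exercised without a live cluster. Below is a self-contained sketch with an in-memory stand-in for ordineRepository (hypothetical; a real repository would be backed by the Dispensa), showing that a message redelivered during rebalancing is applied exactly once:

```typescript
// In-memory stand-in for ordineRepository, just to exercise the guard.
type Ordine = { id: string; payload: unknown };

class InMemoryOrdineRepository {
  private rows = new Map<string, unknown>();
  async exists(id: string): Promise<boolean> { return this.rows.has(id); }
  async create(msg: Ordine): Promise<void> { this.rows.set(msg.id, msg.payload); }
  get size(): number { return this.rows.size; }
}

const repo = new InMemoryOrdineRepository();

// Same shape as the subscriber above: check, then write.
async function handle(message: Ordine): Promise<void> {
  if (await repo.exists(message.id)) return; // idempotency guard
  await repo.create(message);
}

// Simulate the duplicate delivery that can occur during rebalancing.
(async () => {
  const msg = { id: 'ordine-42', payload: { pasta: 'rigatoni' } };
  await handle(msg);
  await handle(msg); // redelivered; the guard makes this a no-op
  console.log(repo.size); // 1
})();
```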

Scalabilità Verticale — Tavolo Più Grande

Vertical scaling means increasing the resources available to each existing node. In containerised deployments this is a matter of updating your resource requests; on bare metal it means migrating to more powerful hardware.

Risorse Consigliate per Nodo

Carico di Lavoro  | CPU      | Memoria | Storage (Dispensa) | Note
Development       | 1 core   | 512 MB  | 1 GB               | Single node, no HA
Small production  | 2 cores  | 2 GB    | 20 GB              | 3-node cluster
Medium production | 4 cores  | 8 GB    | 100 GB             | 3–5 nodes
Large production  | 8 cores  | 16 GB   | 500 GB             | 5+ nodes with dedicated leader
Saga-heavy        | 8+ cores | 32 GB   | 200 GB             | Long-running sagas are memory-intensive
Analytics-heavy   | 4 cores  | 16 GB   | 1+ TB              | Large Dispensa with cold storage tier
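In a containerised deployment, a row of this table translates directly into resource requests. A sketch for the "Medium production" row, assuming a StatefulSet named pasta-kitchen (container and volume names are illustrative, and required fields such as the selector are abridged):

```yaml
# Sketch: Kubernetes resources for the "Medium production" row (abridged).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pasta-kitchen
spec:
  template:
    spec:
      containers:
        - name: pasta-node
          resources:
            requests:
              cpu: "4"
              memory: 8Gi
            limits:
              memory: 8Gi      # cap memory; size max_heap_mb below this
  volumeClaimTemplates:
    - metadata:
        name: dispensa
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```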

Tuning del Runtime (Stile JVM)

Pasta Protocol exposes several runtime tuning parameters that unlock additional throughput on larger machines:

# .ricetta — vertical scaling tuning
runtime:
  max_heap_mb: 12288          # match to ~75% of available memory
  gc_strategy: generational   # options: generational, incremental, manual
  consensus_thread_pool: 8    # threads for Pesto Consensus; set to CPU count
  bus_consumer_threads: 16    # GarlicBreadcast consumer thread pool
  dispensa_io_threads: 8      # Dispensa I/O thread pool
  saga_parallelism: 32        # max concurrent in-flight sagas
performance:
  enable_write_batching: true # batch small writes — major throughput gain
  batch_window_ms: 5          # wait up to 5ms to batch writes
  enable_read_cache: true
  read_cache_size_mb: 2048    # in-memory read cache; set to ~20% of heap
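The comments above encode simple sizing rules: heap at roughly 75% of memory, one consensus thread per core, read cache at roughly 20% of heap. A hypothetical helper (not part of the Pasta Protocol CLI) that derives these numbers from a machine spec:

```typescript
// Derive .ricetta runtime/performance numbers from machine specs, following
// the rules of thumb in the comments above. Illustrative, not official API.
interface MachineSpec { cpus: number; memoryMb: number; }

interface Tuning {
  max_heap_mb: number;
  consensus_thread_pool: number;
  read_cache_size_mb: number;
}

function suggestTuning(spec: MachineSpec): Tuning {
  const maxHeapMb = Math.floor(spec.memoryMb * 0.75); // ~75% of memory
  return {
    max_heap_mb: maxHeapMb,
    consensus_thread_pool: spec.cpus,                  // one thread per core
    read_cache_size_mb: Math.floor(maxHeapMb * 0.2),   // ~20% of heap
  };
}

// A 16 GB, 8-core node lands close to the example values above:
console.log(suggestTuning({ cpus: 8, memoryMb: 16384 }));
// { max_heap_mb: 12288, consensus_thread_pool: 8, read_cache_size_mb: 2457 }
```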

Limiti di Scala e Confini del Tavolo

Auto-Scaling con Kubernetes

For Kubernetes deployments, Pasta Protocol ships a HorizontalPodAutoscaler manifest that scales based on pasta_kitchen_temperature_celsius and queue depth:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pasta-kitchen-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: pasta-kitchen
  minReplicas: 3
  maxReplicas: 9
  metrics:
    - type: External
      external:
        metric:
          name: pasta_kitchen_temperature_celsius
          selector:
            matchLabels:
              quantile: "0.99"
        target:
          type: AverageValue
          averageValue: "800m"  # scale up if P99 > 800ms
    - type: External
      external:
        metric:
          name: pasta_garlicbreadcast_queue_depth
        target:
          type: AverageValue
          averageValue: "500"   # scale up if avg queue depth > 500 messages

Note that because Pasta Protocol uses stateful consensus, Kubernetes scaling events trigger a controlled join/leave procedure — not a simple pod restart. The pasta-protocol-k8s Helm chart handles this automatically via lifecycle hooks.
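As an illustration of what those hooks do, a preStop hook of roughly this shape asks the node to leave the cluster cleanly before the pod stops (the pasta node:leave command and its flags are an assumption here, not the chart's documented API):

```yaml
# Sketch of the container lifecycle hook the Helm chart wires up.
# (`pasta node:leave` is assumed, not confirmed chart behaviour.)
lifecycle:
  preStop:
    exec:
      command: ["npx", "pasta", "node:leave", "--timeout", "60s"]
# At the pod-spec level, the grace period must exceed the leave timeout
# so Kubernetes does not kill the node mid-handover:
# terminationGracePeriodSeconds: 90
```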

Strategia di Scala Consigliata

For teams growing their Pasta Protocol deployment, the recommended evolution path is:

  1. Start with 3 nodes, well-sized — resist the urge to scale horizontally before you have understood your workload profile.
  2. Monitor pasta_kitchen_temperature_celsius — if P99 latency climbs under load, profile before scaling.
  3. Scale vertically first — adding memory and CPU to existing nodes has no coordination cost.
  4. Add nodes when vertical ceiling is reached — go from 3 to 5 nodes, not from 3 to 9.
  5. Introduce write sharding only when 5-node vertical headroom is exhausted — it adds operational complexity; do not reach for it prematurely.
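The checklist can be condensed into a small decision sketch. The thresholds and field names below are illustrative, not Pasta Protocol API:

```typescript
// The five-step strategy above as a decision function. Illustrative only.
interface ClusterState {
  nodes: number;
  p99LatencyMs: number;      // from pasta_kitchen_temperature_celsius, P99
  profiled: boolean;         // has the hot path been profiled under load?
  verticalHeadroom: boolean; // can existing nodes still take more CPU/memory?
}

function nextScalingAction(s: ClusterState): string {
  if (s.p99LatencyMs <= 800) return 'hold';          // step 1: understand first
  if (!s.profiled) return 'profile first';           // step 2
  if (s.verticalHeadroom) return 'scale vertically'; // step 3: no coordination cost
  if (s.nodes < 5) return 'grow to 5 nodes';         // step 4
  return 'consider write sharding';                  // step 5: last resort
}

console.log(nextScalingAction({
  nodes: 3, p99LatencyMs: 950, profiled: true, verticalHeadroom: true,
})); // scale vertically
```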

‘A tavolata cresce quando ce vò — ma ogni sedia nova porta ‘o so peso. (The table grows when it must — but every new chair brings its own weight.)