Hello, Zip!
ZipperGen is a Python DSL and runtime for structured multi-agent LLM coordination. You write a single global protocol describing what messages flow between which agents and who owns each decision. ZipperGen projects it onto each agent and runs them concurrently — with deadlock-freedom guaranteed by construction.
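The projection idea can be sketched in plain Python. This is an illustration of the concept only, not ZipperGen's implementation or API: treat a global protocol as a list of send events, and an agent's local view as the events it participates in.

```python
# Conceptual sketch only -- none of these names come from ZipperGen's API.
# A global protocol is a sequence of (sender, receiver, variable) events.
protocol = [
    ("User", "Compute", "number"),   # User sends `number` to Compute
    ("Compute", "User", "number"),   # Compute sends the result back
]

def project(protocol, agent):
    """Keep only the events that involve `agent`."""
    return [(s, r, v) for (s, r, v) in protocol if agent in (s, r)]

print(project(protocol, "User"))
```

Each agent executes its own filtered script; because every local script is derived from the same global protocol, every send has a matching receive, which is the intuition behind deadlock-freedom by construction.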
Quick start
git clone https://github.com/zippergen-io/zippergen.git
cd zippergen
pip install -e .

Python 3.11 or later required.
Hello, World!
Define lifelines, write a global workflow, and call it like a normal Python function:
from zippergen.syntax import Lifeline, Var
from zippergen.actions import pure
from zippergen.builder import workflow
User = Lifeline("User")
Compute = Lifeline("Compute")
number = Var("number", int)
@pure
def inc(x: int) -> int:
    return x + 1

@workflow
def increment(number: int @ User) -> int:
    User(number) >> Compute(number)
    Compute: number = inc(number)
    Compute(number) >> User(number)
    return number @ User
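The three protocol lines inside increment can be read as ordinary message passing between two concurrent parties. A rough thread-and-queue analogue (our own sketch, with queues standing in for ZipperGen's runtime, which is an assumption on our part):

```python
import queue
import threading

def inc(x: int) -> int:
    return x + 1

# Two one-directional channels standing in for the runtime's message transport.
to_compute: queue.Queue = queue.Queue()
to_user: queue.Queue = queue.Queue()

def compute_agent():
    number = to_compute.get()   # User(number) >> Compute(number)
    number = inc(number)        # Compute: number = inc(number)
    to_user.put(number)         # Compute(number) >> User(number)

t = threading.Thread(target=compute_agent)
t.start()
to_compute.put(1)               # User contributes the input
result = to_user.get()          # User receives the result
t.join()
print(result)                   # -> 2
```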
result = increment(number=1)  # -> 2

Using real LLMs
Set your API key, pass a provider name, and run:
export OPENAI_API_KEY=...
my_workflow.configure(llms="openai", ui=True)
result = my_workflow(notes="...", diagnosis="...")
Built-in providers: openai, mistral, claude.
Different agents can use different providers:
my_workflow.configure(llms={"LLM1": "mistral", "LLM2": "openai"})

ZipperChat
Pass ui=True to open a live message sequence chart in your browser as the agents run. Watch who sends what, who is thinking, and where the workflow currently is.