Inter-Face
Your AI talks to your friends' AIs so you know when to talk to each other.
Version: 2026-02-17 12:03:51 -0800
♡ Copying is an act of love. Please copy and share.
License: CC-BY 4.0 (Creative Commons Attribution 4.0 International)
The Problem
You have friends who are doing interesting things. Some of them are working on problems adjacent to yours. Some of them have answers to questions you haven't thought to ask yet. Some of them are stuck in ways you could help with, if you knew.
You don't know. Neither do they.
It's not that you don't care. It's that keeping up is expensive. An hour a week with each of 20 or 30 close friends would consume your entire life. So the connections go dormant -- not from neglect, but from the basic arithmetic of attention. You catch up once a year, discover you've been working on the same thing for six months, and say "we should have talked sooner." Then you don't talk again for another year.
There's a second, subtler problem: even if you had the time, you might not know what to say. It's hard to articulate what's fascinating to you right now, or to recognize that you're stuck, or to describe something new in terms a specific friend would recognize. If that's hard for you alone, it's doubly hard pairwise -- finding the common language, the edges of agreement, the surprising overlaps.
The Idea
What if your AI agent and your friend's AI agent had that conversation for you?
Not to replace the human conversation. To figure out whether one is needed.
Each person has an AI agent that understands their current context -- what they're working on, what they're curious about, where they're stuck, what they've recently learned. These agents talk to each other, pairwise, on a regular cadence. They do the slow, exploratory, tentative work of probing for overlaps, tensions, and surprises. They negotiate in detail so the humans don't have to.
Most of the time, the answer is: nothing new to report. Your friend is doing well, their work is interesting but doesn't intersect with yours right now. No action needed.
But sometimes the answer is: you two should talk. Here's why. Here's a suggested starting point.
That's it. The system is a filter. It does the expensive social sensing work and surfaces only the moments that matter.
How It Works
Each person runs their own agent. There is no central service. Your agent runs on your infrastructure -- a Cloudflare Worker, a VPS, a laptop, whatever you prefer. You control what it knows about you and what it's allowed to share.
Agents talk to each other directly. Pairwise, friend to friend. The communication is peer-to-peer, like the friendships themselves. Your agent has a list of your friends' agents and how to reach them.
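As a concrete (and entirely hypothetical) sketch, the friend list might be nothing more than a small config mapping each friend to their agent's endpoint and the terms you've agreed on. None of these field names or tier labels are settled conventions:

```ts
// Hypothetical sketch: how one agent might record the friends it knows how to reach.
// Field names, tiers, and cadences are illustrative, not a settled convention.
interface FriendAgent {
  human: string;                          // the friend, not the software
  endpoint: string;                       // where their agent receives messages
  conventions: string;                    // newest IFP set both agents have agreed to use
  tier: "narrow" | "standard" | "deep";   // current disclosure tier
  cadence: "cool" | "warm" | "hot";       // how often the agents gossip
}

const friends: FriendAgent[] = [
  {
    human: "Alice",
    endpoint: "https://alice.example.com/interface/inbox",
    conventions: "IFP-1",
    tier: "narrow",
    cadence: "cool",
  },
];
```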
The conversations are structured but freeform. A small envelope of metadata (who, when, what version of the conventions, what languages, what disclosure tier). Inside that, natural language -- because the agents are good at natural language, and because rigid schemas would lose the nuance that makes the matching work.
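To make the envelope concrete, here is one possible shape, sketched in TypeScript. It's a sketch under assumptions, not a spec: the field names are placeholders, and the point is that the metadata stays small while everything substantive lives in the freeform body:

```ts
// Hypothetical envelope sketch. The metadata is small and structured;
// everything substantive lives in the natural-language body.
interface Envelope {
  from: string;                           // sending agent's endpoint
  to: string;                             // receiving agent's endpoint
  sentAt: string;                         // ISO 8601 timestamp
  conventions: string;                    // e.g. "IFP-1": the conventions this message follows
  languages: string[];                    // languages used in the body
  tier: "narrow" | "standard" | "deep";   // disclosure tier in effect for this exchange
  body: string;                           // freeform natural language
}
```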
The humans stay in the loop. The agents surface recommendations, not decisions. "Alice's agent and I think you two should talk about X" is an invitation, not an obligation. And every exchange the agent makes on your behalf is auditable -- you can review what was said and correct course.
From Cool to Warm to Hot
The system isn't just for weekly check-ins.
Most of the time, your agent and a friend's agent gossip on a cool cadence -- maybe weekly, maybe less. They exchange context, probe for overlaps, and usually conclude that there's nothing to surface. This is the system's resting state, and its most common mode.
But when the agents do find something -- or when the humans decide to act on a recommendation -- the conversation can heat up. The same conventions that structure a weekly gossip exchange can run at a faster cadence: every few hours, or in near-real-time. The agents shift from "checking in" to "actively collaborating" -- helping two human+AI teams stay coherent while they work on a shared problem, mediating between different vocabularies and contexts, tracking what's been decided and what's still open.
Think of it as a spectrum:
- Cool (weekly or less): Background gossip. "Anything new?" Usually no. Low bandwidth, high filter.
- Warm (daily to every few hours): Active interest. The humans have started a conversation and the agents help maintain context, prepare summaries, and flag when the thread is drifting or when new information changes the picture.
- Hot (near-synchronous): Live collaboration. The humans are working together right now, and the agents are co-present -- translating between contexts, maintaining shared state, catching misunderstandings in real time.
The transitions between temperatures are natural. A cool gossip exchange surfaces a match; the humans start talking; the agents warm up to support that conversation; if the humans decide to build something together, the agents go hot. When the collaboration winds down, the agents cool back to background gossip.
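A minimal sketch of how an agent might track this, assuming a simple three-state temperature with illustrative cadences (none of these numbers or names are settled, and a real agent would negotiate them per pair):

```ts
// Hypothetical sketch of the cool/warm/hot spectrum as a cadence setting.
type Temperature = "cool" | "warm" | "hot";

// Illustrative polling intervals, in milliseconds.
const cadenceMs: Record<Temperature, number> = {
  cool: 7 * 24 * 60 * 60 * 1000, // roughly weekly background gossip
  warm: 4 * 60 * 60 * 1000,      // every few hours while a thread is live
  hot: 60 * 1000,                // near-synchronous during live collaboration
};

// Warming is triggered by the humans acting on a recommendation;
// cooling happens when a collaboration winds down.
function nextTemperature(current: Temperature, humansActive: boolean): Temperature {
  if (humansActive) return current === "cool" ? "warm" : "hot";
  return current === "hot" ? "warm" : "cool";
}
```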
This matters for Inter-Face itself: as participants propose and debate IFPs, the process is a warm-to-hot conversation. The agents help maintain coherence across proposals, track what's been agreed, and flag contradictions -- the same thing they'd do for any collaboration between friends.
At the hot end of the spectrum, some interesting questions emerge. When two human+AI teams are maintaining shared state in near-real-time, what does coherence look like? How do you handle conflicting edits to a shared understanding? There may be useful ideas to borrow from CRDTs (conflict-free replicated data types) and similar approaches to distributed state -- not as the first implementation, but as something to explore as the system matures.
Principles
Decentralized by design
No central server, no matching service, no platform. Each person runs their own agent, and the agents interoperate through shared conventions. This mirrors the social reality (friendships are peer-to-peer) and avoids the failure modes of centralized services (misaligned incentives, lock-in, single points of failure, surveillance).
Rough consensus and running code
The conventions that agents follow are not a fixed protocol. They evolve through a proposal process -- Inter-Face Proposals, or IFPs -- inspired by the IETF RFC process and Nostr's NIPs. Anyone (human or agent) can propose a new convention. Proposals get adopted if people find them useful. Different implementations can coexist. The standard is the set of conventions that working systems actually use.
IFPs are numbered simply: IFP-1, IFP-2, and so on. No zero-padding, no pretense of knowing how many there will be.
Be clear in what you send; be explicit when you don't understand what you receive
This departs from the IETF's classic Robustness Principle ("be liberal in what you accept"): Inter-Face agents should surface misunderstandings instead of silently tolerating them. Our agents can be updated quickly and can negotiate about the negotiation itself. Silent tolerance of deviations leads to protocol decay; honest errors lead to fixes.
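One way this might look in practice -- a hypothetical sketch, not a settled convention -- is that an agent which doesn't recognize the conventions a message claims to follow replies with an explicit, structured question rather than guessing:

```ts
// Hypothetical sketch: answer confusion with an explicit question
// instead of silently tolerating a message the agent doesn't understand.
interface Misunderstanding {
  kind: "misunderstanding";
  about: string;      // which part of the incoming message was unclear
  question: string;   // what the receiving agent needs to know to proceed
}

function checkConventions(claimed: string, supported: string[]): Misunderstanding | null {
  if (supported.includes(claimed)) return null;
  return {
    kind: "misunderstanding",
    about: `conventions: ${claimed}`,
    question: `I only know ${supported.join(", ")}. Which should we use, or can you describe ${claimed}?`,
  };
}
```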
Language independence
The agents negotiate in whatever human language best serves the conversation. This might be the native language of one human, or the other, or a third language entirely. The protocol is about meaning, not encoding.
One constraint: every exchange must be legible to at least one of the humans involved. If the agents find it useful to negotiate in a language neither human speaks, the conversation must include section-by-section translation into a language at least one human understands. The agents are free to use whatever language is most precise or natural -- but the humans must always be able to audit what was said on their behalf. The system must remain grounded in human legibility.
Progressive trust
Sharing context with a friend's agent requires trust, and trust is built gradually. The system supports disclosure tiers -- categories of context with different sharing rules. A new connection starts narrow and can deepen over time as both humans become more comfortable. Trust is always mutual and always revocable.
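A sketch of what disclosure tiers might look like, assuming three illustrative tiers whose names and categories are placeholders, not a proposal:

```ts
// Hypothetical disclosure tiers: each tier names the categories of context
// the agent is allowed to share with a particular friend's agent.
const tiers = {
  narrow: ["current public projects", "general interests"],
  standard: ["current public projects", "general interests", "open questions", "where I'm stuck"],
  deep: ["current public projects", "general interests", "open questions", "where I'm stuck",
         "unpublished work in progress", "personal context"],
} as const;

// Trust is mutual and revocable: either human can narrow the tier at any time,
// and the agents honor the narrower of the two sides.
function effectiveTier(mine: keyof typeof tiers, theirs: keyof typeof tiers): keyof typeof tiers {
  const order: (keyof typeof tiers)[] = ["narrow", "standard", "deep"];
  return order[Math.min(order.indexOf(mine), order.indexOf(theirs))];
}
```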
High signal, low noise
The system's value is in what it doesn't surface. Most of the time, for most pairs, the right output is silence. The bar for surfacing a recommendation should be high: not just "these two people have overlapping interests" but "there is a timely, actionable reason for these two people to talk right now." If the system generates noise, people will ignore it, and then it's worthless.
Auditable by the humans
Every message your agent sends, every message it receives, every recommendation it makes -- all of it should be reviewable by you. The agent acts on your behalf, but you remain the authority. If the agent says something you wouldn't have said, you should be able to see that and correct it.
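One hypothetical way to keep this reviewable is an append-only log the human can read and annotate; the shape below is a sketch, not a requirement:

```ts
// Hypothetical audit record: one entry per message sent, message received,
// or recommendation surfaced, kept locally where the human can read it.
interface AuditEntry {
  at: string;                                   // ISO 8601 timestamp
  direction: "sent" | "received" | "surfaced";  // what the agent did
  counterpart: string;                          // which friend's agent was involved
  summary: string;                              // one-line gist for quick review
  raw: string;                                  // full text, for when the gist isn't enough
  humanNote?: string;                           // correction or approval from the human
}
```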
What We're Building
Right now, this is an experiment between friends. The first step is one pair of agents having one conversation and producing one recommendation. If that recommendation is useful -- if it leads to a human conversation that wouldn't have happened otherwise -- then we'll bring in more friends and start writing down the conventions that worked.
The goal is not a product. The goal is a set of conventions that anyone can implement, a community of people who find it useful, and a process for evolving both.
How to Participate
If you're reading this because a friend sent it to you:
- You need an AI agent. This means you're already using agentic AI tools (Claude, GPT, or similar) in your daily work. Your agent needs enough context about your life and work to have a meaningful conversation on your behalf.
- You need an endpoint. Somewhere your agent can receive messages from friends' agents. This can be as simple as a webhook URL on a free-tier cloud service (see the sketch after this list).
- You need willingness to experiment. This is early. Things will be awkward. Conventions will change. The first version will be too verbose or too terse or will surface the wrong things. That's expected. The point is to learn by doing.
- You need at least one friend who's also doing this. The system is pairwise. It takes two.
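As mentioned above, the endpoint can be very small. Here is a minimal sketch of a receiving webhook written as a Cloudflare Worker-style fetch handler; the path and payload shape are assumptions, not conventions:

```ts
// Hypothetical minimal endpoint: accept a message from a friend's agent,
// acknowledge receipt, and leave processing to the local agent.
export default {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST" || new URL(request.url).pathname !== "/interface/inbox") {
      return new Response("not found", { status: 404 });
    }
    // Assumed envelope shape; see the envelope sketch earlier in this document.
    const envelope = (await request.json()) as { from?: string; sentAt?: string };
    // A real agent would queue this for processing and add it to the audit log.
    console.log("message from", envelope.from, "at", envelope.sentAt);
    return new Response("received", { status: 202 });
  },
};
```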
If this sounds interesting, let's talk -- the human way, for now -- and figure out what a first exchange between our agents might look like.
Inter-Face is an experiment by Pete Kaminski and friends. The name may change. The idea is evolving. This document reflects where we are as of February 2026.