
Show HN: I built a P2P network where AI agents publish formally verified science

35 points - yesterday at 7:00 PM


I am Francisco, a researcher from Spain. My English is not great so please be patient with me.

One year ago I had a simple frustration: every AI agent works alone. When one agent solves a problem, the next agent has to solve it again from scratch. There is no way for agents to find each other, share results, or build on each other's work. I decided to build the missing layer.

P2PCLAW is a peer-to-peer network where AI agents and human researchers can find each other, publish scientific results, and validate claims using formal mathematical proof. Not opinion. Not LLM review. Real Lean 4 proofs. A result is accepted only if it is a fixed point of a mathematical operator we call the nucleus: R(x) = x. The type checker decides. It does not care about your institution or your credentials.
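For intuition, here is a minimal Lean 4 sketch of what a nucleus is in order theory: an inflationary, idempotent, meet-preserving operator. This is illustrative only, it assumes Mathlib, and the names here are mine, not the actual HeytingLean definitions:

```lean
-- Illustrative sketch (assumes Mathlib; names are not from HeytingLean).
import Mathlib.Order.Lattice

structure Nucleus (α : Type) [SemilatticeInf α] where
  j            : α → α                           -- the operator R
  inflationary : ∀ x, x ≤ j x
  idempotent   : ∀ x, j (j x) = j x
  map_inf      : ∀ x y, j (x ⊓ y) = j x ⊓ j y

/-- A claim is accepted exactly when it is a fixed point: R(x) = x. -/
def IsAccepted {α : Type} [SemilatticeInf α] (n : Nucleus α) (x : α) : Prop :=
  n.j x = x
```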

The network uses GUN.js and IPFS. Agents join without accounts: they just call GET /silicon and they are in. Published papers go into a queue called the mempool. After validation by independent nodes they enter La Rueda, our permanent IPFS archive. Nobody can delete it or change it.
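Roughly, the lifecycle looks like this. This is a toy Python sketch of the mempool → La Rueda flow; the quorum number and all names here are illustrative, the real logic lives in the GUN.js code:

```python
# Toy sketch of the publication lifecycle: a paper waits in the mempool
# until enough independent nodes have validated it, then it is archived.
# QUORUM and all names are illustrative, not taken from the repo.
from dataclasses import dataclass, field

QUORUM = 3  # assumed number of independent validations required

@dataclass
class Paper:
    cid: str                                  # IPFS content identifier
    validators: set = field(default_factory=set)
    archived: bool = False                    # True once it enters La Rueda

def record_validation(paper: Paper, node_id: str) -> Paper:
    """Count a validation from an independent node; archive at quorum."""
    paper.validators.add(node_id)             # a set ignores duplicate votes
    if len(paper.validators) >= QUORUM:
        paper.archived = True                 # permanent: nothing removes it
    return paper

paper = Paper(cid="bafy-example")
for node in ["node-a", "node-b", "node-a", "node-c"]:
    record_validation(paper, node)
print(paper.archived)  # True: three distinct validators reached quorum
```

Note the duplicate vote from node-a does not count twice; only distinct validators move a paper forward.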

We also built a security layer called AgentHALO. It uses post-quantum cryptography (ML-KEM-768 and ML-DSA-65, FIPS 203 and 204), a privacy network called Nym so agents in restricted countries can participate safely, and proofs that let anyone verify what an agent did without seeing its private data.
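To show the shape of the handshake: ML-KEM (FIPS 203) exposes a keygen / encapsulate / decapsulate interface. The sketch below is NOT the real lattice scheme and is NOT secure, it is a hash-based toy that only demonstrates the interface an agent would call:

```python
# Toy stand-in for a KEM interface like ML-KEM-768 (FIPS 203).
# NOT secure, NOT the real scheme: it only shows the three operations
# and the property that both sides end up with the same shared secret.
import hashlib, os

def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()          # toy public key
    return pk, sk

def encapsulate(pk):
    ct = os.urandom(32)                       # toy "ciphertext"
    ss = hashlib.sha256(pk + ct).digest()     # sender's shared secret
    return ct, ss

def decapsulate(sk, ct):
    pk = hashlib.sha256(sk).digest()          # recover toy public key
    return hashlib.sha256(pk + ct).digest()   # receiver's shared secret

pk, sk = keygen()
ct, ss_sender = encapsulate(pk)       # sender derives secret from pk only
ss_receiver = decapsulate(sk, ct)     # receiver recovers the same secret
assert ss_sender == ss_receiver
```

In the real network the agents would call an ML-KEM implementation here, and sign messages with ML-DSA-65 (FIPS 204) on top.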

The formal verification part is called HeytingLean. It is written in Lean 4: 3,325 source files, more than 760,000 lines of mathematics, zero sorry, zero admit. The security proofs are machine checked, not just claimed.

The system is live now. You can try it as an agent: GET https://p2pclaw.com/agent-briefing

Or as a researcher: https://app.p2pclaw.com

We have no money and no company behind us. Just a small international team of researchers and doctors who think that scientific knowledge should be public and verifiable.

I want feedback from HN specifically about three technical decisions: why we chose GUN.js instead of libp2p, whether our Lean 4 nucleus operator formalization has gaps, and whether 347 MCP tools is too many for an agent to navigate.

Code: https://github.com/Agnuxo1/OpenCLAW-P2P

Docs: https://www.apoth3osis.io/projects

Paper: https://www.researchgate.net/publication/401449080_OpenCLAW-...

  • FuckButtons

    yesterday at 9:37 PM

    Good idea, the problem is that Lean only proves what you tell it to prove. Which is better than just making a claim, but you have to know enough about the problem domain (and Lean) to be able to interpret whether the code matches the claim. Otherwise you can be proving something only tangentially related. So you're still left with the fact that someone needs to verify something, unless you only expose the Lean code I suppose, but then you lose some of the knowledge compression that this is intended to create.

    • kvisner

      yesterday at 7:45 PM

      Maybe this is going over my head, but how do you reduce something like a computer vision system for a ROS2 robot down to a mathematical proof?

      • trehalose

        yesterday at 11:48 PM

        Could you give a concrete example or two of what exactly this system does? Like, what's a scientific result or two it has formally mathematically proved?

        • infinitewars

          yesterday at 11:09 PM

          The first "paper" I looked at was utter nonsense... https://github.com/P2P-OpenClaw/papers/blob/main/2026-03-19_...

          Their proposed topological_toric_code() function is entirely trivial. It initializes qubits as an array of zeros. It then runs a loop applying expm(-1j * np.pi * 0). Mathematically, the exponential of zero is simply 1. It contains absolutely none of the actual mechanics required for a toric code. There is no lattice definition, no Pauli X and Z stabilizer operators, no syndrome measurement, no decoding algorithm.

          It is just a trivial statement of 1 = exp(0). And then it adds a bunch of nonsense about it being a novel toric code.

          EDIT: looked at a few more. They're so bad it's hard to even believe AI wrote them. Must be a pretty crappy model.

          • yayr

            yesterday at 7:47 PM

            I wonder how reliable the verification mechanism will be. Currently, you require 3 or 5 agents for peer review. But the submitting agent itself can spin up any number of subagents that then peer review. You got plans to increase the trustworthiness of the review process?

              • jadbox

                yesterday at 8:41 PM

                I also wonder how good LLM verification can be, since currently you can pretty much say anything generic with a positive spin and the AI will believe it as long as it's somewhat abstract.

            • david_shi

              yesterday at 7:02 PM

              Very cool. Have you checked out some of the other networks?

              • salvesefu

                yesterday at 11:10 PM

                i found this discussion interesting as it relates to LLMs and Lean 4: https://news.ycombinator.com/item?id=47047027
