Show HN: Vibe Kanban – Kanban board to manage your AI coding agents
141 points - today at 3:08 PM
Hey HN! I'm Louis, one of the creators of Vibe Kanban.
We started working on this a few weeks ago. Personally, I was feeling pretty useless working synchronously with coding agents. The 2-5 minutes that they take to complete their work often led me to distraction and doomscrolling.
But there's plenty of productive work that we (human engineers) could be doing in that time, especially if we run coding agents in the background and parallelise them.
Vibe Kanban lets you effortlessly spin up multiple coding agents. While some agents handle tasks in the background, you can focus on planning future work or reviewing completed tasks.
After a few weeks of internal dogfooding and sharing it with friends, we've now open-sourced Vibe Kanban, and it's stable enough for day-to-day use.
I'd love to hear your feedback. Feel free to open an issue on the GitHub repo and we'll respond ASAP.
Hmm, analytics appear to default to enabled: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...
It is harvesting email addresses and github usernames: https://github.com/BloopAI/vibe-kanban/blob/609f9c4f9e989b59...
Then it seems to track every time you start/finish/merge/attempt a task, and every time you run a dev server. Including what executors you are using (I think this means "claude code" or the like), whether attempts succeeded or not and their exit codes, and various booleans like whether or not a project is an existing one, or whether or not you've set up scripts to run with it.
This really strikes me as something that should be, and in many jurisdictions legally must be, opt-in.
That's fair feedback. I have a PR with a very clear opt-in here: https://github.com/BloopAI/vibe-kanban/pull/146
I will leave this open for comments for the next hour and then merge.
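For anyone curious what an explicit opt-in gate looks like in practice, the shape is roughly this (an illustrative Python sketch with made-up names, not the actual diff - that's in the PR above):

```python
# Illustrative sketch only (not the actual vibe-kanban code): the point is that
# no event leaves the machine until the user has explicitly said yes.
import json
from pathlib import Path

CONFIG_PATH = Path.home() / ".config" / "example-tool" / "telemetry.json"  # hypothetical path

def telemetry_allowed() -> bool:
    """True only if the user previously and explicitly opted in."""
    try:
        return json.loads(CONFIG_PATH.read_text()).get("enabled") is True
    except (FileNotFoundError, json.JSONDecodeError):
        return False  # no recorded consent means no telemetry

def ask_for_consent() -> bool:
    """First-run prompt; anything other than an explicit 'y' counts as a no."""
    answer = input("Share anonymous usage analytics? [y/N] ").strip().lower()
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(json.dumps({"enabled": answer == "y"}))
    return answer == "y"

def track(event: str, properties: dict) -> None:
    if not telemetry_allowed():
        return  # drop the event entirely; nothing is sent or buffered
    # ... send the event to the analytics backend here ...
```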
Nice, I vote for merging it :).
It really doesn't hurt to be honest about this and ask up-front. This is clear enough and benign enough that I'd actually be happy to opt-in.
Merged and building, thanks for bearing with us
Thanks, really appreciate the heads up. I put devs who do this on a personal black list for life.
I think also that this would be better as an mcp tool / resource. Let the model operate and query it as needed.
willsmith72
today at 8:17 PM
It's the email/username harvesting that you mean right? Or do people also have something against anonymised product analytics?
I have something against opt-out analytics over TCP/IP or UDP/IP period, because they aren't anonymized, they include an IP address by virtue of the protocol.
But to be clear, I only posted the original complaint about the email/username harvesting (I'm not the person you responded to initially).
could you point me to which jurisdictions require analytics opt-in, especially for open source devtools? that's not actually something i've seen as a legal requirement, more a community preference.
e.g. we all know about EU website cookie banners, but i'm more ignorant about devtools/CLIs sending back telemetry. any actual laws cited here would update me significantly
I mean, you've named one big one already with the GDPR, which covers a significant fraction of the world - and unlike your average analytics, "username and email address" sounds unquestionably like identifying/personal information.
Where I live I think this would violate PIPEDA, the Canadian privacy law that covers all business that do business in any Canadian province/territory other than BC/Alberta/Quebec (which all have similar laws).
There's generally no exception in these for "open source devtools" - laws are typically still laws even if you release something for free. The Canadian version has an exception for entirely non-commercial organizations (though I don't think the GDPR does), but Bloop AI appears to be a commercial organization, so it wouldn't apply. It also contains an exception for business contact information - but as I understand it, that is not interpreted broadly enough to cover random developers' email addresses just because they happen to be used for a potentially personal github account.
Disclaimer: Not a lawyer. You should probably consult a lawyer in the relevant jurisdiction (i.e. all of them) if it actually matters to you.
generalizations
today at 8:53 PM
> GDPR covering a significant fraction of the world
> privacy law that covers all business that do business in any Canadian province
A random group of people uploaded free software source code and said 'hey world, try this out'. I wish the GDPR and the PIPEDA the best of luck in keeping people from doing that. (Not to actually defend the telemetry, tbh that's kinda sleazy imo.)
I mean, those are merely the two countries' privacy laws I'm most familiar with. The general principle of "no, you can't just steal people's personal information" is not something unique to the ~550 million people the laws I cited cover.
And the laws don't prevent you from uploading "random" software and saying "try this". They prevent you from uploading spyware and saying "try this". Edit: Nor does the Canadian one cover any random group of people; it covers commercial entities, which Bloop AI appears to be.
analytics stuff is fine, but the email/github username harvesting appears to be illegal, especially if it's done without notifying the user?
great catch, many open source projects appear to be just elaborate lead gen tools these days.
fork, task claude to remove all github dependence, build.
I did this locally to try it out :) Also stubbed out the telemetry and added jj support. "Personalizing" software like this is definitely one of LLMs' superpowers.
I'm not particularly inclined to publish it because I don't want to associate myself with a project harvesting emails like this.
yes, i was just doing/thinking the same, it was an interesting experience to sculpt a somewhat complex codebase to my needs in minutes.
hsbauauvhabzb
today at 9:15 PM
Use a telemetry backed tool to remove telemetry from another telemetry backed tool?
There's telemetry you consent to, and telemetry you don't. Just because I'm fine with a tool like Claude Code collecting some telemetry, doesn't mean I'm fine with a different party collecting telemetry - and the two products being used together doesn't change it. It's not naive, it's simply my right.
it came to mind first, you're free to use whatever flavour of LLM f̶l̶o̶a̶t̶s̶ ̶y̶o̶u̶r̶ ̶b̶o̶a̶t̶ vibes your code.
hsbauauvhabzb
today at 9:48 PM
That doesn’t change the naïvety of the response.
I built something similar for my own workflow. Works okay. The hard part is as you scale, you end up with compounded false affirmatives. Model adds some fallback mechanism that makes it work, tests pass, etc. The nice part is you can ask models to review the code from others, call out fallbacks, hard coding, stuff like that. It does a good job at identifying buried bodies. But if you dig up a buried body, I'd manually confirm it was properly disposed of as the models usually hid the body in the first place because they needed some input they didn't have, got confused or ran into an issue.
We need something like a kitchen brigade in software - one who writes the vibe code tickets (Chef de Vibe), one who reviews the vibe code (Sous-Vibe), one who oversees the agents and restarts them if they get hung up (Agent de Station). We could theoretically smash a thousand tickets a day with this principle
ggordonhall
today at 4:11 PM
Completely agree!
You can actually use a coding agent to create tickets from within Vibe Kanban. Add the Vibe Kanban MCP server (from MCP settings) and ask the agent to plan a task and write tickets.
I used this last week and it's excellent - it feels like the same productivity increase as when I first used Cursor.
Are you thinking of doing a hosted version so I can have my team collab on it?
And I found I could open lots of PRs at once but they often need to be dependent on each other - and then I want to make a change to the first one. How are you thinking of better managing that flow?
Yeah I think giving the option to move execution to the cloud makes a lot of sense, I already find my macbook slowing down after 4 concurrent runs, mainly rustc.
Also now we're pushing many more PRs think we defo need better ways to stack and review work. Will look into this asap
hddbbdbfnfdk
today at 5:01 PM
Very productive increase sirs! Whole team well promoted.
If you use GitLab, you can use the command line "glab" tool to have agents work from the built-in kanban. They can open and close tasks, start MRs off of them, etc. It's not as integrated as this tool, but it works well with a mix of humans and robots.
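A rough sketch of the loop, assuming glab is installed and already authenticated (Python purely for illustration - any agent or wrapper script can shell out the same way):

```python
# Rough sketch: let the agent drive the GitLab board via the glab CLI.
import subprocess

def sh(*args: str) -> str:
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# The agent reads open issues from the board, works on one, then wraps up.
print(sh("glab", "issue", "list"))       # pick a ticket from the built-in board
# ... agent edits code on a branch for, say, issue #42 ...
sh("glab", "mr", "create", "--fill")     # open an MR; title/description taken from commits
sh("glab", "issue", "close", "42")       # move the card off the board
```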
Interesting, hadn't heard of that. Would better GitLab support be useful in Vibe Kanban?
skeeter2020
today at 7:53 PM
If I multiply my 100x productivity gains from using AI with your 10x increase what am I supposed to do with all that free time?
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks
This feels like much too broad a statement to be true.
> AI coding agents are increasingly writing the world's code and human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks.
This tactic is called "assuming the sale", i.e. make a statement as if it were already true, and put the burden on the reader to negate it. The majority of us are too scared of what others think, and go along by default. It is related to the FOMO tactic, in that the two can be used in conjunction for a double-whammy. For example, the statement above could have ended with: "and everyone is now using agents to increase their productivity, and if you aren't using them, you are being left behind"
Glad you stood up to challenge it.
skeeter2020
today at 7:55 PM
I'll add - often not adding the last part is even MORE powerful: "and everyone is now using agents to increase their productivity..."
lazarus01
today at 5:37 PM
> human engineers now spend the majority of their time planning, reviewing, and orchestrating tasks
> > This feels like much too broad a statement to be true.
This is just what they wish to be true.
I wonder how demographics (specifically age) tie into this. I'm well into my 30s and I found that statement absurd, but perhaps it is basically universally true among recent grads.
bigfishrunning
today at 8:36 PM
Maybe it is -- the next few years are going to get really rough for them; they'll develop no skills outside of AI.
I wouldn't say it's the majority of my time but the most utility I've got out of AI is using MCP to deal with the boring shit: update my jira tickets to in progress/in review, read feedback on a PR and address the trivial shit, check the CI pipeline and make it pass if it failed, and write commits in a consistent, descriptive way.
It's a lot more hands on when you try to write code with it, which I still try out, but it's only because I know exactly what the solution is and I'm just walking the agent towards it and improving how I write my prompts. It's slower than doing it myself in many cases.
I read that too, and these are the kind of statements that really tell you what happens when a profession embraces mediocrity and accepts something as crass as "Vibe-coding", which is somehow going to change "software engineering" even when adding so-called "AI agents" - which makes it worse.
All this cargo-culting is done without realizing that more code means more security issues, technical debt, more time for humans to review the mess and *especially* more testing.
Once again, Vibe-coding is not software engineering.
skeeter2020
today at 7:56 PM
and I came into the industry when software was not engineering. Still think this is mostly true (you can call yourself an engineer when you insure your product)
The permissions this asks for feel kinda insane to me. Why does a kanban board need to see the code or my deploy keys among other things?
I would assume because it was vibe coded.
More generously I'd assume because
- It's an early prototype so they haven't dealt with fine grained permissions
- They really do want to do things like access private repos with it themselves
- They really do want the ability to do things like check out code, create PRs, etc... and that involves a lot of permissions.
skeeter2020
today at 7:51 PM
every one of your "more generous" assumptions is the opposite of what should be their process. It's the equivalent of "vacuum up as much data as possible and then decide what to do with it". Not acceptable.
It's "vacuuming" that data in the sense of giving API access to a tool that runs on your local computer, that seems acceptable enough for me in the early stages of developing a tool.
The other privacy complaints I have regarding them harvesting usernames and email addresses... not so much.
TeMPOraL
today at 10:41 PM
Because it's not "a kanban board"? It's a coding agent orchestrator that's made in the shape of a Kanban board.
You might be right that this app asks for excessively broad privileges, but your case would be much stronger if it wasn't backed by an absurdly disingenuous argument.
jackbridger
today at 7:53 PM
This is interesting. I'm using Claude Code a fair bit and have found writing specs to be more effective than prompting, and this feels closer to that. I can see the appeal of this for simple tasks now, and maybe increasingly bigger tasks as models get better.
Very much a bet that things are going to get much much better very quickly
This is really cool. I used Vibe Kanban with Amp to update some of our docs and UI components, and it was great.
I would say conservatively that 80% of Vibe Kanban has been built by Amp
deepdarkforest
today at 3:33 PM
This is a launch by a YC company that converts enterprise COBOL code into Java. Maybe it's my fault, but i tried every single coding agent with a variety of similar tools, and whenever i try to parallelize, they clash while editing files simultaneously, i lose mental context of what's going on, they rewrite tests etc.
It's chaos. That's fine if you are vibe coding an unimportant nextjs/vercel demo, but i'm really sceptical of this stance that you should be proud of how abstracted you are from the code. A kanban board to just shoot off as many tasks as possible and quickly read over the PRs is crazy to me. If you want to be seen as a serious company that should be allowed to write enterprise code, imo this path is so risky. I see this in quite a few podcasts, tweets etc. - people bragging about how abstracted they are from their own product. Again, maybe i am missing something, but all of this github copilot/just reviewing like 10 coding agents' PRs just introduces so much noise and slop. Is it really what you want your image to be as a code company?
unshavedyak
today at 3:42 PM
> Maybe it's my fault, but i tried every single coding agent with a variety of similar tools and whenever i try to parallelize, they clash while editing files simultaneously, i lose mental context of what's going on, they rewrite tests etc.
Fwiw Claude suggests using different git workspaces for your agents. This would entirely solve the clashing, though they may still conflict and need normal git conflict resolves of course.
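Concretely, that could be one git worktree and branch per task, something like this rough sketch (Python just for illustration; the git commands are the real point):

```python
# Rough sketch: one worktree and branch per task, so parallel agents never share
# a checkout.
import subprocess

def worktree_for(task_id: str, repo_root: str = ".") -> str:
    path = f"../agent-{task_id}"
    branch = f"agent/{task_id}"
    subprocess.run(["git", "worktree", "add", "-b", branch, path],
                   cwd=repo_root, check=True)
    return path  # point the coding agent's working directory here

# Once the branch is merged or abandoned:
#   git worktree remove ../agent-<task_id>
```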
Theoretically that would work fine, as it would be just like two people working on different branches/repos/etc.
I've not tried that though. AI generates way too much code for me to review as it is, several subtasks working concurrently would be overwhelming for me.
This is a bet on a future where code is increasingly written by AI and we as human engineers need the best tools to review that work, catch issues and uphold quality.
deepdarkforest
today at 4:03 PM
I don't disagree, but the current sentiment i was referring to seems to be "maximize AI code generation with tools helping you to do that" rather than "prioritize code quality over AI leverage, even if it means limiting AI use somewhat."
codingdave
today at 4:41 PM
It is not just chaos, it is an unwanted product. Don't misunderstand - people would love this product if it works. But AI cannot do this yet. Products like this are built on an assumption that AI has matured enough to actually succeed at all tasks. But that simply isn't true. Vibe coding is still slop.
AI needs to do every single step of this type of flow to an acceptable quality level, with high standards on that definition of "acceptable", and then you could bring all the workflow together. But doing the workflow first and assuming quality will catch up later is just asking for a pile of rejections when you try to sell it.
I'm not just making this up, either... I've seen and talked to numerous people over the last couple years who all came up with similar ideas. Some even did have workable prototypes running. And they had sales from the mom/friends/family connections, but when they tried to get "real" sales, they hit walls.
_jayhack_
today at 3:52 PM
Very cool and interesting project. Ideas like this are a threat to traditionally-conceived project management platforms like Linear; that being said, Linear and others (Monday, ClickUp, etc.) are pushing aggressively into UX built for human/AI collaboration. I guess the question is how quickly they can execute and how many novel features are required to properly bring AI into the human project workspace
Cheers! Smaller teams, more infrastructure, more testing, tasks requiring review in minutes not days - the features this new world needs are just totally different from what legacy PM tools are optimised for, and from who they have to continue to serve.
That's an interesting idea and looks neat! I have my own developed agent running locally in containers, and currently use GitHub issues+pull requests for coordinating all the asynchronous work. Do you have any pointers on the approach I should take if I basically have something like a service already running for this, and I just want to hook up your UI to use it instead? Just some broad pointers on what would be required would be most helpful already!
randysalami
today at 3:24 PM
I tried to build something similar but in a peer-to-peer fashion and for humans + AI. It was supposed to be like a Kanban board that could scale to any team size and use Planning AI to ingest/match/monitor work realtime across teams and agents. I ran out of steam and couldn’t get funding but here is the prototype version:
https://postwork-alpha.vercel.app/
User: maryann.biaggioli@astarconsulting.com
Pass: Test1234!
I never got to a point where I actually integrated AI agents (weren’t as good at the time) but it’s cool to see it working in the real world!
I definitely don't feel like the models are reliable enough that I'd be more productive running them in parallel like this yet, but I can see a future where I want this.
Their reliability probably varies a lot depending on what you are using them for - so maybe I'm just using them in more difficult (for claude) domains.
Yes, I generally cherry-pick the easier 50% of my backlog and work on those with Vibe Kanban; the other 50% is still manual, or happens with a coding agent but with a human in the loop.
This is a bet that coding agents will continue to get better, and this feels like the right time to try and figure out the interface.
Do you think you will keep it free or can you see a business model developing around it? If so, what do you think it would be? / How would you split paid tiers vs free users? Not a big deal to me...!! But I'm curious how one might commercialise these types of free/open source projects
I could see there being a long term free offering that doesn't cost us compute or tokens, and probably some other offerings that actually do use resources and would make sense to build a business around.
But that's not a today problem, we just want to absorb feedback and iterate until we build the ultimate tool for working with these coding agents.
How does this compare with Backlog.md? [1]
[1]: https://news.ycombinator.com/item?id=44483530
ggordonhall
today at 3:28 PM
Hi, co-author of this project here!
In Vibe Kanban you can directly interact with Claude Code from within the Kanban board. E.g. you can write out a ticket, hit a button to run it locally with Claude Code/Gemini etc., watch its responses, and then review any diffs that it generated.
Thank you! I was taking a look at the docs, and I'm going to play with it later today. Thanks for sharing and congrats on shipping!
I'm not sure if Kanban is the right UI for what this is supposed to be for, just a gut feeling. Curious what other UI is more appropriate for this.
Kanban seems like a good starting place, but I broadly agree that the interface for human<>agent collaboration will need to be different from the default interface we have today with legacy PM tools.
Things move across the board so quickly when AI is doing the work that ~50% of the columns seem pretty redundant.
peadarohaodha
today at 5:22 PM
Would love to have a good selection of keyboard shortcuts! Power Vibe Kanban
Good point - there are a few random ones already but I will ask a coding agent to implement this comprehensively
Why do you need to "manage" your coding agents like they are people? How long does it take them to do a task once prompted, in the background?
Don't you just prompt and immediately review the result?
Currently, Claude Code is pretty slow. I've definitely had it take >15 minutes on the (faster than Opus!) Sonnet model just thinking and writing code, without feedback or running long-lasting tools. I expect this to change given that companies like Cerebras exist and seem to know how to generate tokens in much less time, but the current state of the art is what it is.
Always - if you're going to pipe the result of some slow process back to them (like building a giant C++ project that takes minutes/hours, or running a huge set of tests...)... it's going to be slow.
For me, the average task takes a coding agent 2-5 minutes to complete. A slightly annoying amount of time as I'm prone to getting distracted while I wait.
This gives me something to do in that time.
My guess is time to complete a task will oscillate - going up as we give agents more complex tasks to work on, and going down with LLM performance improvement.
I don't think smart ones are that fast. They can work for hours if you have the budget.
Interesting, I didn't know that.
skeptrune
today at 3:33 PM
I remember the original Bloop search engine!
Some kind of UI or management system like this seems like it would be really useful at a high level. Will have to give it a run.
Nostalgia!
Let me know if you hit any issues, we're turning feedback around pretty quickly
Why does this need GitHub auth? It asks for unlimited private access to one's repos. This is a hard NO from me.
It can open GitHub PRs from the interface and there's tons more info we want to pull in like the result of CI checks
Can you use this with the Claude Code Github Action?
drbojingle
today at 7:06 PM
I'm building one of these too :D
> Get 10x more out of...
So you're saying it goes up to 11x?
skeeter2020
today at 7:57 PM
AI was already giving you 100x so we're up to 1000x
Great ChatGPT question XD
amirhirsch
today at 5:08 PM
great. now i have to spend my morning using vibe kanban to make a tui for vibe kanban with textual. i'll submit a pr as soon as it's finished.
It says in the linked readme to not open pull requests without discussing it first.
shibeprime
today at 3:36 PM
Now we just let a CTO agent create the cards, review, and merge the PRs?
ggordonhall
today at 3:44 PM
We have this! Vibe Kanban includes its own MCP server that you can use to create tickets within the Kanban board.
Click on the MCP Servers tab, then hit "Add Vibe Kanban MCP". Then create and start a "planning" ticket like "Plan a migration from AWS to Azure and create detailed tickets for each step along the way". Sit back and watch the cards roll in!
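Under the hood it's just an MCP server that exposes ticket operations as tools. A toy sketch of the shape, using the Python MCP SDK (the tool name and fields here are illustrative, not our exact schema):

```python
# Toy sketch of a ticket-creating MCP tool with the Python MCP SDK's FastMCP
# helper. The tool name and fields are illustrative, not the actual schema.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("kanban")

@mcp.tool()
def create_task(project: str, title: str, description: str) -> str:
    """Create a ticket on the board and return a confirmation."""
    # ... persist the ticket in the board's database here ...
    return f"created '{title}' in {project}"

if __name__ == "__main__":
    mcp.run()  # the coding agent connects to this server and calls create_task
```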
Will do more to document this better soon :)