Show HN: I built a more productive way to manage AI chats
154 points - last Friday at 8:46 PM
Sourcenpollock
yesterday at 2:45 AM
Here's how I would market this:
Create "packages" of context for popular API & Libraries - make them freely available via public url
Keep the packages up to date. They'll be newer than the cutoff date for many models, and they'll be cleaner than the data a model slurps in using web search.
Voila, you're now the trusted repository of context for the developer community.
You'll get a lot of inbound traffic/leads, and ideas for tangential use cases with commercial potential.
ar-jan
yesterday at 12:08 PM
https://context7.com/ is just that, in the form of an MCP server.
tapeo
yesterday at 12:18 PM
You can't start fresh chats with updated context, and you still need to create multiple chats in your preferred chat service and copy-paste data. But this is SO good to use with a development environment through docs or MCP! Thanks for sharing
tapeo
yesterday at 6:42 AM
I love this, it can be very useful to have a ready-to-use library of context data, and at the same time it's a perfect way to bring in new users. Thanks so much.
hoerzu
yesterday at 6:39 AM
To keep it free and avoid forwarding credentials, I built an alternative for all-in-one chatting, with no auth and access to a search API through the web:
https://llmcouncil.github.io/llmcouncil/
Provides a simple interface to chat with Gemini, Claude, Grok, OpenAI, and DeepSeek in parallel
lagniappe
yesterday at 3:43 AM
Of all the suggestions here, this is the one.
deepdarkforest
yesterday at 7:59 AM
Wouldn't that just be a third-party llms.txt?
wewewedxfgdf
last Friday at 9:59 PM
I thought about building something along these lines (not the same but vaguely similar).
Then Gemini AI Studio came along with a 1 million token window and allowed me to upload zip files of my entire code base and I lost interest in my own thing.
douglasisshiny
yesterday at 11:04 AM
Is it not a bit weird to freely give away your entire code base (I assume it's personal, not your company's, but maybe I'm wrong) to an entity like Google?
myflash13
yesterday at 1:46 PM
As a business owner who uses Cursor, this is a real risk that I worry about (third parties stealing my code). However, the massive productivity benefit of having access to AI tools far outweighs the risk of them copying my business based on the code alone. Besides, AI is making code less and less valuable. My code is not the moat -- the hard part is the network, traction, brand, distribution, etc.
sampullman
yesterday at 11:55 AM
How common is it to have a personal project that isn't open source? Probably more common than I think, but it seems like a foreign concept to me.
Either my code isn't commercialized so I don't mind "giving" it away, or it is commercialized but wouldn't be safe from a clean-room implementation anyway. Isn't that what a bigco would do if they really wanted to steal your idea?
tapeo
last Friday at 10:09 PM
Yes, the long context is complementary. In other chat services like Gemini you have to rewrite that base context every time for each new fresh chat, plus they lack specific data-import tools and project management
TZubiri
last Friday at 10:09 PM
It technically handles 1M tokens, but if you ask it questions it's obvious that it's too much to handle.
Just upload a novel and ask it questions; you'll see how it botches simple stuff
icelancer
last Friday at 10:35 PM
Easiest way to prove it can't handle the full context in reality is to upload a one hour documentary movie with audio and ask it to write timestamps of chapters/critical moments. It can't handle this beyond 10 minutes even remotely reliably.
wewewedxfgdf
last Friday at 10:12 PM
Good or bad, it's a zillion times better than Claude or ChatGPT, where you can't even upload a zip file.
zwaps
yesterday at 5:34 AM
If you want this in numbers, check the NoLiMa benchmark
golfer
yesterday at 1:46 PM
Seems like this needs to be updated. Lots of newer models not on their list.
jeswin
yesterday at 4:50 AM
An agentic flow can solve this within an existing UI/app; I already use such a workflow when I have to bring in project documentation. That will be your competition.
Since it's a commercial product and feedback can be useful: people would generally be hesitant to leave their existing apps if there's a workaround. There's a certain stickiness to them, even ChatGPT. Personally I use self-hosted LibreChat, and the history and additional features it provides are important to me.
tapeo
yesterday at 6:38 AM
I appreciate the feedback!
Yes, I will work on making context management more productive, with a ready-to-use service and the ability to switch from other services easily.
owebmaster
yesterday at 12:52 PM
There is a huge market of users not using any app yet.
jmcmaster
last Friday at 11:19 PM
How are you handling privacy / security / confidentiality if I upload all this data? No way I could use this for work.
tapeo
last Friday at 11:34 PM
Yes, actually that's not a trivial topic.
What I can do is be very transparent about how data is managed.
The file contents are appended to the context builder, then the context and messages are processed through OpenRouter, a provider that offers APIs for all the AI models, and the generated output (and the account data) is stored in a secured database on the MongoDB platform.
It's all defined in the privacy policy here: https://contextch.at/docs/privacy-policy/index.html
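To make that flow concrete, here's a rough sketch of what happens per message; the variable names and model id are illustrative, not the actual production code:

    import os
    import requests

    # Rough sketch: the imported files/sitemap pages are prepended as a system
    # message, then the whole conversation is forwarded to OpenRouter
    # (OpenAI-compatible chat completions API).
    OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

    def ask(context_blocks, history, user_message, model="anthropic/claude-3.5-sonnet"):
        system_prompt = "Project context:\n\n" + "\n\n".join(context_blocks)
        messages = [{"role": "system", "content": system_prompt}] + history
        messages.append({"role": "user", "content": user_message})
        resp = requests.post(
            OPENROUTER_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={"model": model, "messages": messages},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]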
iankp
last Friday at 10:44 PM
Isn't NotebookLM already exactly web and file context (a "ContextChat")?
Edit: I assume it is basically a similar product, but your differentiators are mainly that the customer gets to choose their model, and you get to write your own context-adding ergonomics (like adding links from a sitemap)?
tapeo
last Friday at 10:58 PM
Exactly, similar, plus tools to import and manage project context fast (like private GitHub repos and sitemap URLs), multiple AI models, and pay-per-use like with the APIs
imranq
last Friday at 10:00 PM
Nice idea!
I think it would be better if it was just context and not connected to any model. Think of one place where you can hook in your drive folder, GitHub, etc. and have it produce the best context for the task you want to achieve. Then users can copy that to their model or workflow of choice
tapeo
last Friday at 10:05 PM
Thank you, this could be a cool feature to add! For example, the ability to click a link that redirects to other chat services with the project base context you built and, optionally, all the messages sent up to that point
causalmodels
yesterday at 12:39 PM
This is really nice. Any chance you have conversation branching on the roadmap?
fernly
yesterday at 12:39 AM
Compare to Claude Projects?
https://www.anthropic.com/news/projects
tapeo
yesterday at 6:27 AM
From what I can see, it doesn't offer the flexibility of importing content from a detailed sitemap or private GitHub repositories in a fast way (and more tools to come).
It also doesn't offer the option to switch between different AI models, plus you have to pay a monthly subscription.
J_cst
yesterday at 11:20 AM
I use RooCode and currently find it quite effective, with the ability to switch agents and models within tasks. I recently moved from Cline to RooCode.
tapeo
yesterday at 11:27 AM
Yes, it could be an alternative if you only need it in a development environment
ta988
yesterday at 2:46 PM
10% fee over the openrouter fees. That's fees all the way down.
tapeo
yesterday at 3:00 PM
You're right. It's on the roadmap to optimize this by using the providers' APIs directly. Would you prefer different pricing, like a lifetime license?
scottward
last Friday at 10:44 PM
Cool! I was excited when I saw this and signed up.
One key thing I was hoping for was a consistent resync with source material, particularly Google Docs. Looks like I'll have to download and then upload to your app whenever they change.
Is that right? Is auto syncing in the plan?
tapeo
last Friday at 10:59 PM
Auto syncing added to the plan!
scottward
last Friday at 11:14 PM
Cool. One option is just to integrate with Make/n8n/Zapier so I could a) trigger on doc changes and then b) upload (and overwrite) the doc in your app
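Roughly this shape on the automation side (the endpoint path, auth header, and IDs here are all hypothetical, just to illustrate what the Make/n8n/Zapier step would call; no such public API is documented):

    import requests

    # Hypothetical sketch: a step triggered by a Google Doc change exports the
    # doc as text and overwrites the stored copy in the app.
    def push_doc(base_url, api_key, project_id, doc_id, doc_text):
        resp = requests.put(
            f"{base_url}/api/projects/{project_id}/files/{doc_id}",
            headers={"Authorization": f"Bearer {api_key}"},
            json={"content": doc_text},
            timeout=30,
        )
        resp.raise_for_status()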
tapeo
last Friday at 11:19 PM
Yes, this sounds very useful and productive: having project context updated based on external events. I will share it on socials when ready, thanks for the feedback!
scottward
last Friday at 11:38 PM
Sure! Email your users too - since I'm one of them I'll get the email. :)
tapeo
last Friday at 11:42 PM
Will do thanks again!
tapeo
last Friday at 8:46 PM
I tried to solve a problem of my own: copying and pasting the same starting context from chat to chat. Now I can generate the base context and start new chats from there.
ramoz
last Friday at 9:41 PM
Yes def a needed thing for power users.
You and I are going to end up competing because I'm evolving my original solution in this space, https://github.com/backnotprop/prompt-tower ... best of luck, great execution thus far.
tapeo
last Friday at 9:48 PM
Thank you, will take a look at your software, competition is always good
argestes
yesterday at 9:23 AM
This is nice. I would love to be on a mailing list for updates
artichaud1
last Friday at 9:51 PM
Love this. I will give it a try. Beautiful landing page as well.
tapeo
last Friday at 9:57 PM
Thanks so much! If you try it out, feel free to leave feedback.
esafak
yesterday at 12:29 AM
Its UX looks similar to You.com
pelagicAustral
last Friday at 10:58 PM
edit: whoops... commented on the wrong tab... nevermind, but Godspeed.
tapeo
last Friday at 11:00 PM
I appreciate it, thanks and keep building
dangus
last Friday at 9:43 PM
So now our jobs are shifting from doing work, to telling the AI to do work, so now we need management tools to better manage how we are telling the AI to do work.
I must have taken a turn to the wrong timeline.
icelancer
last Friday at 10:36 PM
Yeah. That's how it works with employees, too.
tapeo
last Friday at 9:53 PM
New tools for a new kind of work!
You give instructions as someone who can do the job themselves.
That ability will decay of course, and you'll be managing with the best of them. Eh, I mean the worst :)