Nice! Good to see some tooling in this space explicitly designed for simplicity and user-friendliness.
One practical problem to consider is the risk of those distributed bundles all ending up on one or two major cloud providers' infra because your friends happened to store them someplace that got scooped up by OneDrive, GDrive, etc. Then instead of the assumed <threshold> friends being required for recovery, your posture is subtly degraded to some smaller number of hacked cloud providers.
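To make that concrete, here's a toy sketch (not part of the tool - all names are hypothetical) of how you might estimate the *effective* threshold once you know where each custodian's share actually ended up:

```python
from collections import Counter

def effective_threshold(share_locations: dict[str, str], threshold: int) -> int:
    """share_locations maps custodian -> storage provider/medium.

    Returns the minimum number of distinct storage parties whose
    compromise could yield `threshold` shares."""
    # Providers holding the most shares are the cheapest attack targets.
    counts = sorted(Counter(share_locations.values()).values(), reverse=True)
    needed, shares = 0, 0
    for c in counts:
        needed += 1
        shares += c
        if shares >= threshold:
            return needed
    return needed  # fewer than `threshold` shares exist at all

# A 3-of-5 scheme, but three friends all sync to the same cloud provider:
locations = {
    "alice": "gdrive", "bob": "gdrive", "carol": "gdrive",
    "dave": "usb-key", "erin": "onedrive",
}
print(effective_threshold(locations, 3))  # -> 1: one provider holds 3 shares
```

The intended 3-of-5 posture has quietly collapsed to 1-of-anything for whoever compromises that one provider.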
Someone using your tool can obviously mitigate this by distributing on fixed media like USB keys (possibly multiple keys per individual, since consumer-grade units are notorious for becoming corrupted or failing over time) along with custodial instructions. Some thought about longevity is helpful here - e.g. rotating media out over the years as technology migrates (when USB drives become the new floppy disks) and testing that new browsers still load and correctly run your tool (WASM is still relatively new).
Some protocol for confirming from time to time that your friends haven't lost their shares is also prudent. I always advise that a disaster recovery plan without semi-regular drills isn't a plan; it's just hope. There's a reason militaries, first responders, disaster response agencies, etc. are always running drills.
I once designed something like this using sealed paper cards in a numbered sequence - think something like the nuclear codes you see in movies. Annually you call each custodian and have them break open the next one and read out the code, which attests that their share hasn't been lost or damaged. The routine also keeps them tuned in so they don't just stuff your stuff in an attic and forget about it, unable to find their piece when the time comes. In this context, it also happens to be a great way to dedicate some time once a year to catch up (e.g. take the opportunity to really focus on your friend in an intentional way, ask what's going on in their life, etc.).
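The card-generation side of that routine is trivial to sketch. Assuming you keep the answer sheet yourself and seal the printed codes in order, something like:

```python
import secrets

def make_cards(custodians: list[str], years: int) -> dict[str, list[str]]:
    """Returns custodian -> ordered list of one-time attestation codes.

    Print each custodian's codes on numbered cards, seal them, and keep
    this mapping as your private answer sheet."""
    return {
        name: [secrets.token_hex(4) for _ in range(years)]
        for name in custodians
    }

cards = make_cards(["alice", "bob"], years=10)
# Year 3 check-in: does Alice read back cards["alice"][2]?
```

Ten years of cards per custodian costs nothing, and reprinting when you rotate media is just another run of the script.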
The rest of my comments are overkill but maybe fun to discuss from an academic perspective.
Another edge-case risk is a flawed Shamir implementation, i.e. some years from now a bug or exploit is discovered in the library you're using for that algorithm. More sophisticated users who want to mitigate that risk can further silo their sensitive info - e.g. only include a master password and instructions in the Shamir-protected content, and put the data that password unlocks somewhere else (obviously with redundancy) protected by different safeguards. This comes at the cost of added complexity (both for maintenance and recovery).
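For discussion's sake, here's a minimal textbook Shamir sketch over a prime field showing the siloing idea: only a short master key goes through Shamir, while the bulk data lives elsewhere, encrypted under that key. A real deployment should use an audited library, not this.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, comfortably larger than a 16-byte key

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any k of which recover it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 (needs Python 3.8+ for pow(_, -1, P))."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

master_key = secrets.randbits(126)      # the only thing Shamir protects
shares = split(master_key, n=5, k=3)
assert recover(shares[:3]) == master_key
# The encrypted archive itself is stored separately (with redundancy),
# keyed by `master_key`, so a Shamir bug alone doesn't expose the data.
```

The point of the split responsibility: an attacker exploiting a flaw in the sharing layer still only learns a key, and then has to separately find and breach wherever the actual archive lives.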
Auditing to detect collusion is also worth thinking about in schemes like these (e.g. somehow watermark the decrypted output to indicate which friends' shares were used for a particular recovery - though that's probably only useful if the watermarked material is likely to be conveyed outside the group of colluders). Timelocks can also make wrench attacks less practical (though they likely require some external process).
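A rough sketch of that watermarking idea: have the recovery routine record which share IDs it consumed and bind them into the output, so a leaked recovery can be traced back to the colluding subset. Here `recover_secret` is a hypothetical stand-in for whatever Shamir combine function is in use:

```python
import hashlib
import json

def recover_with_receipt(shares: dict[int, bytes], recover_secret) -> tuple[bytes, str]:
    """Recover the secret and produce a tag naming the shares used.

    `shares` maps share ID -> share data; `recover_secret` is whatever
    combine function the scheme provides (hypothetical here)."""
    secret = recover_secret(shares)
    ids = sorted(shares)
    # A fingerprint binding this recovery to the participating shares;
    # only useful if the tagged output leaves the colluders' hands.
    receipt = hashlib.sha256(json.dumps(ids).encode() + secret).hexdigest()[:16]
    return secret, f"recovered-by-shares-{ids}-{receipt}"
```

This only deters, not prevents - colluders who control the recovery code can strip the tag - so it mostly matters when recovery runs inside tooling the colluders don't modify.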
Finally, who conducted your Security Audit? It looks to me as if someone internal (possibly with help from AI?) basically put together a bunch of checks you can run on the source code using command-line tools. There's definitely a lot of benefit to that (often the individuals closest to a system are best positioned to find weaknesses, given the time to do so), and it's nice that the commands are constructed in a way other developers are likely to understand if they want to perform their own review. But it might be a little misleading to call it an "audit", a term typically taken to mean that an outside professional agency has conducted an independent, thorough review and formally signed off on its findings.
Also, those audit steps look pretty Linux-centric (e.g. Verify Share Permissions / 0600, symlink handling). Is development intended to take place only on that platform?
Again, thanks for sharing and best of luck with your project!