
A Claude Code and Codex Skill for Deliberate Skill Development

33 points - today at 3:13 AM

Source
  • neuralkoi

    today at 8:04 AM

    I'm not familiar with Skills, but looking at the repo, I find the amount of decorative code/text overkill for what amounts to just the following prompt in a bash script (yikes) executed after a commit is run:

        {"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"[learning-opportunities-auto] The user just committed code. Per the learning-opportunities skill, consider whether this is a good moment to offer a learning exercise. If the committed work involved new files, schema changes, architectural decisions, refactors, or unfamiliar patterns, ask the user (one short sentence) if they'd like a 10-15 minute exercise. Do not start the exercise until they confirm. If they decline, note it — no more offers this session."}}
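    For anyone curious how that wires up: a PostToolUse hook is just a program that receives the tool-call event as JSON on stdin and prints JSON on stdout. A minimal sketch of such a script (my assumption of the shape, not the repo's actual code; the `jq` filter and field names are illustrative):

    ```shell
    #!/usr/bin/env bash
    # Sketch of a PostToolUse hook (an assumption, not the repo's actual script).
    # The harness pipes the tool-call event to the hook as JSON on stdin; the
    # hook prints JSON on stdout to inject extra context for the model.

    emit_learning_context() {
      # $1 is the Bash command the model just ran (tool_input.command in the event).
      case "$1" in
        *"git commit"*)
          printf '%s\n' '{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"[learning-opportunities-auto] The user just committed code. Consider offering a short learning exercise."}}'
          ;;
        *)
          printf '%s\n' '{}'   # unrelated command: no extra context
          ;;
      esac
    }

    # In the real hook this would be fed from the stdin event, e.g.:
    #   emit_learning_context "$(jq -r '.tool_input.command // empty')"
    ```

    Note that the hook only adds context; the model, not the script, still decides whether to actually offer the exercise.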

      • alexhans

        today at 8:13 AM

        Skills are just a good standard for describing repeatable workflows: they save context through progressive disclosure, make prompts shareable, and (a very underused feature) let you bound the non-deterministic parts with determinism, which could be scripts.

        Conceptually, you should treat them as incremental software instead of magic you grab from others [1]

        The killer feature is that coding harnesses tend to ship SkillBuilder agent skills, so creating skills becomes very easy and you can evolve them over time.

        I recommend you build your own for your particular pain points.

        A very simple example [2] shows what another user mentioned around "evals", so that you can actually achieve good-enough correctness for your automation.

        - [1] https://alexhans.github.io/posts/series/evals/building-agent...

        - [2] https://alexhans.github.io/posts/series/evals/sketch-to-text...
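        To make that concrete, a skill in these harnesses is roughly a directory with a SKILL.md: YAML frontmatter that tells the harness when to load it, plus instructions (and optionally scripts) that are only pulled in on demand. A minimal sketch, with illustrative names (the `scripts/validate_spec.sh` helper is hypothetical, standing in for the deterministic part):

        ```markdown
        ---
        name: sketch-to-text
        description: Convert a rough UI sketch into a structured text spec. Use when the user shares a wireframe image.
        ---

        # Sketch to text

        1. Describe each region of the sketch top to bottom.
        2. Run scripts/validate_spec.sh on the output — the deterministic check.
        3. If validation fails, revise and re-run before replying.
        ```

        The frontmatter is all the harness reads up front; the body only enters context when the skill is triggered, which is the progressive-disclosure part.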

    • aledevv

      today at 8:23 AM

      What exactly is the "adaptive dynamic textbook approach"?

      Examples?

      >Generation effect: Accepting generated code and decreasing generating one's own code can skip the active processing that builds understanding.

      Holy truth.

      • zihotki

        today at 8:01 AM

        No benchmarks or evals are present, so how do you know it produces better results than /create-skill? Naive testing doesn't provide any confidence.

          • schnitzelstoat

            today at 8:13 AM

            I think it means human skill development. It offers learning opportunities to the user.

            > When you complete architectural work (new files, schema changes, refactors), Claude offers optional 10-15 minute learning exercises grounded in evidence-based learning science. The exercises use techniques like prediction, generation, retrieval practice, and spaced repetition to provide you with semi-worked examples from across your own project work.

            Confusing name though.

        • romanoonhn

          today at 6:27 AM

          Looks interesting! I know it's easy to set up and test, but I'm currently on mobile, so I think it'd be great if there were a full interaction example to better understand how it works.

          • Mashimo

            today at 6:53 AM

            Mhh, interesting.

            I want to learn Java Spring, and will probably let AI help me / quiz me. I will take a look at the skills for inspiration.

              • ramon156

                today at 7:55 AM

                Is there a reason why building a Spring app and learning hands-on is not feasible?

                I know I sometimes get demotivated mid-way, but that also tells me it might not be worth the investment
