Interesting effort—although I can’t say the current “daily brief” inspires confidence in the AI’s credibility:
> "On the domestic front, the House passed a $26 billion spending package that funds both disaster relief and $28 billion in border enforcement"
If my math is right, that leaves -$2bn for disaster relief… but there's nothing to click through, so I can't tell where the figure came from or what exactly it's misstating.
I’m not sure what qualifies the 5 random headlines as “trending,” or what Govbase is adding to them. I’m also a little confused by the “impact analysis.” Consider, for example,
> "Congress Proposes New Cybersecurity Rules and Grants to Protect Hospitals from Cyberattacks"
It lists its highest "impact score" of "+39" for "Tribal Member," presumably because tribal hospitals are eligible for the modernization funds that supposedly will materialize after the regulations get written. But… I dunno, it's not clear to me that CISA developing cybersecurity standards is "impactful" on a tribal member's health in either sense: that it's more impactful for tribal members than for other groups, or that health cybersecurity matters more than other issues from a tribal member's presumed perspective.
The detailed score view leaves me scratching my head even more: what is the basis for the claim that "tribal members" feel more positively about this than people with chronic illnesses do?
Not to get too hung up on it, but to pull further on the thread of how the AI seems to be making specious assertions: the only mention I could find of the IHS in that bill is in the section saying "anybody remotely in the healthcare universe is eligible for these grants." They're enumerating the types of legal entity that "are healthcare," and the IHS is only name-checked because that's how one arm of the healthcare system happens to be administered. And maybe I'm wrong, maybe it is noteworthy and unusual to include them!
Either way, context is everything. The full context for any of these issues really needs a domain expert’s experience and analysis—ideally more than one of them.
Right now the AI is speaking in a credulous, factually-authoritative kind of tone about all this. It's taking the bill's text at face value, and presenting its own judgment and interpretive color as if it were fact. In statutory language particularly, there's often an awful lot of daylight between the text and the subtext.
Impacts are always mixed, rarely free of complex second-order effects, and contested even among the people drawing up the laws. Take your summary of the bill withdrawing energy-efficiency rebates for home appliances. Its proponents have actual arguments for why they think that money's ill-spent. And I think they'd argue that the impact on homeowners isn't -37 AI points, but that the impact is no longer having to subsidize others' preferences (or something like that).
Now, THAT might be interesting—allowing me to lay out my own frameworks for how I think about policy issues, and asking the AI to analyze from my preferred point(s) of view…
Still, even that lives in the nuance. I'd be much more interested in it summarizing facts (accurately): dollars, legal requirements, authorities, reporting mandates… that seems like the most an AI might reasonably get from the plain language of the text.
I’d also value good summaries of qualified experts’ analyses (even ones with a dog in the fight). Those, with citations, could be more helpful than the AI’s opinions in helping me triangulate what we’re actually talking about on each of these topics.
Congratulations on an ambitious project!