whartung
yesterday at 9:01 PM
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see, well, the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
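The "event recording" idea can be sketched in miniature. This is a toy edit log, nothing like Google's actual (proprietary) format: each edit is stored as an operation, and replaying the log rebuilds every intermediate state of the document, which is exactly what makes "watching it being typed" possible.

```python
# Toy edit log: each event is (op, position, arg). A real word-processor
# log would also carry timestamps, authors, etc. -- this is just the idea.
events = [
    ("insert", 0, "Hello wrld"),
    ("insert", 7, "o"),       # fix the typo: "Hello world"
    ("delete", 5, 6),         # cut " world" back out...
    ("insert", 5, ", there"), # ...and paste something else in
]

def replay(events):
    """Apply each event in order, yielding the document after every step."""
    doc = ""
    for op, pos, arg in events:
        if op == "insert":
            doc = doc[:pos] + arg + doc[pos:]
        else:  # "delete": remove `arg` characters starting at `pos`
            doc = doc[:pos] + doc[pos + arg:]
        yield doc

for snapshot in replay(events):
    print(snapshot)
```

Printing each snapshot is the crudest possible "playback"; a viewer would just animate the same sequence.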
I only mention this because, given the AIs, I'm sure that even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
nlawalker
yesterday at 9:08 PM
Typing as a service is a whole cottage industry on Etsy.
ssl-3
yesterday at 10:39 PM
That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)
Another way to automate this particular task is that some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student who is skilled in the art of using the bot to have one of these typewriters be the output target.
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
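A minimal sketch of driving such a port: the function just paces characters out to any writable byte stream, so a pyserial `Serial` object opened on the typewriter's port could drop in where the `BytesIO` stand-in sits below (the device name, baud rate, and even whether your typewriter speaks plain ASCII are assumptions you'd have to pin to the actual hardware).

```python
import io
import time

def type_to_port(port, text, cps=10):
    """Send `text` one character at a time, paced at roughly `cps`
    characters per second, like a (fast) typist. `port` is any
    writable byte stream with write()/flush()."""
    for ch in text:
        port.write(ch.encode("ascii", errors="replace"))
        port.flush()
        time.sleep(1.0 / cps)

# No typewriter handy, so a BytesIO stands in for the serial port:
fake_port = io.BytesIO()
type_to_port(fake_port, "Dear Professor,\r\n", cps=1000)
```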
vunderba
yesterday at 9:42 PM
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
eichin
yesterday at 10:15 PM
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
djmips
yesterday at 10:18 PM
In general I love the idea of turning printers into typewriters. I've been thinking about how to do it with an inkjet printer.
tejtm
yesterday at 9:38 PM
arms race....
oh look, there's an LLM trained on keyloggers to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
vunderba
yesterday at 9:45 PM
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
https://en.wikipedia.org/wiki/Loebner_Prize
djmips
yesterday at 10:20 PM
Wow, it feels like the Loebner Prize went away right at the dawn of the LLM. Is that correlated?
vunderba
yesterday at 10:30 PM
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Participants spent more time polishing the natural-language-parsing side and pre-programming elaborate backstories for their chatbots, among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But the chatbot transcripts from previous competitions still make for fascinating reading.
artikae
yesterday at 11:48 PM
Goodhart's Law vs the Turing Test! Can our humans accurately evaluate intelligence, or will they be fooled by fakes? Live this Sunday!
>because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Isn't that really what all these AI companies are doing too? It sure seems like it is.
djmips
yesterday at 11:25 PM
I think it would be great for it to be revived with a different premise.