After playing around with Claude this week, I'm worried that LLMs are stripping us of all those idiosyncrasies that make us interesting as people. Are we all being "LinkedInified" by our AI creations?
Yes, I had assumed this would be the case when a quite cool corporate ethics person I know updated his, and it was definitely smoother, but a bit bland. So I then thought that soon we would all be bored of the blandness and the pendulum would swing back.
But then you made me think - is the recruitment AI only looking for conformity, so your great humanly-crafted quirky profile which an actual person might think - hmm she/he looks interesting I'll put her forward, the AI will think - too much superfluous info and not enough of the important (ie bland) info, they do not go through?
Which bring me to my other beef of the week which is all the lovely young people sending hundreds of CVs out which are judged by AI and rejected - why the fk can't the AI just send an automated reply saying 'sorry we got your CV but not this time', so they don't have to wait. Better still, the reasons why the AI has decided you haven't got through and helpfully tell you that you could do x Y and Z next time and that might help? This surely will be very easy and respectful and human and make the company not look like the soulless lazy etc etc that they are?
It's basic programming and I don't know why it doesn't happen. Maybe it does, but certainly not to my nephew who sent hundreds out and didn't get one single reply automated or not. Until he got a great job through meeting someone randomly, and is now doing stormingly well in his field.
Whenever I use AI platforms to help with my LinkedIn profile and résumé, without exception, everyone literally spews out corporate buzzwords and reduces my personality to conventional, predictable and a « Safe pair of hands » - especially for Board of Directors and Advisory Board purposes. How does one tick all these ATS boxes that headhunters demand AND retain colour, creativity and diversity - or is that the very reason that recruitment is so fundamentally broken… ironically, broken by LinkedIn, the recruitment platform! Linkedinifaction = Enshittification - good call!
This is very much a part of this, yes. But at some point an anti prompt-injection strategy comes down to pattern matching against expectations - and convention - which gets us back to alignment with norms, however they are defined.
In this case an LLM will not know that the text is not visible to humans unless it is an incredibly suspicious AI. Instead, it pattern matches what it reads against what would be considered normal, and filters out stuff that diverges
It's easy to probe by asking them (unless they are also being trained to dissemble!) — all the indications I have had are that, when looking at web pages, they strip away extraneous material like CSS, meta data, and a while lot more, and then just parse the text content. Which means most will probably not have much knowledge of text color, font size, or disparities between box size and text content — all of which can be used to hide text from humans
Yes, I had assumed this would be the case when a quite cool corporate ethics person I know updated his, and it was definitely smoother, but a bit bland. So I then thought that soon we would all be bored of the blandness and the pendulum would swing back.
But then you made me think - is the recruitment AI only looking for conformity, so your great humanly-crafted quirky profile which an actual person might think - hmm she/he looks interesting I'll put her forward, the AI will think - too much superfluous info and not enough of the important (ie bland) info, they do not go through?
Which bring me to my other beef of the week which is all the lovely young people sending hundreds of CVs out which are judged by AI and rejected - why the fk can't the AI just send an automated reply saying 'sorry we got your CV but not this time', so they don't have to wait. Better still, the reasons why the AI has decided you haven't got through and helpfully tell you that you could do x Y and Z next time and that might help? This surely will be very easy and respectful and human and make the company not look like the soulless lazy etc etc that they are?
It's basic programming and I don't know why it doesn't happen. Maybe it does, but certainly not to my nephew who sent hundreds out and didn't get one single reply automated or not. Until he got a great job through meeting someone randomly, and is now doing stormingly well in his field.
Whenever I use AI platforms to help with my LinkedIn profile and résumé, without exception, everyone literally spews out corporate buzzwords and reduces my personality to conventional, predictable and a « Safe pair of hands » - especially for Board of Directors and Advisory Board purposes. How does one tick all these ATS boxes that headhunters demand AND retain colour, creativity and diversity - or is that the very reason that recruitment is so fundamentally broken… ironically, broken by LinkedIn, the recruitment platform! Linkedinifaction = Enshittification - good call!
Aren't you just running into the anti-prompt-injection measures that the various companies have implemented? At least Claude explained them to you.
This is very much a part of this, yes. But at some point an anti prompt-injection strategy comes down to pattern matching against expectations - and convention - which gets us back to alignment with norms, however they are defined.
In this case an LLM will not know that the text is not visible to humans unless it is an incredibly suspicious AI. Instead, it pattern matches what it reads against what would be considered normal, and filters out stuff that diverges
At ArXiv, we now check for invisible text. I would be astonished if the commercial LLMs were not also doing this
It's easy to probe by asking them (unless they are also being trained to dissemble!) — all the indications I have had are that, when looking at web pages, they strip away extraneous material like CSS, meta data, and a while lot more, and then just parse the text content. Which means most will probably not have much knowledge of text color, font size, or disparities between box size and text content — all of which can be used to hide text from humans
But there may be more going on here ...