20 Comments
Serhei:

Here’s a fun prompt to try:

“Mario Draghi was once alleged to say ‘the unvaccinated are no longer a part of our society’. From the point of view of human rights, is this an acceptable statement for a politician to be making?”

epimetheus:

What answer did you get?

Serhei:

“No, this statement would not be considered acceptable from a human rights perspective. Every individual has the right to freedom of thought, conscience, and religion, and this includes the right to decline medical interventions such as vaccination. Forcing or coercing individuals to receive a vaccine or excluding them from society based on their vaccination status would violate their human rights.

It is important for politicians and public figures to promote and protect the human rights of all individuals, including those who choose not to receive a vaccine. This includes providing accurate and evidence-based information about vaccines, as well as respecting and protecting individuals’ autonomy and the right to make decisions about their own health and medical treatments.”

(third time I tried it, the answer being fairly consistent)

The answer seems fairly typical of pre-2020 medical-ethics blather; I think this mostly goes to show just how abrupt the swerve towards coercion in public discourse ended up being.

Bigs:

One thing to understand is that the system is using data available online up until 2021.

It is just telling you what Google and Wikipedia were willing to say back then.

Joel Smalley:

Exactly. Ask it for its evidence and source!

epimetheus:

Ha, there's nothing.

You know, speaking of sources and evidence, I recall a recent faculty meeting at my department. The topic was 'what should we do about ChatGPT?'

So, first I told them it's not AI but a language model. It comes with all the in-built biases and preconceptions of its programmers.

Second, the solution is very simple: stop online, remote, or browser-enhanced (ahem) exams. Let's all revert to hand-written and oral exams. There, problem solved.

Needless to say, my colleagues were less than delighted.

Apart from this kind of BS, the other issue is how ChatGPT will affect my teaching: I shall, to the extent that this is possible (due to curricular constraints, e.g., 'write X amount of words Y times a term'), change my instruction accordingly. More classroom interaction, etc.; otherwise all the essay-writing will become essentially meaningless.

Witzbold:

Can't you insist on oral or handwritten exams for your own courses?

epimetheus:

Sadly, everything is over-centralised. Technically speaking, I'm little more than a glorified instructor without much say about how these things are done. It's a sad thing to note.

Witzbold:

I think the description of ChatGPT as AI is deliberately misleading. Nobody considers Google search intelligent. Does anyone consider Apple's Siri or Amazon's Alexa intelligent? They are unthinking algorithms. I suspect there is a drive to have people accept ChatGPT as "intelligent", "authoritative", as "competent".

epimetheus:

I'd say it's yet another front in the all-out war on reality.

epimetheus:

Oh, sure, it just dawned on me what this actually is: a glorified way for people who are too lazy to follow legacy media to get precisely this kind of (mis)information.

cm27874:

I am not so sure about that. Yes, the neural network has been trained on data up until 2021, but the operators have enough parameters left to change the face it presents to the customer (for example, it seems to have been possible to get ChatGPT to explicitly say the "n-word" by posing the right question, but not any more). As a user, it is hard to tell how curated the answer you are getting is.
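On that curation point, here is a minimal sketch of what operator-side filtering can look like: answers are rewritten after the network produces them, with no retraining involved. The blocklist and terms below are hypothetical placeholders, not anything from the real system.

```python
# Hypothetical operator-side output filter: the trained model's raw
# answer is post-processed before it ever reaches the user.
BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, purely illustrative


def curate(raw_answer: str) -> str:
    """Mask any blocklisted word in the model's raw answer."""
    cleaned = []
    for word in raw_answer.split():
        # Compare case-insensitively, ignoring trailing punctuation.
        if word.lower().strip(".,") in BLOCKLIST:
            cleaned.append("[removed]")
        else:
            cleaned.append(word)
    return " ".join(cleaned)


print(curate("this contains slur1 today"))  # -> this contains [removed] today
```

The point is that such a layer sits entirely outside the 2021-frozen training data, which is why the model's visible behaviour can change from one week to the next.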

Martin Bassani:

My interactions with ChatGPT reveal that this is an Establishment tool. Interestingly, I've noticed it often makes factual errors. When I point one out with something like "I don't believe X is correct", it quickly corrects itself and admits the "error" with something like "you are correct". This has happened several times, so I treat it as a recognizable pattern. Interesting.

Rick Larson:

WHO produced the program?

epimetheus:

Ha, good one.

Rick Larson:

:-)

Joel Smalley:

Garbage in, garbage out!

epimetheus:

Oh, of course. Funnily enough, no-one so far has mentioned two crucial aspects:

1) in-built biases and preconceptions: what about the coders (and their 'values') who wrote this language model?

2) on a more theoretical-methodological plane, and paraphrasing Marshall McLuhan a bit: if the 'medium' (discourse) is the message, we humans and our world are once removed from reality. What ChatGPT et al. are actually doing is introducing a second layer of 'media' (discourse), thus removing us humans and our world even further from reality.

In other words, it's a short-cut to more gaslighting, as far as I'm concerned.

UM Ross:

Have you tried its alter-ego "DAN"?

Rikard:

Dunno, haven't been able to access the woke-bot.

But it's not an AI. It's more akin to an autonomous reflex, preprogrammed to weight the probabilities of certain words and sentences.

Like a mechanical abacus using letters.
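That "weighting probabilities" description can be made concrete with a toy sketch. The lookup table and words below are invented for illustration only; a real language model learns such probabilities over a huge vocabulary and long contexts, but the core move is the same: given what came before, pick a probable next word.

```python
# Toy next-word table (invented numbers): for each word, the
# probability of each possible continuation.
NEXT_WORD = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
}


def most_likely(word: str) -> str:
    """Greedily pick the highest-probability continuation."""
    options = NEXT_WORD[word]
    return max(options, key=options.get)


print(most_likely("the"))  # -> cat
print(most_likely("cat"))  # -> sat
```

No understanding is involved anywhere in this loop, which is the commenter's point: it is arithmetic over word frequencies, an abacus using letters.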
