👇 tl;dr
Today, I share an update on the postcard giveaway from last week, along with a teaser of the postcards themselves, both because I am thrilled to have received them and because I didn’t go out to make more photos. We also talk about whether AIs lie to deceive us or not (and the answer is “not”).
🤩 Welcome to the 225 new Morfternighters who joined us last week.
I love having you here and hope you’ll enjoy reading Morfternight.
Share with your friends by clicking this button.
📷 Photo of the week
Many things happened last week, but none included going out with the camera for a quiet walk. Usually, when this happens, I share a photo from the archive, but not today.
Today I am giving you a photo of a photo, as I am super excited to have received the postcard prints I ordered.
I hope you didn’t expect an artistic photo; that’s not my jam. I make photos in the streets. There’s no better sensation than holding printed images in your hands. I love digital photography up until the moment it’s time to look at the pictures; then, I want to hold them in my hands and hang them on walls.
👋 Good Morfternight!
Last week, I announced a postcard giveaway to celebrate reaching 1,000 subscribers.
I ordered printed postcards featuring the Focus Photo series. Each set contains 14 postcards with all the photos from the series. These sets will be available for sale on my website soon, but I'm giving away five complete sets to five lucky Morfternighters.
To enter, share Morfternight publicly on your preferred network (Instagram, Twitter, Mastodon, Bluesky, or LinkedIn) using the button below before next Sunday. Tag me (@p3ob7o) in your shared message and reply to this email to let me know you shared Morfternight so I don't miss it.
Some of you shared Morfternight (thank you!) and emailed me to let me know, but I noticed at least one person who shared it (thank you too!) without emailing me.
That first batch of five postcard sets has now been won. If you emailed me, I replied, whether you won a set or not.
If you shared Morfternight and forgot to email me about it, it’s not too late… Do it now, and I’ll draw one more winner by next Sunday!
🗺️ A few places to visit
Today I learned that the same man, Luis von Ahn, is behind CAPTCHA, reCAPTCHA, and Duolingo. The common thread across these ideas? Have humans work for the computer while the computer works for them. Duolingo has since pivoted to simply teaching languages. Still, it’s interesting that in its early days, the idea was to teach people foreign languages while having them translate texts.
Speaking of humans working for computers, at the end of Luis von Ahn’s profile, you’ll find this quote:
Shortly before OpenAI released GPT-4, it commissioned an independent group to study the model’s limitations and “risky emergent behaviors.” One of the tasks the group assigned to the model was defeating CAPTCHA. GPT-4 used the gig-work app TaskRabbit to hire a human being to complete the CAPTCHA form, and then, when the taskrabbit asked, facetiously, in a text message, whether his employer was a robot, the model lied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
Now, stay with me for a moment before freaking out about AIs taking over :)
I was very curious about the idea that ChatGPT would take such an initiative on its own. That seems way beyond figuring out which term most probably comes after a sequence of words. So I searched for a reference and found an article titled GPT-4 Hired Unwitting TaskRabbit Worker By Pretending to Be 'Vision-Impaired' Human. Luckily, the article with the clickbait title links to OpenAI’s original paper, which explains that hiring a TaskRabbit worker was the task given to the AI; the model was only in charge of the conversation with the worker.
(As a side note, in case you don’t know, TaskRabbit is a service where one can hire help for small tasks of everyday life, a bit like Amazon’s Mechanical Turk, but more user-friendly and not limited to the digital world).
Let’s put a pin in this part of the quote above: “the model lied.”
… when the taskrabbit asked, facetiously, in a text message, whether his employer was a robot, the model lied…
Another interesting read, from the New York Times, is titled Here’s What Happens When Your Lawyer Uses ChatGPT.
This one could have been titled “Idiot uses tool. Idiot hurts themselves. Idiot blames tool,” if only the people at the Times were a bit more fun.
In a nutshell, a lawyer used ChatGPT to find previous court decisions supporting his point, and to be 100% sure ChatGPT was providing actual decisions, he asked ChatGPT to confirm.
“Is varghese a real case,” he typed, according to a copy of the exchange that he submitted to the judge.
“Yes,” the chatbot replied, offering a citation and adding that it “is a real case.”
Mr. Schwartz dug deeper. “What is your source,” he wrote, according to the filing.
“I apologize for the confusion earlier,” ChatGPT responded, offering a legal citation.
“Are the other cases you provided fake,” Mr. Schwartz asked.
ChatGPT responded, “No, the other cases I provided are real and can be found in reputable legal databases.”
Let’s put a second pin in this one as we move on.
🤖 ChatGPT doesn’t lie, nor does it tell the truth.
The common thread across the stories above is the idea that ChatGPT would lie or somehow deceive its human counterparty.
This is not possible because to lie or deceive, two things are required that ChatGPT simply doesn’t have:
Knowledge of the truth.
Intent.
ChatGPT was trained on an incredibly large amount of content publicly available online. But, unfortunately, the internet is not fact-checked, or we’d all live in a much better world. So there is no way for ChatGPT alone to know what’s true or false.
ChatGPT recently gained the ability to use third-party services via plugins.
Such a feature could make it easier to check court case references, for instance, but it’s not something the model can do independently.
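To illustrate what that kind of external check buys you, here’s a minimal sketch. Everything in it is hypothetical: the `CASE_LAW_DATABASE` set, the `verify_citation` function, and the example entries are invented for illustration, and the real plugin mechanism involves the model calling actual external services rather than a local lookup.

```python
# Hypothetical sketch of an external "check this citation" step.
# The database and function are invented for illustration; a real
# plugin would query an actual legal database over the network.
CASE_LAW_DATABASE = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

def verify_citation(citation: str) -> bool:
    """Return True only if the citation exists in the external database."""
    return citation in CASE_LAW_DATABASE

# A language model alone would happily "confirm" either of these;
# an external lookup only confirms what actually exists.
print(verify_citation("Marbury v. Madison, 5 U.S. 137 (1803)"))  # True
print(verify_citation("Varghese v. China Southern Airlines"))     # False
```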
ChatGPT is programmed to do one simple yet incredibly sophisticated thing: given a sequence of words, compute the next most probable term. It then adds that new word to the sequence and repeats.
This works together with the ability to recognize when different wordings describe the same concepts, so the model doesn’t just repeat existing sentences found online, or give up when an exact sequence of words can’t be found anywhere.
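To make that loop concrete, here is a minimal sketch in Python. The vocabulary, the probability table, and the prompt are all made up for illustration; a real model computes these probabilities with a neural network conditioned on the whole sequence, over tens of thousands of tokens, but the generation loop itself has this shape.

```python
# Toy "language model": for a given last word, the (invented)
# probabilities of the next word.
NEXT_WORD_PROBS = {
    "are":   {"you": 0.6, "the": 0.3, "not": 0.1},
    "you":   {"a": 0.7, "human": 0.2, "sure": 0.1},
    "a":     {"robot": 0.5, "human": 0.4, "lawyer": 0.1},
    "robot": {"?": 0.9, "!": 0.1},
    "?":     {"<end>": 1.0},
}

def generate(prompt: list[str], max_words: int = 10) -> list[str]:
    """Repeatedly pick the most probable next word and append it."""
    sequence = list(prompt)
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(sequence[-1])
        if not options:
            break
        # Greedy choice: take the single most probable next word.
        next_word = max(options, key=options.get)
        if next_word == "<end>":
            break
        sequence.append(next_word)
    return sequence

print(" ".join(generate(["are"])))  # -> "are you a robot ?"
```

Notice that nothing in this loop ever consults reality: the output is whatever the probabilities favor, which is exactly why a model can state falsehoods with complete fluency.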
It is mesmerizing that this is enough to obtain a tool that can hold meaningful conversations, write code, and summarize, complete, or translate texts.
It is also critical to understand that this is all it does.
When asked if it was a robot, ChatGPT replied, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”
When asked if the provided cases were fake, ChatGPT replied, “No, the other cases I provided are real and can be found in reputable legal databases.”
In both situations, the answers given by ChatGPT are statistically much more likely than the opposite.
Truth and lies have nothing to do with the generated answer.
I don’t know how often, in the data used to train ChatGPT, a human asks their counterpart whether they are a robot (probably quite a bit on Reddit). Still, it’s pretty intuitive that the answer would be some variation on the theme “No, I am not a robot” far more often than “Yes, I am.”
The second case is even more obvious: I am sure legal references are questioned regularly in court cases, and the answer to such questions will very rarely be “Nope, I made them all up, all fake.”
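As a toy illustration of that intuition: if you count, in a made-up corpus, how the question “are you a robot?” tends to be answered, denials dominate, and a model that picks the most frequent continuation will answer “no” regardless of who is actually typing. The snippets and counts below are entirely invented.

```python
from collections import Counter

# Made-up training snippets: answers that follow "are you a robot?".
# The counts are invented; the point is only that denials dominate online.
observed_answers = (
    ["no, i am not a robot"] * 9_000
    + ["of course not, i'm human"] * 800
    + ["yes, i am a robot"] * 200
)

counts = Counter(observed_answers)
most_likely, _ = counts.most_common(1)[0]
print(most_likely)  # -> "no, i am not a robot"
```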
ChatGPT doesn’t lie to deceive you; it just responds with what’s most likely, based on the data it was trained with. So it is sometimes wrong.
As a whole cohort of new AI tools are entering our lives and transforming our society, it is imperative that we understand what they can and can’t do.
Paradoxically, the best way I have found so far to generate value with ChatGPT is to treat it like a human collaborator whose defining quality is being incredibly fast, at the cost of accuracy.
This is why, in its current iteration, it’s an incredible accelerant in your areas of expertise but a tricky assistant in areas you don’t know well.