Wednesday, December 17, 2025

SearchResearch (12/17/25): Control-F for reality--when it works / when it doesn't work

What do you do with too much tea? 

[a massive amount of tea pours from a large, flowery cup] 
P/C Gemini

Answer: Spill it, obviously.  

I was in the grocery store just before closing, searching for a particular kind of decaf tea that my daughter wanted.  I had the thought that I'd try the "Control-F for reality" idea that I wrote about in my previous post.  

I took a couple of photos of the tea shelf.  Here's what I got: a very full grocery rack of tea.  


I was looking for a particular kind of decaffeinated tea, so I prompted Gemini with:

      [which of these teas are decaffeinated?] 

Response: 



Which is pretty reasonable--it not only tells me which are decaf, but even where they are on the shelf ("located on the far left of the shelf").  Nice.  

It EVEN found caffeine-free options that are not explicitly labeled as such (e.g., "chamomile citrus" or "turmeric ginger").  

I was thinking this was a smash success.  

But you know me--I have to double check these things before I believe them.  Pro tip: You should always check too.  

That got me to thinking--what would the other AIs do with this simple task?  So I tried the same task with ChatGPT, Perplexity, and Claude.  Strangely, the results were highly variable.  

Gemini: found 8 decaf teas 
ChatGPT: found 4 decaf teas 
Perplexity: found 3 decaf teas (but it warns that "...they should be assumed caffeinated unless their individual packages elsewhere state “decaf” or “caffeine free.”") 
Claude: found 1 decaf, 4 caffeine-free 

So... which is it?  Why so much variation?  

To compare the different AIs, I thought I'd put all of the results into a spreadsheet so I could see them all side-by-side.  

My next prompt was: 

 [please make a list of each of the teas on this shelf. Each line of the list should show the name of the company, the name of the tea, and if it's caffeinated or not. Please create a Google sheet with this data.]  

It gave me good data, but would NOT put it into a Sheet.  (How odd is that?  But see below for more info on this...)  But it DID give me a CSV block of text with what I was looking for--easy to copy/paste into a new sheet.  
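
If you'd rather script that copy/paste step than do it by hand, here's a minimal sketch in Python--the rows and column names are hypothetical stand-ins for whatever CSV your AI actually emits:

```python
import io
import pandas as pd

# Hypothetical CSV block copied out of the chat window
csv_block = """Company,Tea Name,Caffeine Status
Mighty Leaf,Organic Breakfast,Caffeinated
Tazo,Chamomile Citrus,Caffeine-Free
Bigelow,Cozy Chamomile,Caffeine-Free
"""

df = pd.read_csv(io.StringIO(csv_block))

# Sanity-check the counts before trusting the list
print(df["Caffeine Status"].value_counts())
print(f"{len(df)} teas total")
```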


(Note the "copy this" icon on the far right of that screencap--looks like a double rectangle: clicking on that copies the CSV text so you really can go to the spreadsheet and paste it.)  Here's that spreadsheet (check out the tabs):  


Notice that Column C ("Caffeine Status") lists some teas as Caffeine-Free and others as Decaffeinated. I finally noticed the distinction: "Decaffeinated" teas have had the caffeine removed, while "Caffeine-Free" teas never had caffeine in the first place--they're herbal teas with no caffeine at all.  

BUT... In this spreadsheet, Gemini claims there are 38 different teas, 14 of which are decaf.  Interesting! Seconds before, when I asked directly ("which of these teas are decaffeinated?") it only gave me 4 decaf and 4 caffeine-free.  

That's pretty funky.  

If you ask the question one way you get an answer of 8; when you ask for the details, you get 14 decafs--10 more than the 4 it originally labeled "decaffeinated."  What's going on here?  How did it find those additional decaf teas?  And, strangely, when you ask for the teas as a CSV, listed by company and caffeine status, then drop that into a spreadsheet, you get very different answers.  

So now I thought I'd get the other AIs' answers in a spreadsheet as well.

Here's ChatGPT's sheet: 


Notice any differences between Gemini's and ChatGPT's sheets? 

First off, Gemini lists "Mighty Leaf Organic Breakfast" as one of the teas, but ChatGPT misses it.  (There are more diffs.) 
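
If you want to find the rest of the diffs mechanically rather than by eye, you could download each AI's sheet as a CSV and compare the tea names. A quick sketch--the filenames and column header are hypothetical, so match them to your own exports:

```python
import pandas as pd

# Hypothetical exports: File > Download > CSV from each spreadsheet
gemini = pd.read_csv("gemini_teas.csv")
chatgpt = pd.read_csv("chatgpt_teas.csv")

# Normalize names a bit so trivial spelling differences don't count as diffs
g = set(gemini["Tea Name"].str.strip().str.lower())
c = set(chatgpt["Tea Name"].str.strip().str.lower())

print("Only Gemini found: ", sorted(g - c))
print("Only ChatGPT found:", sorted(c - g))
print("Found by both:     ", len(g & c))
```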

Comparing the differences in spreadsheets created by each: 


That's a very weird result.  If you ASK an AI how many teas there are, you get one answer BUT if you ask it to create a spreadsheet, it gives you a much larger number!  

EVEN STRANGER... after not working on this blog post for several days, I went back to Gemini, re-uploaded the image and re-asked all of the questions above--including "create a spreadsheet."  Voila!  Today it knows how to create a Google Sheet.  Even better (and weirder), this time it found 58 teas, 18 of which are decaf.  That's 20 more teas than last time!  

Key insight #1: Your answer varies from AI to AI, AND it varies depending on whether you ask directly ("which teas are decaf?") or ask for a CSV list to drop into a spreadsheet.  Again, all the results are VERY different.

And--no surprise--there are some errors here.  None of the AIs found the Blue Lotus Chai or the Builder's Tea (second shelf from the bottom).   If you were doing Control-F for "Blue Lotus Chai," you'd be out of luck.  

ALL of this was an odd result, so I went back and took a higher-resolution image of the tea shelves and found that it COULD see the Blue Lotus Chai and Builder's "People's Tea."  
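
If you want to check whether a photo is likely to have enough pixels for label-reading before you upload it, a quick look at the dimensions with Pillow will do--the filename here is hypothetical:

```python
from PIL import Image  # pip install Pillow

width, height = Image.open("tea_shelf.jpg").size
print(width, height)  # low pixel counts usually mean unreadable spine/label text
```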

Key insight #2: You need fairly high-resolution images to get decent results.  EVEN SO... you'll get variable results depending on how you ask the question ("just ask" versus "give me a spreadsheet").  In my tests, asking for a spreadsheet consistently gave a better answer.  

Key insight #3: Most of the AIs won't tell you that they're having problems scanning the image for labels.  (To their credit, Perplexity and Grok told me that they "Cannot reliably read and extract every tea name and company from the photo.")  But, significantly, neither Gemini nor ChatGPT ever said anything about not having enough resolution to be confident in their results.  

And that tells you something: it's clear that all of these AIs apply some internal confidence measure to their image analysis, and they won't show results when the confidence is too low.  That makes sense.  But saying nothing about the uncertainty is just malpractice.  At the very least, the AI should say something like "I'm not really sure about these results..."  

What to do?  I asked both Gemini and ChatGPT a simple validation question:  

[were you able to capture all of the teas in the image?] 

In both cases the AI was able to look a little bit more carefully.  Gemini found the Blue Lotus chai (and a few others that it had missed the first time around)!  ChatGPT told me that "I captured most of the front-facing teas, but a few items weren’t fully captured with readable labels, and a couple of my earlier caffeine calls were overconfident because the tea type (black vs rooibos/herbal vs decaf) isn’t legible on some tins."  And then it gave me a newly updated spreadsheet which listed 55 different teas.  

Note that the interface looked like this... 


You might think that this means it found only 14... but you would be wrong.  IF you click on the download button (downward pointing arrow in a circle, upper right), you'll find that the full spreadsheet has 55 teas listed, along with the updated assessments about whether they're caffeinated or not.  

Bottom line: You really have to work at this to get a good analysis of an image.  The different AIs will give you very different answers... and will even work harder, if you ask them to. 

SearchResearch Lessons 


1. AIs are variable in quality and detail.  As you see, different AIs give very different results.  Your accuracy and quality will vary depending on which one you use. 

2. Beware of asking an AI to make an inference for you.  The difference between "decaffeinated" and "caffeine-free" might be subtle, and the AI doesn't know that the distinction is opaque to you. 

3. Ask for all of the details in a spreadsheet if you want to compare or validate the results.  Notice that just asking about the teas gave us pretty poor results, while asking for all of the data in a spreadsheet format gave MUCH better results.  When you're looking for details on a task like this, request all of the data. 

4. Different AIs give you different answers, and will give you different answers if you ask in slightly different ways (including "think harder"). Be cautious.  


Keep searching!  





Wednesday, November 26, 2025

SearchResearch Method: Control-F for reality (finding books on your shelves)

 Ever lose a book on your shelves? 


I spent an hour tearing my personal library apart in a desperate search.  Ever happen to you? You know, the search for the one book you know you have, but can't find? 

Happens to me all the time.  I have several bookshelves, totaling around 200 linear feet of books (61 meters).  And that's not even counting the bookshelves in secondary storage in the garage.  

So, like many of you, I find myself searching my personal stacks for a book by hand, one at a time.    

This seems like a classic SearchResearch problem.  There must be a better way.  

Yes, I could create a personal card catalog or personal book database.  And, admittedly, making such a thing used to be a huge hassle.  (It's much easier these days with personal catalog apps like Libib or LibraryThing.)  

New Solution--Use AI to Search Your Shelves: I was playing around with Gemini's text recognizer the other day when it occurred to me that maybe we could use Gemini to scan our bookshelves.  

Here are two images of MY book collections.  (Don't judge me for neatness, organization or content!)  

Yes, I know there's a box labeled "Books to Read."  Don't judge. 


Here's what I did to Control-F for a book on my shelves: I just uploaded the images and asked:  


That was pretty damn impressive!  

If you look at the images, the text on the spine of "Field Guide..." is partly hidden by the book above.  

Not only did the AI find the book, but Gemini also gave me directions to the book ("..top shelf, far right-hand side, third book down, with a blue spine, directly underneath Birds of the San Francisco Bay Region").  

That gave me the notion to ask more about this collection of books. 


This list is complete (I checked!) and, as we saw, Gemini gives general directions to the locations of the piles and shelves.  Here you see "Image 1 (Wooden shelves)", but later on Gemini tells me where the other books are with directions like "Top Shelf (center horizontal stack)" and "Stacks and boxes (left stack)."  

Those are about as good directions as you can expect.  

What's more, you can ask questions about your collection: 




Or you can ask about your book organization: 



And you can ask for some personal reflection... what does Gemini think about you as a reader? 



FWIW, it seems Gemini gave me a pretty accurate analysis of my reading habits as seen by these shelves.  It noted that I am: 

"...obsessed with how humans organize and find information" along with "...you appear to be an academic or a specialized researcher (possibly in Computer Science or Cognitive Science) who is deeply concerned with the "User Experience" of reality. You want to know how to navigate the flood of information in the digital age without losing touch with the biological reality of the physical world."

Gemini kindly concludes this bit of analysis with a suggestion:  

"Recommendation:  Entangled Life by Merlin Sheldrake? It treats fungi as a biological information-processing network, which fits perfectly in the center of your Venn diagram."  

Hope you find this a useful method to let you manage your physical inventory of books. 

Question for you:  I have to admit I've only done this with 7 different photos of my shelves and stacks. I'd be curious to hear what happens if you try it with 20 or more images.  Will Gemini track them all?  Will it be as useful?  Let us know in the comments! 

Also let us know if you ask any interestingly different (and revealing) questions about your bookshelf! 



SearchResearch Lessons


1. Taking pictures of your bookshelves can be incredibly useful for locating otherwise lost items.  I have to admit that I did this initially out of desperation.  I'd lost a book I knew I had (the aforementioned Roger Tory Peterson "Field Guide")... and was able to find it.  

2. Keep your book spines visible.  I later noticed that a couple of my books don't appear in this list because they were occluded by pieces of paper drooping down from above.  Finally, a real rationale for keeping your spines visible!  


Keep searching. 

Monday, November 24, 2025

Answer: How good is AI at recognizing images? What should you know?

Search by image is powerful... 

Remarkable desserts. What are they? 

.... but you need to know what it can do (reliably) and what it can't do (unreliably).  


Let's talk about what AI-powered image search is capable of doing.  Here are the questions from last week:    

1. The image above (the dessert display) is from a cafe.  Can you figure out what KIND of desserts these are?  Yes, I know you can read the labels, but these are from a particular region of the world.  What kind of cafe is it?  (Image link to full image.)

The obvious thing is to do a Search-By-Image (which we last discussed in January, when searching for the El Jebel Shrine, aka the Sherman Event Center in Denver).  That was just 11 months ago, but the world has shifted since then.  

We can download the image (with the link above) and do an image search (no longer called "reverse image search" since the function no longer does "reverse" image search, but tries to do an analysis of the image).  You'll get this: 


This is nice, but it's NOT a "reverse image search" in the way we used to think of it.  

To get that function, I'd use Bing image search, which gives you a result like this: 


In this case, there's no exact match for the image, but there are a lot of similar Middle Eastern restaurants and cafes full of yummy pastries.

On the other hand, the Google answer is interesting.  There's a good description of the contents of the pastry case, but in the right-hand side panel you'll see a suggested "possibly relevant" link to Sana'a Cafe in Oakland.  

It's a bit of a spoiler, but this IS an image of the pastry case at Sana'a Cafe in Oakland, California!  The big question for us: How does it know?  This is definitely NOT the closest Middle Eastern cafe to my house (which is where I'm writing from).  

I checked to see if it was using the GPS location stored in the photo. 

(Remember that you can pull the lat/long of the image?  Previous SRS discussion about EXIF and the metadata attached to your images.)  

To check, I edited the image metadata to alter the lat/long and re-ran the query--and got the same answer!  

So what IS going on?  Answer: this image is a close match to an image found on Reddit about the Sana'a Cafe in Oakland!  

Notice that you can get to the "similar images" section by simply scrolling down the page to "Visual matches," where 3 of the top 5 visually similar images are from Sana'a Cafe.  (Note that these results work much like the way Image Search used to--it would show you the nearest matches.)  




That's great--at least we now know how to get the old search behavior to function.  

Back on the first AI-augmented search page, you probably noticed that there's an option to "Show more."  Clicking on this button will give you a more detailed analysis of the image.  It looks like this: 

So... yeah. Not a lot of help here--this is just a repeat of what we saw in the first frame.  But what happens if you click the "Dive deeper in AI Mode" button? 


Oops. Now Image Search is going off the rails.  How does Google know that it's the Levant dessert cafe and bakery?  Completely unclear.  And no amount of asking would get me any useful chain of reasoning.  

Rather than using plain Google Image Search, I thought I'd give Gemini a chance.  One MIGHT hope that the answers would be the same (it's the same company, right?). So I uploaded the image to Gemini and asked it to describe the image.  No surprise, it gave me more-or-less the same answer.  

But when I asked Gemini a follow-up question [where is this dessert case located], the Google train went off the rails and into the river, where it crashed and burned.  

The response is equally incorrect, this time with a florid explanation that's completely wrong: 



As much as I admire the idea of reading the reflected text of the logo (which reminds me of what we did in the 2012 SRS Challenge "Where are you?"), in this case it's totally wrong!  I can't see "Kunafas" anywhere in the image (can you?).  

So I asked Gemini where the "Kunafas" came from.  Here's what I got when I asked: 



Seems good, right?  But let's look at the highlighted region carefully, shall we?  Here, I put the original image and the Gemini-created image side-by-side.  


As you can see, the "reflected letters" are clearly--at least to you and me--the letters of the cafe's name, Sana'a.  The letters "F A N U K" are all hallucinated.  

Even more bizarrely, I was curious and re-did the original query on regular Google Image Search, using the same image as before, and asked it to describe the image.  This time, it suggested that the place might be the Sana'a Cafe... but again, without any reasoning about why.  I assume it's using the "related images" feature and extracting the name from the Reddit thread images.  This is bizarre because it's NOT the same answer as earlier!  


Bottom line: You absolutely have to check everything that Image Search tells you.  Don't just accept it as truth--it could be very far from the truth.  

2. Here's a photo I took while on a walk in San Francisco the other day.  What a strange, strange place!  It's clearly supposed to have a statue on top of the pedestal.  What happened here?  Why is it bereft?  (Image link)  


I did the same process as before: a regular Image Search on Google, which gave this as an answer: 



The AI overview is completely wrong.  This is NOT at Lands End park at all... everything in this result is wrong.  

On the other hand, the "Visual matches" section actually gives good results.  This IS "Mount Olympus" (the San Francisco version).  

So, let's try again with the fancy Gemini-powered AI image identification process.  What do we get here? 


The first answer ("...likely the Stairs to Mount Olympus Park in San Francisco..") IS correct, while the "another possibility is the One Thousand Steps Beach Access in Santa Barbara" is quite wrong.  

As before, if you ask Gemini directly (by uploading the picture and asking "where is this image"), you get another kind of wrong answer: 


At least it got the trees right (they are Monterey Cypress), but everything else is seriously wrong.  

First off, there IS NO Hilltop Monument at The Sea Ranch.  (I've been there quite a bit, and I'm 99.9% sure such a place doesn't exist.)  Google might mean the Sea Ranch Chapel, but it's not called the Hilltop Monument, and it's not on a hilltop in any case--it's in the flatlands.  

I thought maybe I'd give ChatGPT a chance, but that didn't work either: 


Again with Lands End?  The only connection is that Lands End also has a lot of Monterey Cypress; there's no other connection here.  And there IS a monument to the USS San Francisco at Lands End, but again, it has nothing to do with this picture.  Hallucinations abound.  

And, once again, the "Visual Matches" section of the SERP gives you a much better result than the AI parts of the result: 




But you, dear Human, can easily pull the GPS lat/long from the EXIF metadata to find this in Google Maps: 



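If you'd rather script that lookup, the gps_latlon sketch from earlier in this post will get you coordinates you can drop straight into a Maps URL--the coordinates below are placeholders, not the photo's real location:

```python
# Placeholder coordinates -- substitute whatever your EXIF reader returns
lat, lon = 37.7648, -122.4439
print(f"https://www.google.com/maps/search/?api=1&query={lat},{lon}")
```
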
And then, a regular Google search [ Mount Olympus Park San Francisco ] will teach you that Mount Olympus was a park in more-or-less the center of San Francisco, with a pedestal atop which stood a dramatic statue, "The Triumph of Light."  Mysteriously, the statue (made of bronze and weighing probably 500 pounds) vanished from the pedestal years later and has never been found.  (See the backstory at FoundSF.org.)

The statue that was there: 

Mount Olympus in SF, with the original statue that mysteriously disappeared sometime after 1955. P/C San Francisco History Center, San Francisco Public Library, via OpenSFHistory.org 

And nobody knows where--or even exactly when--the statue disappeared.  The city took its collective eye off the ball and it just kind of went away one day in the mid-1950s.  


Bottom line: Don't trust the AI analysis.  Do the research yourself. 



3. Here's a great picture of a cloud that Regular Reader Ramon sent in for identification.  What's going on here?  (Image link)  

P/C SRS Regular Reader Ramon

A regular Google Image search tells us that this is a fallstreak hole, also known as a "hole punch cloud."  




As you'd expect, I checked this out by doing other searches (e.g., for [fallstreak cloud]) and looking at the collection of remarkable and beautiful photos.  In this search, the AI result and "Visual matches" images are all pretty good.  

And now we know that a fallstreak cloud is caused by supercooled water in the clouds suddenly evaporating or freezing, possibly triggered by aircraft passing through the cloud and setting off a chain reaction. Such clouds aren't unique to any one geographic area and have been seen in many places.  

Bottom line:  This worked quite well--not a huge surprise as the image is very visually distinct and there are literally thousands of posts with images describing what this is.  


4. This little bridge is in a lovely town somewhere in the world.  Can you figure out where it is, and when it was built?  (Image link)



This is a case when image search works quite well.  Luckily, this is a famous bridge with LOTS of photos taken over the years.  

Yes, it's the Pinard Bridge, located in Semur-en-Auxois. It, like much of the town, dates to the 12th century.  But it's really hard to determine when it was first built; it will probably take some time searching in old French histories to figure out the original date. And since it's in a river valley that historically floods, it's been rebuilt many times.  


Regular Reader Arthur Weiss points out that the city website of Semur-en-Auxois tells us that "The Pinard bridge, or Pignard on the Belleforest view, provided access to the Pertuisot mountain pasture. It was destroyed or extensively damaged on several occasions by floods, including those of 1613, 1720, 1765 and 1856."

(I also found this website with the search [ville-semur-en-auxois pont pinard] -- this is one of those cases when searching in the local language really helps.)  

So while the date of first construction was probably in the 12th or 13th century, it's been rebuilt so many times that little of the original bridge is now left in place.  It is, as we would say today, an example of the Ship of Theseus (if Theseus' ship is replaced plank by plank over a long time until every piece of wood has been replaced by newer wood, is it the same ship?).  


SearchResearch Lessons

1. Be very, very cautious about AI generated results.  As we saw, the results can be very, very wrong. My advice: Try the AI methods, but double-check everything.  You cannot trust that the answer is correct.  

2. Note that the "Visual Matches" section of Image Search (often below the fold) has the "old style" most-similar images from the web.  That section also often has great clues to the actual thing you seek.  Be sure to check that part of the search results as well.  


Keep searching! 











Thursday, November 13, 2025

SearchResearch (11/13/25): How good is AI at recognizing images? What should you know?

 Recognizing images is an impressive AI feat.  But... 

Remarkable desserts. What are they? 

.... it's true that the state of the art of image recognition has changed over the past several years.  It gets better, it gets worse, the functionality changes, some things are removed, others are added.  

But it's still an amazing thing... IF you know what works and what doesn't now.  I'm afraid that means you have to stay up on what's going on in the world of image search.  So let's dive into it... 

Here are a few images that I'd like for you to identify--the key question for each is what's going on in this image?  What is it?  (And if you can, where is it?)  

For each image I've given you a link to the FULL image (no sneaky reduction in resolution or removal of metadata, as our blogging tools tend to do).  I recommend you use that image for your search.  

1. The image above (the dessert display) is from a cafe.  Can you figure out what KIND of desserts these are?  Yes, I know you can read the labels, but these are from a particular region of the world.  What kind of cafe is it?  (Image link to full image.)


2. Here's a photo I took while on a walk in San Francisco the other day.  What a strange, strange place!  It's clearly supposed to have a statue on top of the pedestal.  What happened here?  Why is it bereft?  (Image link)  



3. Here's a great picture of a cloud that Regular Reader Ramon sent in for identification.  What's going on here?  (Image link)  

P/C SRS Regular Reader Ramon


4. This little bridge is in a lovely town somewhere in the world.  Can you figure out where it is, and when it was built?  (Image link)



The point of this week's Challenge is to give you a bit of familiarity with the different image recognition tools.  They're sometimes called "reverse image search" tools, but as you'll find out, they have very, very different capabilities.  

When you write in to let us know what you found, be sure to (a) tell us what tools you tried, (b) if they worked well, and (c) whether or not you find the answer believable. 

Next week I'll write up my findings and summarize what everyone else found... along with a description of the tradeoffs involved in the different tools.  

Keep searching! 


Friday, November 7, 2025

SearchResearch (11/5/25): Pro tips on using AI for deep research

A few friends... 

Gemini's conception of [hyperrealistic image of scholar doing deep research].
Not sure it's hyperrealistic, but definitely interesting. 

... have recently written posts of their own about using AI for deep research. Since they've got some great nuggets, I'm going to leverage their writings and give a quick summary of the top methods for doing high quality deep research with LLMs. 

In this post, I'm drawing extensively on a post written by Maryam Maleki (UX Researcher at Microsoft) for people doing product research: How to Do High-Quality AI Deep Research for Product Development.  Here, I've generalized it a bit and given it my own flavor.

Here are the top few tips about getting Deep Research mode to work well for you:  


Be clear about what you want.  

Keep in mind: You want credible content. Prompt it that way. 

In order for this to work, you need to tell the AI what kinds of sources you think are reliable and credible.  If you can, give it a list of several resources as guidance. 

In the patterns below, items in { } and italics are variables.  Pop in the values you need to get the effect you want.  

Pattern:   

[ Do deep research on {TOPIC}. Generate {n} credible sources with links that can be used for this research.

Prioritize: {BOOKS / ACADEMIC PAPERS / CASE STUDIES}

For each source, provide: the Title, the URL, a short snippet about why it's relevant, and the source type. ] 

Example: 

[ Do deep research on Rocky Mountain locusts. Generate 10 credible sources with links that can be used for this research.

Prioritize: academic papers

For each source, provide: the Title, the URL, a short snippet about why it's relevant ] 
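
If you find yourself reusing these patterns often, it's worth templating them. A minimal Python sketch--the function name is mine, not part of any AI's API:

```python
def deep_research_prompt(topic, n, priority):
    """Instantiate the deep-research pattern above with concrete values."""
    return (
        f"Do deep research on {topic}. "
        f"Generate {n} credible sources with links that can be used for this research.\n"
        f"Prioritize: {priority}\n"
        "For each source, provide: the Title, the URL, a short snippet about "
        "why it's relevant, and the source type."
    )

print(deep_research_prompt("Rocky Mountain locusts", 10, "academic papers"))
```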

Doing this in Gemini will create a 4,000-word essay about Rocky Mountain Locusts.  It will ALSO give you section VII, which has Ten Credible Sources for Rocky Mountain Locust Research.  It also creates a reference list for the entire document, with section VII containing the best of the entire list.  

By contrast, doing this in ChatGPT 5/Thinking or Claude Sonnet 4.5 gives you exactly what you asked for--they give you the list-of-ten.   

Review the AI-generated results for quality 

I note in passing that the Gemini-created document is pretty good, but the list of 10 papers was a little mixed in quality.  (One paper was very tangential, one paper was just a link to Wikipedia, and one paper wasn't accessible at all.)  I clicked through all of the links to verify that they were real and on-target.  

If the results aren't what you want, feel free to iterate until you get the result quality you need.  



Ask for contrary points of view (don't just confirm!) 

Research isn’t just about collecting references — it’s also about understanding the space, both in terms of what you know and what counterarguments you might want to consider. 

In reading through the Rocky Mountain Locust collection, you'll notice that one of the main hypotheses about the disappearance of the locust is that the rangeland where it lived and bred was increasingly plowed up for farmland.  

You should ask about other opinions:  

Pattern:   

[ Give me different explanations for {TOPIC}.  Are there other points of view that have been considered in the literature?  

For each source, provide: the Title, the URL, a short snippet about why it's relevant. ] 

Example:

[ Give me different explanations for why the Rocky Mountain Locusts disappeared.  Are there other points of view that have been considered in the literature?  For each source, provide: the Title, the URL, a short snippet about why it's relevant. ] 


Interestingly, Gemini merely did an okay job of this step--ChatGPT was reasonably good, but Claude did a spectacular job of highlighting 11 different hypotheses about what happened.  (To see Claude's output, here's the document.)  This also suggests that you should get multiple AI opinions to improve the quality of your research!   


Double Check Everything

We still live in a hallucinatory world. As great as AI-generated content is, I still double-check everything.  In her post, Maryam has a great set of questions (below).  This is what's on my mind as I read through EVERY claim and EVERY linked document.  You should do the same.  

  • Source Quality — Is it recent, reputable, and methodologically sound?
  • Fact Containment — Only use approved notes/sources. 
  • Triangulation — Every claim needs at least two independent sources.
  • Original-Source Tracing — Don’t rely on LinkedIn slides, Twitter posts, or a quote in a blog. Find the earliest credible publication.
  • Hallucination Sweep — Audit the final draft. Remove or qualify any claim not directly supported.



Search Research Summary

When using AI for deep research, keep in mind these four heuristics: 

1. Be clear about what you want.  Not just in content, but in form and quality.  Be explicit--give examples--ask for everything you want. 

2. Review the results for quality. Do this step immediately, and change the prompt if need be to get what you really seek.  Iterate!  

3. Ask for contrary points of view.  Don't give in to confirmation bias--proactively ask about other perspectives on the questions you're researching. 

4. Double check everything.  No surprise here, but be sure to leave enough time to do this.  Don't just copy/paste what you've found. 


 
Thanks again to Maryam for her excellent post. 

Keep searching!