Friday, April 3, 2026

SearchResearch (4/3/26): 4 key ideas to keep in mind when doing research with an AI

Finding the right grain size is important...  

Mechanical sieves separate grains by size.


... especially when you're trying to figure out how to write a prompt to answer your research question. Here are four key aspects of crafting an effective prompt.  


1. You have to scope the research question:  

When partnering with an AI for complex research tasks, the success of your inquiry often hinges on how you scope the problem. We hear a lot about the mechanics of "prompt engineering," but a far more vital skill is "abstraction engineering"—calibrating the exact altitude at which to fly your research question (RQ). Working effectively with a generative model requires finding the "Goldilocks zone" of detail: not too broad, not too narrow, but perfectly contextualized.

If you frame your task at too high a level of abstraction, the AI will hand you back a beautifully structured plate of platitudes. You’ll get the generic encyclopedia summary when you actually need a nuanced, critical analysis. Conversely, if you zoom in too far—dictating rigid micro-steps or demanding highly specific, obscure data points right out of the gate—you back the AI into a corner. When treated like a traditional relational database or forced to retrieve hyper-specific, unindexed numbers, the model is highly prone to hallucinating or failing outright.

The sweet spot lies in defining the intent and the boundaries of your research without over-constraining the AI's ability to synthesize. You want to give it enough conceptual context to act as an intelligent thought partner, while setting clear parameters to keep it anchored to verifiable reality.

Let’s look at a concrete example. Imagine you’re investigating the historical impact of extreme weather on California's coastline.

  • Too Abstract:  A prompt like: [Tell me about coastal erosion in California] is a bit too open-ended. The AI will generate a high-level, generic overview summarizing basic geological concepts and mentioning climate change. It’s structurally sound, maybe even more-or-less correct, but practically useless for serious research.

  • Too Granular: Consider this prompt by contrast: [What was the exact volume of sand, in cubic yards, lost from the southern end of Half Moon Bay, California between November 12 and November 15, 1983?] Here, you’re asking a generative text model to act as a raw data repository and data analyst. It will likely confidently invent a plausible-sounding number, immediately leading your research astray. This is a great way to generate junk quickly.

  • The Right Level of Abstraction: A much better prompt: [As a coastal geologist, I am researching the impact of the 1982-1983 El Niño storms on coastal erosion in Northern California. Can you synthesize the major geological impacts on the coastline during that specific winter, and then suggest which state agencies, archives, or specific scientific databases I should query to find the raw historical wave-height and sand-loss data for Half Moon Bay?]  

This final approach is scoped perfectly. It leverages the AI for what it does best—synthesizing complex historical events and mapping the conceptual landscape—while strategically recognizing its limitations. By asking the AI to point you toward the right primary sources rather than demanding it be the primary source, you are utilizing it as an expert research librarian. This accelerates your workflow without compromising the integrity of your methodology.


2. Communicate your intent clearly enough for reliable hand-off to an AI.  


We often treat AI like a mind reader. It is not. It’s more like a wildly enthusiastic, highly literal intern who just drank six espressos and wants to please you immediately. The critical moment in any AI-assisted research task isn't the underlying algorithm; it's the point where you transfer your beautifully complex, nuanced research goal from your brain into a text box. If you don't communicate your intent clearly, the AI will happily sprint off in the wrong direction and return milliseconds later with a pile of beautifully formatted, profoundly unhelpful text.

To successfully hand off a task, you have to explain the why alongside the what and maybe add a dash of context about who you are and what you expect. 

The AI lacks the implicit context of your day-to-day life. It doesn't know you’ve spent three weeks agonizing over a methodology, nor does it know if you are writing a rigorous literature review or simply trying to settle a bar bet.

Here’s a remarkably common failure mode. Suppose you’re researching the history of urban sanitation (a riveting topic, I know) and you have a stack of primary source documents about 19th-century London. Here’s how to approach it:  

  • Too Abstract: If your hand-off prompt is simply, [Summarize these papers], the AI will cheerfully oblige. You’ll get a perfectly bland, high-school-level essay about cholera, bad smells, and the River Thames. It’s historically accurate, but completely useless for actual research.

  • Too Granular:  You might overcorrect and try: [Extract every mention of 'sewer pipe diameter' from these texts.] Now you have a sterile list of numbers completely divorced from their historical context. Also useless.

A reliable hand-off requires stating your overarching intent so the AI knows exactly what kind of intellectual heavy lifting it needs to do.

  • The Right Level of Abstraction: A better approach looks like this: [I am writing an academic paper comparing the municipal funding models of 19th-century London and Paris. I am specifically interested in how they paid for public works. Read these papers on London's sanitation system and extract only the sections detailing the financial instruments, bond issuances, or tax levies used to fund the Bazalgette sewer network. Explicitly ignore the medical history of cholera outbreaks.]

Notice the difference? You’ve given the AI a job description, a specific destination, and a "do not enter" sign for the irrelevant stuff. By explicitly defining your intent—what you are doing and why you are doing it—you constrain the model's infinite possibilities into a highly targeted research instrument. Treat the AI like a brilliant but amnesiac colleague who just walked into the middle of your project meeting. Tell them exactly what the end goal is before you put them to work, and you might actually get a useful result.
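The "job description" framing above can even be made mechanical. Here's a minimal sketch, in Python, of assembling a hand-off prompt from explicit intent fields. The field names and helper function are my own invention for illustration, not a standard prompt schema:

```python
# Build a hand-off prompt from explicit intent fields: who you are, what you
# want, what to extract, and what to ignore. Purely illustrative scaffolding.

def build_handoff_prompt(role: str, goal: str, extract: str, ignore: str) -> str:
    """Assemble a research hand-off prompt with an explicit 'do not enter' sign."""
    return (
        f"I am {role}. {goal} "
        f"Extract only: {extract}. "
        f"Explicitly ignore: {ignore}."
    )

prompt = build_handoff_prompt(
    role="writing an academic paper comparing 19th-century municipal funding models",
    goal="I am specifically interested in how London paid for public works.",
    extract=("financial instruments, bond issuances, or tax levies "
             "used to fund the Bazalgette sewer network"),
    ignore="the medical history of cholera outbreaks",
)
print(prompt)
```

Forcing yourself to fill in every field, especially the "ignore" field, is the point: a blank field is a sign your intent isn't yet clear enough to hand off.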


3. Evaluate the results you get back from your AI 


If there is one universal law of generative AI, it is this: it will confidently hand you a fabricated answer with the serene, unshakeable certainty of a mediocre undergraduate who just skimmed a Wikipedia summary or turned in AI-generated output without reading it. 

This becomes a massive problem in serious research because, out here in the real world, "ground truth" isn't a magical, pristine spreadsheet handed down by a benevolent universe. Real data is often incomplete, contradictory, or buried under a mountain of historical noise.

So, how do you trust an AI assistant when you don't have the perfect answer key to check its work? 

You have to build in your evaluation strategy, thinking about it before you hit "submit," shifting your focus from verifying a single final answer to stress-testing the model's methodology.

Let’s say you are trying to piece together the economic history of a regional industry—for instance, the apple export market in 1920s Washington state. The historical records are a disaster. Farm manifests were lost in fires, different counties used different metrics (bushels versus crates versus trainloads), and local agricultural boards routinely exaggerated their yields to look good. The ground truth is inherently noisy.

  • Too Abstract: Here is what the "too abstract" version of our 1920s Washington apple research looks like:

[Summarize the state of the Washington apple export economy in the 1920s and tell me how successful it was.]

When faced with a prompt this broad, the AI will happily oblige by synthesizing a beautifully written, highly readable narrative. It will tell you about the booming agricultural sector, the arrival of new rail lines, and the indomitable spirit of the Pacific Northwest farmer. It might even throw in a generic quote about the crispness of a Red Delicious. It will sound incredibly authoritative, like a velvet-voiced narrator on a PBS documentary. 

And it will be completely useless to you as a researcher.

Why? Because at this level of abstraction, the AI actively hides the noisy ground truth from you. Instead of dealing with the messy reality that Chelan County measured their yield in "crates" while Yakima County measured in "freight cars" and half the records burned down in 1926, the model simply smooths all that chaotic data into a neat, frictionless trendline.

And asking for a judgment call (“tell me how successful it was”) is a beginner’s error.  The AI doesn’t have a point of view, but it’ll make one up.  

All of this papers over the contradictions and missing farm manifests with plausible-sounding historical clichés. Because you asked a vague question, you get a generalized synthesis, leaving you with absolutely no way to evaluate the accuracy of its claims. You can't audit the model's work because the AI has abstracted away all the actual evidence. You asked for a rigorous economic history, and it handed you a tourism brochure.

  • Too Granular: If you approach this at the wrong level of detail and ask the AI, [What was the exact total tonnage of apples exported from Washington in 1924?] the model will gladly average out the historical lies, hallucinate a plausible-sounding integer, and present it as absolute fact. Because the underlying data is a mess, you have no way to evaluate whether the AI's number is a brilliant synthesis or a total fabrication. You are trapped in a dead end of misplaced trust.

  • The Right Level of Abstraction: A better approach—scoping the task to account for that noisy reality—looks like this: [I am researching 1920s apple exports in Washington state, but the historical county records are contradictory and use mixed units. Here is a text dump of five different agricultural reports from 1924. Please extract the export claims from each, standardize the units into tons where possible, and explicitly flag any mathematical discrepancies—for example, if a county claims to have exported more apples than they had arable acreage to grow them. Do not attempt to give me one final definitive number; just map the contradictions. Please include all references to source materials.]

Notice the shift. You haven't asked the AI to find the "truth," because the truth is currently unknowable. (This means that prompts like “tell me just the facts” are fundamentally hopeless.) Instead, you've asked it to structure the results with a little fact-checking. With this approach, you can actually evaluate the AI's output reliably: did it catch the logical discrepancy between acreage and yield? Did it convert the units correctly? By adjusting the level of your RQ, you transform the AI from a highly suspect oracle into a tireless research assistant helping you audit a messy reality.

Also, asking for the references is an important step.  (Be sure to check that they’re real!)  
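The kind of audit that the apple-export prompt requests can even be spot-checked in a few lines of code once the AI returns its extraction. Here's a minimal sketch where every conversion factor and yield limit is an illustrative assumption, not historical data:

```python
# Standardize mixed-unit export claims into tons and flag implausible ones.
# All conversion factors and the yield cap are illustrative assumptions.

ASSUMED_TONS_PER_UNIT = {
    "tons": 1.0,
    "bushels": 42 / 2000,      # assumes ~42 lb per bushel of apples
    "crates": 40 / 2000,       # assumes ~40 lb per crate
    "freight_cars": 15.0,      # assumes ~15 tons per loaded freight car
}

ASSUMED_MAX_TONS_PER_ACRE = 10.0  # generous upper bound for a 1920s orchard

def audit_claims(claims):
    """claims: list of (county, amount, unit, arable_acres) tuples.
    Returns (county, tons_or_None, note) for each claim."""
    findings = []
    for county, amount, unit, acres in claims:
        factor = ASSUMED_TONS_PER_UNIT.get(unit)
        if factor is None:
            findings.append((county, None, f"unknown unit: {unit}"))
            continue
        tons = amount * factor
        note = "ok"
        if tons > acres * ASSUMED_MAX_TONS_PER_ACRE:
            note = "FLAG: claim exceeds plausible yield for acreage"
        findings.append((county, round(tons, 1), note))
    return findings

reports_1924 = [
    ("Chelan", 50_000, "crates", 8_000),        # hypothetical figures
    ("Yakima", 900, "freight_cars", 30_000),
    ("Tinytown", 2_000_000, "bushels", 500),    # suspiciously large claim
]

for county, tons, note in audit_claims(reports_1924):
    print(county, tons, note)
```

The point isn't the arithmetic; it's that the prompt above asks the AI to produce output structured enough that a check like this is even possible.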


4. Plan to iterate on your prompt.   

Just as in the old days of Google search (meaning, last year), there is a persistent, romantic myth in the world of generative AI that the "perfect prompt" exists. We want to believe that if we just arrange our words with exactly the right terms and a bit of alchemy, the AI will work its magic and hand down a flawless, publication-ready analysis on the first try.

Let me disabuse you of that notion right now: your first prompt is almost always going to be wrong or just slightly misaligned with the AI. And that’s entirely okay.

Working with AI isn’t a vending machine transaction; it’s a conversation. Plan to iterate. 

It is exceedingly rare that your initial text string will capture the full, nuanced intent of your research goal. You are going to find subtle misinterpretations, bizarre blind spots, and moments where the model took your slightly ambiguous phrasing and ran off in the wrong direction. 

The real skill is treating that first output not as a final answer, but as a diagnostic tool to figure out how to calibrate the exact level of detail your RQ actually requires.

Here’s how this iterative process helps you find the right level of abstraction for your RQ. Suppose you’re researching the engineering failures of the 1858 transatlantic telegraph cable.

  • Too Abstract:  [Why did the 1858 telegraph cable fail?]  The AI gives you a decent, if sleepy, overview about a guy named Wildman Whitehouse applying too much voltage. It’s too abstract and open-ended. But reading it, you realize you actually want to know about the physical degradation of the cable itself. (And it IS a great way to come up with topics for additional deep-dive research.) 

  • Too Granular:  [What was the exact chemical breakdown rate of the gutta-percha insulation on the 1858 cable on August 15th?] Now you’ve zoomed in too far. The AI panics at the hyper-granularity, either hallucinating a fake chemical decomposition story or apologizing that it doesn't have the daily logs. But now you know your boundaries.

  • The Right Level of Abstraction: Here’s a much better, more appropriate level of detail for your prompt:  [For a paper on the history of telegraphy to be submitted to the local newspaper, I am researching the physical degradation of the 1858 transatlantic cable. My previous searches indicated that high voltage destroyed the gutta-percha insulation. Can you synthesize the historical consensus on how the seawater interacted with the compromised insulation to cause the final short circuit? After your summary, please list three historical archives or electrical engineering journals where I could find primary source correspondence about the cable's physical testing.]

By iterating, you’ve discovered the sweet spot. You provided the target audience (local newspaper),  the context (historical consensus on insulation failure), asked for a synthesis appropriate for a language model, and then directed it to help you find the primary sources for the hyper-granular data. You didn't fail on your first prompt; you simply used it to map the territory so you could finally ask the right question.


Keep searching!



Friday, March 27, 2026

Answer: Who designed this stained glass?

 This should have been easy... 


... but it wasn't.  If you've been around the SearchResearch Rancho for a while, your first instinct would have been to just use Google Lens to search for the image.  That's what I did.  

But... as usual, there's more to the story... 

Here were our presenting questions for the week:  

1. Where is this stained glass? 

2. Who designed it? 

Let's tackle both questions at once. 

If you use Google Lens with a right-click (then "Search this image with Google Lens"), you might get a result like this, telling us that it's in the Church of St. Mary, Slough, in England.  

Google Lens search on window; first result is completely wrong. 

Nice. HOWEVER... If you click through to that image of stained glass in the Church of St. Mary's, you see that it's NOT a match. 

If, on the other hand, you click the "AI Mode" button on the image search panel (shown in bold in the image below), you get a different answer.  Here, the result shows that it's the window "Land is Bright" in the Washington National Cathedral, designed by John Piper.   


That's an interesting answer, but wrong.  If you do a regular Image search for ["Land is bright" Washington National Cathedral stained glass] you see this image, which clearly is NOT our target.  Oops.  

"Land is bright" window at the National Cathedral, DC.
P/C Wikimedia

It's a beautiful window, but as I always say, CHECK YOUR ANSWERS!  This one is clearly wrong.  

On the other hand, another thing I always say is that "incorrect answers can sometimes give you a clue..."  

In the very first Google Lens result, the second image points to a window at the Washington DC National Cathedral.  If you click on that result, it takes you to a page about Pentecost in 2022, but with an image of our target window.  This is a big hint: We're getting closer!  Stay on the trail!

Even though this isn't the final result, it does suggest we should check the windows at the National Cathedral.  A quick search for [stained glass windows of the National Cathedral] takes us to the Wikipedia Category for this topic.  A Category page is a collection of all the windows at the cathedral.  Simply paging through the collection takes you rapidly to this page:  "Founding of a New Nation," which tells us that this is the answer we seek. 

Answer:  This is "a stained glass clerestory window above the George Washington Bay in the south nave of the Washington National Cathedral. It was designed by Robert Pinart and fabricated by Dieter Goldkuhle, and dedicated in 1976."

But, since we ALWAYS check (right?), I went to the National Cathedral home site and found a nice video about the windows, "January 25 2022 Docent Spotlight: Sacred Stories in Light & Color."  At 26:11 you'll find this slide that confirms our finding and tells us more about the window: 

From YouTube video "Sacred Stories in Light & Color"

I tried all of the obvious AIs--none of them got it right.  Bing didn't get it, and the clue to the right answer was fairly hidden in the Google results.  This is genuinely a hard search task.  

SearchResearch Lessons 

1. Keep searching.  I had a strong suspicion that someone would have documented this kind of thing.  There are books on this topic, and if the online searches hadn't worked, I would have gotten one of them via interlibrary loan.  In this case, I found a good result AND a high-quality confirmation from the Cathedral's own site.  

2. Verify everything!  Even though we got lots of positive-sounding answers, if you check carefully, you'll see that the confident answer was, in fact, not correct. 

3. Follow those other trails.  In this case, the second result actually led us to the correct answer.  Don't give up on seemingly incorrect results... you might find something useful on the trail to your answer.  



Keep searching.  

Wednesday, March 18, 2026

SearchResearch Challenge (3/18/26): Who designed this stained glass?

 I was out for a walk yesterday... 


... and took this photo.  As you know, I love stained glass, and this was an especially beautiful example. I took the shot and then realized that it might make a great SRS Challenge.  

I know where I was when I took the photo, and I know who designed it... but can YOU figure this out? 

1. Where is this stained glass? 

2. Who designed it? 

I tried a couple of AI tricks that failed.  Can you figure it out? 

Let us know what you did!  (And also tell us about any methods you tried that did NOT work out!)  

Keep searching.  

Wednesday, March 11, 2026

SearchResearch (3/11/26): How to do long term research with an AI partner

The Art of Long-Term AI Triangulation

Surveyor triangulating on a construction site (1920s) P/C USC and California Historical Society

In the previous post, we looked at the reality of modern search, recognizing that the world now is very different from what it was five years ago. 

With the explosion of multimodal inputs and AI-driven queries, we’ve traded the role of quiet librarian for that of navigator in a high-speed, synthetic storm.

We also confronted a dangerous paradox. At the exact moment search is becoming infinitely richer and more complex, users are demanding a "snackable," frictionless experience. We live in a world where it is now much cheaper for an AI to generate a plausible hypothesis than it is for us to wade through rigorous evidence to verify it. 

To combat this, I mentioned the necessity of friction—using a method like Constraint-Based Fact-Checking to set intellectual traps, force the AI out of its lazy defaults, and avoid the "average" of the internet.

But there is a catch.

Constraint-based prompting is a good survival tactic for a single search session. But what happens when your research spans weeks or months? This is the world I live in: my research often isn’t done in one day, but takes weeks to search, accumulate evidence, and understand what I’m trying to do. 

In fields where nuance is everything, an adversarial prompt that sparks brilliant friction on Day 1 can slowly degrade into an intellectual echo chamber by Day 30. If you are using AI to synthesize hundreds of documents over a long-term project, relying on one-off Q&A tricks leaves you highly vulnerable to compounding hallucinations.

Knowing how to search is the primary way we exercise our agency, and for serious researchers, that means evolving past the single prompt. We have to move from one-off trap-setting to a continuous, iterative methodology.

What we need is a way to do Long-Term Triangulation by treating the AI as a partner in the research. 

If you want to ensure that as the machines get smarter, we don't get lazier, you have to design an environment that treats the AI not as an answering machine, but as a sustained intellectual sparring partner. 

Here is a step-by-step breakdown of how a researcher can build and maintain this longitudinal friction over a sustained period of research.

Here are the four steps you can use to support your long-term research projects with AI-augmented search and analysis tools.  Let’s call these the Four Pillars of Long-Term AI Triangulation.


1. Build and use a Persistent Memory 

You cannot have a long-term sparring partner if the AI forgets everything every time you close the tab. The foundation of this method is establishing a persistent context window.

The Action: Instead of starting new chats every time, use long-context workspaces (like Gemini Advanced, NotebookLM, or custom project threads) that hold the entire history of the project.

The Routine: At the end of every research sprint (say, at the end of your research day), create a "State of the Thesis" summary within that workspace. (Save this summary—you’ll need it later.)  

The Prompt: [Synthesize our current working hypothesis based on the last 24 hours of inputs. List the three strongest pieces of evidence we have, and identify the single weakest link in our current logic.]
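If you keep your project notes as local files, this end-of-day routine can be scripted. Here's a minimal sketch using only the Python standard library; the file name and entry format are hypothetical, not a feature of any AI tool:

```python
# Append each day's "State of the Thesis" summary to a JSON log so it can be
# re-fed into the AI workspace later. File name and schema are hypothetical.

import json
from datetime import date
from pathlib import Path

LOG = Path("state_of_thesis.json")

STATE_PROMPT = (
    "Synthesize our current working hypothesis based on the last 24 hours "
    "of inputs. List the three strongest pieces of evidence we have, and "
    "identify the single weakest link in our current logic."
)

def save_state(summary: str, log: Path = LOG) -> None:
    """Append today's AI-generated summary to the persistent log."""
    entries = json.loads(log.read_text()) if log.exists() else []
    entries.append({"date": date.today().isoformat(), "summary": summary})
    log.write_text(json.dumps(entries, indent=2))

def latest_state(log: Path = LOG) -> str:
    """Fetch the most recent summary, ready to paste into tomorrow's prompt."""
    entries = json.loads(log.read_text())
    return entries[-1]["summary"]
```

The discipline matters more than the tooling: the log gives you a dated, diffable record of how your hypothesis drifted, which is exactly what the later reconciliation steps need.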


2. Track the Shifts

When dealing with complex topics, the danger isn't just hallucination; it's the subtle shifting of goalposts. As you feed the AI more data, it will naturally try to smooth out the narrative to keep it "snackable." You need to learn to track the deltas—the differences between last week's consensus and this week's. Things change, and that’s okay, but plan to track that.  Use these changes for better triangulation.

The Action: Create a "Friction Log." Whenever new, messy primary sources are introduced, do not simply ask the AI to summarize them. Ask it to compare the new information to its own previous conclusions.

The Routine: The weekly reconciliation.

The Prompt: [I am uploading three new peer-reviewed papers and my previous “State of the Thesis.”  Do not just summarize the new papers. Compare their findings against the “State of the Thesis”. Highlight every specific point where this new data contradicts our previous assumptions. Force a reconciliation.]

And then, naturally, include the shifts in your weekly “State of the Thesis.”  


3. Active Critiquing

An intellectual sparring partner must be allowed to throw punches. Be cautious: if you only feed it data that confirms your biases, the AI will happily build an echo chamber. Triangulation requires intentionally breaking the model's consensus.

The Action: Dedicate 20% of your daily research time to actively hunting for contradictory, fringe, or highly niche data that challenges the dominant narrative of your research, and force the AI to grapple with it.

The Routine: A "Red Team" injection.

The Prompt: [We have spent three weeks building a case file for <Concept X>. I want you to act as a hostile, highly skeptical peer reviewer. Imagine a critique from a dissenting academic. Try to critically break down the current thesis. Where is our argument most likely to fail peer review?] 


4. The Meta-Audit (Check your blind spots)

Eventually, your research process will settle into a rhythm, and that rhythm can create blind spots. The final step in long-term triangulation is stepping back to audit the process of the research, rather than just the facts.

The Action: Periodically ask the AI to evaluate the shape of the data it has been fed, looking for structural biases in the researcher's own search behavior.

The Routine: Do a monthly audit looking for gaps.

The Prompt: [Analyze the <N> sources we have processed in this thread over the last month. What academic disciplines, geographic regions, or ideological perspectives are entirely missing from our dataset? What search queries should I be running today to cover those blind spots?]
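The gap-hunting half of this audit can be approximated locally, too, if you tag each source you feed the AI. A minimal sketch follows; the discipline checklist is illustrative, not a standard taxonomy:

```python
# Tally logged sources by the discipline tag you assign them, and surface
# which expected perspectives are missing entirely. Checklist is illustrative.

from collections import Counter

EXPECTED_DISCIPLINES = {"history", "economics", "sociology", "geography", "law"}

def find_blind_spots(sources):
    """sources: list of (title, discipline) pairs you've logged this month.
    Returns (coverage counts, sorted list of missing disciplines)."""
    counts = Counter(discipline for _, discipline in sources)
    missing = sorted(EXPECTED_DISCIPLINES - set(counts))
    return counts, missing

logged = [
    ("Apple export ledgers, 1924", "economics"),   # hypothetical entries
    ("Rail freight history of WA", "history"),
    ("County yield reports", "economics"),
]

counts, missing = find_blind_spots(logged)
print("Coverage:", dict(counts))
print("Missing perspectives:", missing)
```

Run against your real source log, the "missing" list becomes the seed for the blind-spot search queries the prompt above asks the AI to suggest.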


By structuring your workflow this way, you come to realize that real research in the AI era isn't about getting the machine to write the final paper or asking the cleverest prompt--it’s about building a system of continuous, productive friction that allows both the human and the machine to think harder.

Keep searching.  Keep the friction.  


 

Friday, March 6, 2026

SearchResearch (3/6/26): Why you STILL need to know how to search... perhaps more than ever.

 It's been an interesting few weeks.  


Surprise! I found my stone twin hiding in an architectural sculpture at Yale. 

I was there in February to give a lecture on Human-Centered AI, during one of the colder Februaries on record, with temperatures dropping to -5°F (-15°C). Fortunately, after years of living in upstate New York, I own the right kind of clothing.

But, as Regular SRS Readers know, I’ve also been writing about the changes underway at the intersection of AI and search. And there are a few things we need to keep in mind...


1. Search behavior is evolving rapidly, with several key changes:

Increased Complexity: Queries are becoming longer and more complex, with a significant rise in conversational and long-tail queries. Not a surprise, really, but it suggests that more people are shifting to an AI model of search (and are tackling more complex search tasks). 

Visual Searches: Visual searches have grown significantly, with a 65% year-over-year increase, with more than 100 billion visual searches already this year (2026). 

Multimodal Searches: Users are embracing new ways to search, including voice, text, circle, scribble, and humming.  What's interesting is that they're combining different inputs like images and text. You know video isn't far behind. 

AI-Driven Searches: AI is driving an explosion of complex and long-tail (i.e., rare) searches, with AI Mode users asking much longer questions, sometimes as much as 2-3 times the length of traditional searches.

We're really not in the Kansas of Search any longer.  


2.  All of the metrics point to one thing: people want visual, browsable, and snackable search experiences:

That's fine, but we also live in a world where it is now 1,000x cheaper to generate a plausible hypothesis than it is to verify it by looking up and wading through rigorous evidence. A desire for more "snackable" presentation of results isn't going to encourage deeper research and careful analysis.  

In fields like sociology or history, where nuance is everything, AI 'hallucinations' are becoming more sophisticated and harder to spot.

Research is shifting from a labor-intensive process to a judgment-intensive one.

The advent of 'Deep Research' agents means we can now summarize 500 papers in 5 minutes. This doesn't make research easier; it makes it harder. It moves the bottleneck from information gathering to critical evaluation.  How do we build trust in the 'market of ideas' when the primary tool for research is also the primary generator of high-quality misinformation, much of which is inherently snack-food for the mind?


3. Intellectual Vertigo: 

We are currently living through a moment of collective intellectual vertigo. For the last twenty years, search was an act of retrieval. We were librarians looking for a book on a shelf. Today, search has become an act of synthesis. We ask a question, and a machine doesn't just find the book; it reads ten books, summarizes them, and hands us a neat, three-paragraph answer, as if from a vertiginous height.  

The AI magic is so good that it creates a dangerous illusion: that the labor of research has been eliminated. But in reality, the labor hasn't disappeared—it has shifted.

In the age of AI, the quality of your answer is strictly capped by the quality of your query. If you ask a "lazy" question—"What is the consensus on climate migration?"—the AI will give you the most probable, middle-of-the-road, and often outdated "average" of the internet. As Ted Chiang brilliantly wrote, AI answers are often like a blurry JPEG of the internet.  Be Careful.  

Knowing how to search now means knowing how to probe the AI model to bypass that "average." 

It means knowing how to construct searches that force the AI to look at the edges of a field, to find the dissenters, and to cite the specific data that doesn't fit the neat narrative. If you don't know how to search, you are essentially letting an algorithm decide the boundaries of your world.

AI provides a beautiful map of human knowledge. But as any researcher knows, the map is not the territory. The actual "territory" is the messy, footnote-heavy, peer-reviewed primary sources.

When we lose the skill of searching—the ability to find the original source, to verify the DOI, to check the methodology of the paper being cited—we lose our connection to the territory. We become "Map-Readers" who are vulnerable to every hallucination and every bias baked into the system. Knowing how to search is the only way to verify that the ground beneath our feet is actually solid.


4. What do we do?  

It's not going to be a surprise, but we need to develop new research habits that assume "hallucination by default" and use adversarial validation techniques.

To move beyond simple "Q&A" and into high-quality AI searching, you have to treat the AI as a sophisticated but fallible partner. Practicing critical analysis and adversarial methods ensures you are extracting the most accurate information while guarding against "hallucinations" or biased patterns.

Here are three practical ways to level up your search game:

A. The "Devil’s Advocate" Cross-Examination

Instead of asking for the truth, ask the AI to defend a counter-intuitive or unpopular position. This forces the model to bypass its "standard" consensus-based response and reveals the complexity of a topic. You might well discover something that nobody else has noticed. 

The Method: Once the AI provides an answer, ask it to: 

"Identify the three strongest arguments against the conclusion you just provided, citing specific potential data gaps."

Why it works: It breaks the "echo chamber" effect. If the AI struggles to find counter-arguments, you know you need to switch to a different search tool to find dissenting views.

Adversarial Twist: Tell the AI: "I believe [X] is true. Your job is to convince me I am wrong using only verifiable historical or scientific data."

B. The "Triangulation & Source Stress-Test"

High-quality searching involves verifying that the results the AI gives you aren't just "sounding" right. You can use adversarial prompting to make the AI audit its own logic.

The Method: After an AI search, use a Multi-Step Verification prompt:

"Summarize the consensus on [Topic]."

"Now, provide the names of three specific experts or organizations that would disagree with that summary."

"Explain why those experts might claim your previous summary is oversimplified."

Why it works: It forces the AI to look for "friction" in the data rather than just the smoothest path to an answer.  (And as my Yale students will attest, you need friction in the intellectual work you're doing.  If you're just gliding along, you're probably not learning anything.)

C. The "Constraint-Based Fact-Checking" (Adversarial Prompting)

This method involves setting "traps" or strict rules to see if the AI relies on generic templates rather than actual search data.

The Method: Use a Negative Constraint prompt: 

"Explain the impact of [Event/Policy], but do not use any information or talking points that appeared in major news headlines in the last 48 hours. Focus only on academic or niche industry-specific data."

Why it works: By forbidding the most "obvious" or "available" information (that is, working to avoid the Availability Bias effect), you force the AI to dig deeper into its training data, or to actively query live search results, for more nuanced, less-discussed facts.

Practical Tip: Ask the AI to: "Compare these two perspectives and point out any logical fallacies present in either side."
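If you talk to an AI through an API or a script rather than a chat window, the three methods above can be turned into reusable prompt templates. The sketch below only builds the prompt strings (the actual call to a model is left out, since every provider's client library differs); the function names and example topic are my own illustration, not part of any particular tool.

```python
# Sketch: the adversarial prompts from methods A-C as reusable
# templates. No model calls here -- just the prompt text, which
# you'd pass to whatever AI client you use.

def devils_advocate(conclusion: str) -> str:
    """Method A: ask for the strongest counter-arguments."""
    return (f"Identify the three strongest arguments against this "
            f"conclusion: '{conclusion}', citing specific potential "
            f"data gaps.")

def triangulate(topic: str) -> list[str]:
    """Method B: multi-step verification, sent one prompt at a time."""
    return [
        f"Summarize the consensus on {topic}.",
        "Now, provide the names of three specific experts or "
        "organizations that would disagree with that summary.",
        "Explain why those experts might claim your previous "
        "summary is oversimplified.",
    ]

def negative_constraint(event: str) -> str:
    """Method C: forbid the most 'available' information."""
    return (f"Explain the impact of {event}, but do not use any "
            f"information or talking points that appeared in major news "
            f"headlines in the last 48 hours. Focus only on academic or "
            f"niche industry-specific data.")

# Example: build the method-B chain for a research topic.
for prompt in triangulate("coastal erosion in California"):
    print(prompt)
```

The point of scripting it isn't automation for its own sake--it's that writing the prompts down as templates forces you to be deliberate about the adversarial step instead of skipping it when you're in a hurry.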


Face it, we aren't librarians anymore; we are navigators in a high-speed, synthetic storm, and the rain isn't letting up. 

Knowing how to search is no longer a technical skill you delegate to a junior researcher. It is the primary way we exercise our agency. It is how we ensure that as the machines get smarter, the humans don't get lazier. Don't be lazy; look for the friction.  

In 2026, the most powerful person in the room isn't the one with the best AI; it’s the one who knows how to ask the question that the AI didn't expect.

So... Keep searching!  



Thursday, February 19, 2026

SearchResearch (2/19/26): Your path to deeper reading with AI tools

Reading tools have been around... 
A scholar at work. Not a self-portrait, but a nice example of how I see myself at work.
A bit of architectural sculpture found in the Sterling Library at Yale.


... for a long time. For years, I kept a well-thumbed dictionary close at hand so I could look up all those words I didn't quite know, or was slightly uncertain about.  (That's how you learn that a word like "peruse" is a contronym, a word with two opposite definitions.  The original meaning was "to read very carefully," but it has come to also mean the opposite: "to skim over lightly.")

My dictionary led me to understand what words really mean--like polynya (a non-linear opening in the ice pack), or sprezzatura (an Italian word that refers to a kind of effortless grace), or Rückenfigur (an image composition where a person's back is included in the scene, facing out to the view rather than at the viewer).

Ever since smartphones became ubiquitous, I've read with a phone nearby for much the same reason: to look up things along the way.  I really like being able to instantly look something up and get as much detail as I need, often with figures included.




We can now extend this habit to include asking your favorite AI questions about the book you're reading, asking questions that are really difficult to search for with "classic" Googling.  

For example, I'm currently reading The Dark Forest by Cixin Liu, Part 2 of the "Remembrance of Earth's Past" trilogy.  It's a fun read, but I read Part 1 (The Three-Body Problem) early last year.  That was a big book, and Part 2 is also a big book that's very dense with ideas and substory lines.

(Spoiler warning: A detail is discussed below that you might want to skip if you're planning on reading the trilogy. Skip to the "Caution" below.)  

After a couple hundred pages, I realized that an important plot point is that the Trisolarians have, as a key part of their invasion strategy, managed to block all important physics research taking place on Earth.  But for the life of me, I could not remember HOW they managed to accomplish this.

To make things worse, I also managed to lose/misplace my copy of volume 1. Ugh. Now what?  I didn't want to read the Wikipedia page on the book as it might well contain spoilers.  

Then I realized I could ask my AI buddy this question and I'd probably get a decent answer.  So I whipped out my phone and asked this question:



This is exactly what I needed to restore my memory about what happened in Book 1.  

Note that I was careful to ask a fairly specific question, not anything that might reveal upcoming plot points.  

Caution:  A VERY important skill to develop is the ability to NOT get sucked down the rabbit hole.  Yes, I know that clickbaity thing just demands to be checked-out, but don't do it.  Don't turn a lovely, engaging, wonderful reading experience into endless hours of slop-content reading.  

Hallucinations?  Maybe, but I find that the questions I ask of an AI while reading tend to be fairly specific ("what's that?" or "when did this happen?" or "what's the connection between Person 1 and Person 2?"), so the probability of hallucination is much lower.  Usually my while-reading questions are an easy RAG ("Retrieval Augmented Generation") task, and those tend to have fewer errors.
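(For the curious: "easy RAG" means the system first retrieves a passage that contains the answer, then generates its reply from that passage, rather than answering from memory alone.  Here's a deliberately toy sketch of the retrieval half, scoring passages by simple word overlap--real systems use vector embeddings, and the mini-corpus below is invented purely for illustration.)

```python
# Toy retrieval step of RAG: score candidate passages by how many
# words they share with the question, and return the best match.
# Real systems use embeddings; this corpus is made up for the example.

def retrieve(question: str, passages: list[str]) -> str:
    q_words = set(question.lower().split())
    return max(passages,
               key=lambda p: len(q_words & set(p.lower().split())))

corpus = [
    "Paris is the capital of France.",
    "The Nile is the longest river in Africa.",
    "Mount Everest is the tallest mountain on Earth.",
]

best = retrieve("What is the capital of France?", corpus)
print(best)  # -> "Paris is the capital of France."
```

When the question is specific enough that one passage clearly wins the match, the generation step has very little room to invent things--which is why narrow while-reading questions are relatively safe.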

In the early smartphone days, I would use my phone as a dictionary.  Then, as Wikipedia became easily available, I could look up specific topics (while being careful to avoid spoilers).

Now I can ask fairly sophisticated questions of my AI buddy... and that's the way I think of it. As Ethan Mollick points out in his book Co-Intelligence, a very reasonable mental model is to consider an AI as a colleague, one who can answer questions about your work project.  In this case, my project is to read and understand a book.  




That's a useful bit of background.  

Or, while reading a scholarly article on The Rise and Fall of Plains Indian Horse Cultures, I could ask a question like this (because the author assumed that the reader would know this information implicitly--I am not his target audience):  


I have to admit that I didn't know what the Arkansas Basin was, including that it was huge--so this summary was great background material for me to read.  

Reading has always been about more than just sitting with the text on the page--good readers have always used external sources to amplify and enrich their understanding. Now, it's easier than ever.  Hope you take advantage.  


SearchResearch Lessons

There's one big lesson here... I now make it a habit to co-read with an AI partner, not to summarize, but to enhance my reading by giving me important background that I don't have.  I rely on the AI partner to answer questions about the material that I never understood in the first place, or to give my memory a boost... especially when reading long texts... especially when subsequent books are read years apart.  

I'm looking forward to re-reading (for the 4th time) the entire Lord of the Rings epic series... this time with AI augmentation.  (I know who Tom Bombadil is, but who is Gildor Inglorion?)

This time, Gemini can be my intelligent vademecum and fill me in on the backstory.    

Keep searching.