
Hi, this is Ray.

Quick question to start. When you finish a study session, how do you know whether it went well?

I'd guess most of you answered with some version of "I just kind of feel it." It felt productive. It felt focused. You covered a lot of ground. The material seemed to make sense as you read it. You'd give yourself, what, a B+ for the session?

Now here's the uncomfortable thing the research has been telling us for about 50 years: that gut feeling is wildly unreliable. It's not just slightly off. It's systematically biased in specific, predictable ways that actively work against your learning. The sessions that FEEL the best are often the ones that produce the LEAST durable learning. And the sessions that feel hard, frustrating, and slow are often the ones doing the most actual work.

I learned this the embarrassing way. For years I'd finish a "great" study session feeling confident I'd absorbed the material, only to discover a week later that almost none of it had stuck. Meanwhile the sessions where I felt stupid and slow turned out to be the ones I remembered. Eventually I started actually tracking what I was doing (the sessions, the techniques, the outcomes) and the data made my self-perceptions look like complete fiction. The version of me that could be honest about my learning, with actual numbers, was a much more effective learner than the version of me running on vibes.

Today's newsletter is about how to become your own little learning lab. The science says self-monitoring genuinely works (I'll get to the data) but only if you track the right things. Track the wrong things and you'll just confirm your existing biases with more confidence. Track the right things and you'll surface patterns you didn't know existed. Let's get into it.

Why Your Gut Feeling About Your Learning Is Wrong

Let me lay this out, because it's the foundation everything else rests on. The research on metacognitive accuracy (how well you can judge your own learning) is genuinely brutal. Most people are bad at this. Not slightly bad. Really bad.

A study comparing online and offline measures of metacognition in university students found something striking. According to the researchers, no significant correlation existed between online monitoring judgments (real-time confidence ratings) and offline self-reports (general beliefs about your own metacognition), even when the offline measure was domain-specific… but online measures were strongly related to actual task performance, while offline measures were not. Translation: people's general beliefs about how good they are at studying have almost no relationship to how well they actually study. The "I'm a fast learner" or "I'm bad at math" stories you've been telling yourself for years don't predict your actual performance. Real-time monitoring during the task does. Stories don't. Data does.

This is why the "vibes" approach to studying fails so consistently. As one summary of self-regulated learning research put it, students' default study strategies can create a false sense of fluency during learning, and we need to break students' bad study habits… the first step is to build a knowledgeable student who understands appropriate strategies, and the next is to motivate them to use effortful strategies even when they perceive the costs as high. Note the phrase "false sense of fluency." When you reread your notes and the material feels familiar, that familiarity feels like learning. It isn't. It's just familiarity. Your brain is fooling you. The only way to break this trap is to introduce external measurement that doesn't lie.

One editor for writers, developers, and agents

Your docs have more contributors than ever. Engineers, PMs, support, marketing, and now AI agents. But most documentation tools force a choice: an accessible editor for the whole team, or the rigor of git-based version control for developers. That tradeoff slows everyone down.

Mintlify's editor removes the tradeoff. Writers get a visual WYSIWYG experience with slash commands and editable navigation. Developers keep their git-native workflow. Every visual edit is a clean commit, every commit appears in the editor. Changes flow both ways.

The editor also brings live collaboration and AI agents as first-class contributors:

  • WYSIWYG editing with no markdown syntax required

  • Real-time multiplayer for war room-style doc sessions

  • MCP support so your AI can edit alongside your team

  • Two-way git sync that preserves a single source of truth

The best documentation is written by everyone who has context. That's your whole team. And now, your agents. Try it at mintlify.com.

The Self-Monitoring Effect (It's Real and Substantial)

Now the encouraging news. When learners actually start tracking what they're doing, performance improves measurably.

A meta-analysis of 36 experimental studies covering over 2,600 students examined exactly this. According to the researchers, self-monitoring intervention had positive, moderate effects on strategy use (Hedges' g = 0.38) and academic performance (Hedges' g = 0.47), and the effects were greater when self-monitoring was embedded into a multi-component intervention. An effect size of 0.47 is substantial by education-research standards; plenty of celebrated teaching techniques clear a lower bar. And it's the effect of just paying attention to what you're doing while you study, and writing some of it down.
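For the statistically curious, Hedges' g is just the difference between two group means divided by their pooled standard deviation, with a small-sample correction applied. A minimal sketch with invented numbers (not the meta-analysis's actual data):

```python
import math

def hedges_g(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across both groups
    sd_pooled = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    d = (mean_treat - mean_ctrl) / sd_pooled            # Cohen's d
    correction = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)   # Hedges' correction
    return d * correction

# Illustrative only: a self-monitoring group scoring 75 vs. a control's 68,
# with SD 15 in both groups, gives roughly the g = 0.47 reported above.
g = hedges_g(75.0, 68.0, 15.0, 15.0, 100, 100)
```

In plain terms, g = 0.47 means the average tracked student outperformed the average untracked student by about half a standard deviation.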

Why does this work? The mechanism is fascinating. Once you start tracking your studying, you do two things simultaneously. First, you start collecting actual data instead of running on biased self-perception. Second, the act of tracking ITSELF changes behavior, because nobody wants to write down a session they're embarrassed by. The two effects compound. You learn what's actually working, AND you start doing more of what works because you're watching yourself.

A separate study on metacognitive monitoring summarized the cycle elegantly: metacognitive learners often monitor their performance and subsequently control their learning… for instance, a student learning to read may monitor their performance, deem it inadequate, and control it by deciding to practice 30 minutes longer each day. Monitoring informs control. Without the monitoring, the control is just guessing. With the monitoring, the control becomes targeted. You're no longer "studying more." You're studying differently in specific, measured ways. That's the whole upgrade.

What to Actually Track (The Useful Metrics)

Here's where most "track your studying" advice goes wrong. People start tracking everything: hours studied, pages read, problems completed… and end up with a giant spreadsheet that confirms they're working hard without telling them whether the work is producing results. Hours of input is a vanity metric. Output and retention are the metrics that matter.

Here are the categories I've found actually move the needle, with what to track in each.

Category 1: Retention Tests (The Truth Detector)

This is the most important category. Without retention data, everything else is theater. The fix is simple: regularly test yourself on material you studied days or weeks ago, not what you just studied. Track the percentage you actually remember.

The implementation: every Sunday, run a 15-20 minute self-quiz on material from the previous week or two. Use active recall: write down what you remember before checking your notes. Track the percentage correct. Over time, this gives you a real signal of how much is actually sticking.

The shocking part, when you start doing this, is how much LESS you retain than you thought. The first few weeks are humbling. The good news is the metric responds quickly to changes in technique. Switch from rereading to active recall, and the retention numbers jump within a few weeks. Add spaced repetition, and they jump again. The data tells you what's working.
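To make the retention metric concrete, here's a minimal sketch of what a weekly quiz log might look like. The field names and numbers are invented for illustration; the real thing can live in a phone note:

```python
# Hypothetical weekly self-quiz log: one entry per Sunday quiz.
quiz_log = [
    {"week": 1, "items_tested": 20, "items_recalled": 9},   # rereading only
    {"week": 2, "items_tested": 20, "items_recalled": 10},
    {"week": 3, "items_tested": 20, "items_recalled": 14},  # switched to active recall
    {"week": 4, "items_tested": 20, "items_recalled": 15},
]

def retention_pct(entry):
    """Percentage of previously studied items actually recalled."""
    return 100.0 * entry["items_recalled"] / entry["items_tested"]

# The week-over-week trend is the signal: did a technique change move it?
trend = [retention_pct(e) for e in quiz_log]
```

The point of the trend list is exactly the jump described above: a visible step up in the weeks after a technique change.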

Category 2: Session Quality (Not Quantity)

Hours studied tells you almost nothing useful. Hours of GENUINELY focused study tells you something important. The difference matters.

What to track: at the end of each session, rate it 1-5 on focus quality (1 = mostly distracted, 5 = locked in throughout). Also note the session length, the time of day, what you did right before, and what conditions you were in. Did you walk before? Eat? Sleep well last night? Have caffeine? Were you in your usual study spot?

Over a few weeks of this, patterns emerge. You'll discover things like "my afternoon sessions are consistently a 2 unless I go for a walk first, in which case they're a 4." Or "Sundays are unusable for me, I should just rest then." Or "anything after my third coffee is fake productivity." These patterns are personal, often surprising, and only visible when you actually write them down. Without tracking, you can't see them. You're just running on a vague sense that "studying is hard sometimes."
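Once you have a few weeks of entries, the pattern-finding is just grouping and averaging. A sketch with hypothetical sessions and condition tags of my own invention:

```python
from collections import defaultdict

# Hypothetical session log: focus rated 1-5, plus a couple of condition tags.
sessions = [
    {"time": "morning",   "walked_first": True,  "focus": 4},
    {"time": "afternoon", "walked_first": False, "focus": 2},
    {"time": "afternoon", "walked_first": True,  "focus": 4},
    {"time": "afternoon", "walked_first": False, "focus": 2},
    {"time": "morning",   "walked_first": False, "focus": 3},
]

def average_focus_by(sessions, key):
    """Average focus rating grouped by one condition (e.g. time of day)."""
    buckets = defaultdict(list)
    for s in sessions:
        buckets[s[key]].append(s["focus"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

# "My sessions average a 4 after a walk, barely above 2 without one."
by_walk = average_focus_by(sessions, "walked_first")
```

Grouping by different keys ("time", "walked_first", and so on) is how the surprising personal patterns surface from what otherwise looks like noise.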

Category 3: Confidence vs. Performance Calibration

This is the most useful one for breaking the false-fluency trap I mentioned earlier. The technique: before any quiz or test, predict your score. Then take the quiz. Compare predicted to actual.

Almost everyone, when they start doing this, discovers their predictions are systematically off. Most people overestimate. Some people consistently underestimate. The pattern itself is useful. If you're over by 20%, you've quantified your false-fluency bias. Now you can adjust. "I think I know this material, which based on my track record means I probably know about 60% of what I think I know. Better study more before the exam." The calibration improves over time as you track it. You become a more accurate self-judge of your own learning. As one study on metacognitive monitoring noted, monitoring prompts (like "how well do you think you have learned the material?") can support learning because they force the learner to evaluate their own state, which subsequently informs strategic decisions. Forcing the prediction is itself a learning intervention.
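The calibration gap itself is the simplest possible arithmetic: the average of (predicted minus actual) across your quizzes. A sketch with invented scores:

```python
# Hypothetical prediction log: predicted vs. actual score per quiz, in percent.
predictions = [
    {"predicted": 85, "actual": 62},
    {"predicted": 90, "actual": 71},
    {"predicted": 80, "actual": 65},
]

def calibration_bias(log):
    """Average gap between predicted and actual scores.

    Positive means systematic overconfidence; negative means
    you consistently underestimate yourself.
    """
    gaps = [p["predicted"] - p["actual"] for p in log]
    return sum(gaps) / len(gaps)

bias = calibration_bias(predictions)
```

A bias of +19 points, as in this made-up log, is exactly the quantified false-fluency discount described above: mentally subtract it from your next confident self-assessment.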

Category 4: Time-to-Mastery Per Concept

For each significant concept or skill in what you're learning, roughly track how long it took to actually master it (defined as: you can explain it clearly, you can apply it without help, you remember it a week later).

This metric reveals which topics are giving you trouble and which are coming easily. Over time, it also reveals patterns in HOW you learn best. Concepts you mastered fastest were probably ones where you used the right combination of techniques. Concepts that took forever were probably attempted with mediocre techniques. The data points at what's working for you specifically, not in general.

Category 5: External Indicators

Some of the best learning data comes from outside you. If you can get them, track:

  • Test scores or quiz results, including any practice tests

  • Feedback from teachers, mentors, or peers (specific, written down, dated)

  • Real-world application moments where you tried to USE what you learned and saw what worked or didn't

  • For language learners: time to complete a conversation in your target language, or comprehension percentage on a test

External indicators are the truth-tellers. They don't care about your feelings about your studying. They just measure outcomes. Always weight them more heavily than your internal sense of how things are going.

The Tracking System (Keep It Stupid Simple)

Here's the actual system I use, after experimenting with elaborate spreadsheets, dedicated apps, and complex frameworks. The system that survived: stupidly simple.

A single note on my phone or in a notebook. At the end of each study session, I write three lines:

  1. What I did (topic, technique, duration)

  2. Session quality (1-5, plus one sentence on conditions)

  3. What I'll test next week (which concept I want to verify retention on)

Once a week, in 20 minutes:

  1. Run a self-quiz on items from "test next week" entries

  2. Write down the actual retention percentage

  3. Look at the patterns from the past week's session quality entries

  4. Adjust ONE thing for the coming week based on what I see

That's it. Total weekly time investment: maybe 30 minutes including the daily entries. The information density is wildly higher than the elaborate systems I tried before. Simple beats complicated, in tracking as in everything else.
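If you keep the note as plain text, even the weekly roll-up stays simple. A rough sketch; the pipe-separated line format here is my own invention, not anything standard:

```python
# One line per session: topic | technique | minutes | focus (1-5)
log_text = """\
spanish verbs | active recall | 45 | 4
spanish verbs | rereading | 60 | 2
stats ch. 3 | practice problems | 50 | 4
stats ch. 3 | rereading | 40 | 2
"""

def weekly_summary(text):
    """Average focus per technique, to spot what's actually working."""
    by_technique = {}
    for line in text.strip().splitlines():
        _topic, technique, _minutes, focus = [f.strip() for f in line.split("|")]
        by_technique.setdefault(technique, []).append(int(focus))
    return {t: sum(v) / len(v) for t, v in by_technique.items()}

summary = weekly_summary(log_text)
```

A dozen lines, and the weekly review question ("which technique is earning its time?") answers itself. But a pen and twenty minutes work just as well; the format matters far less than the habit.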

The single most important rule: don't let the tracking become more work than the studying. The moment your metrics system requires significant effort to maintain, you'll abandon it. The system has to be light enough that you actually do it. Three lines a day plus a weekly check-in is the most I've found I can sustain; beyond that, it falls apart.

What to Actually Do With the Data

Tracking without action is journaling. Action without tracking is guessing. The combination is where the magic happens. Here's the loop:

Weekly: Look at the data. Identify the ONE biggest signal. Maybe it's "my afternoon sessions are useless." Maybe it's "the topic I covered three times still has 30% retention." Maybe it's "I feel confident going into self-quizzes and consistently underperform my prediction by 15%." Pick the one biggest signal.

Adjust ONE variable. Not five. One. Move afternoon sessions to morning. Add active recall to the topic with poor retention. Spend more time studying material I'm overconfident about. The single-variable change is critical because if you change five things at once and your scores improve, you have no idea which change actually helped. Change one. Measure for a week or two. Then change another.

Keep what works. Drop what doesn't. Over time, your study system gets better because you've actually figured out what helps YOUR specific brain learn. Not what some YouTube productivity person says. Not what worked for your friend. What works for you, in your context, with your material. The data tells you. You just have to listen.

The Bigger Lesson

Here's what I want you to take from all this. Most of us study based on a combination of habits inherited from school, advice we read somewhere, and vague feelings about whether something is working. None of these are reliable. The result is hours of effort producing significantly less learning than the same hours could produce with even rough self-monitoring.

The fix isn't a new app, a new technique, or a new study schedule. The fix is paying attention to what you're actually doing and what's actually happening as a result. When you start treating your own learning as something you can study (with curiosity, with data, with honest measurement) you stop wasting hours on what doesn't work and start focusing on what does. The compounding over months and years is significant.

You don't need to be a quantified-self obsessive. You don't need a complicated system. You need three lines a day and a weekly check-in. That's enough to surface patterns you'd never see otherwise, and to start making small adjustments that compound into substantially better learning over time.

The research is clear. The effect size is real. The implementation is genuinely simple. Most learners just never start, because tracking feels nerdy and writing down "I studied for 90 minutes today" feels embarrassing. Get over the embarrassment. The data will repay you many times over.

Be your own little learning lab. The findings will surprise you.

Keep learning (and keep tracking),

Ray
